Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.
Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M
2015-10-01
Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated identical image quality with all methods. Of the sections containing metal, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Thirty-three percent of the sections had poor image quality with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction. Thirteen percent of the sections with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction were of moderate image quality; 16% of the sections with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction were of good image quality; and 1% of the sections with LIMAR and 31% with iterative metal artifact reduction were of excellent image quality. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, M; Baek, J
2016-06-15
Purpose: To investigate the slice-direction-dependent detectability in cone beam CT images with anatomical background. Methods: We generated 3D anatomical background images using a breast anatomy model. To generate the 3D breast anatomy, we filtered 3D Gaussian noise with the square root of a 1/f³ power spectrum and then assigned the attenuation coefficients of glandular (0.8 cm⁻¹) and adipose (0.46 cm⁻¹) tissue based on voxel values. Projections were acquired by forward projection, and quantum noise was added to the projection data. The projection data were reconstructed by the FDK algorithm. We compared the detectability of a 3 mm spherical signal in the image reconstructed with four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4). We computed the task SNR of the spherical signal in transverse and longitudinal planes using a channelized Hotelling observer with Laguerre-Gauss channels. Results: The transverse plane has similar task SNR values for the different backprojection methods, while the longitudinal plane has a maximum task SNR value for RECON1. For all backprojection methods, the longitudinal plane has higher task SNR than the transverse plane. Conclusion: In this work, we investigated detectability for different slice directions in cone beam CT images with anatomical background. The longitudinal plane has a higher task SNR than the transverse plane, and backprojection with the Hanning-weighted ramp filter and linear interpolation (i.e., RECON1) produced the highest task SNR among the four backprojection methods. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Programs (IITP-2015-R0346-15-1008) supervised by the IITP (Institute for Information & Communications Technology Promotion), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the MSIP (2015R1C1A1A01052268), and the framework of international cooperation program managed by NRF (NRF-2015K2A1A2067635).
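The task-SNR figure of merit used here is standard: channelize each region of interest with a small set of Laguerre-Gauss templates, then form the Hotelling SNR from the mean channel-output difference and the pooled channel covariance. Below is a minimal Python sketch of that computation; the helper names (`lg_channels`, `cho_task_snr`), the channel width, and the toy disc signal are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lg_channels(n_channels, width, size):
    """2D Laguerre-Gauss channel templates, one flattened template per column."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = 2.0 * np.pi * (x ** 2 + y ** 2) / width ** 2   # 2*pi*r^2 / a^2
    lag = [np.ones_like(r2), 1.0 - r2]                   # Laguerre L0, L1
    for n in range(2, n_channels):                       # Laguerre recurrence
        lag.append(((2 * n - 1 - r2) * lag[n - 1] - (n - 1) * lag[n - 2]) / n)
    chans = []
    for n in range(n_channels):
        u = np.exp(-r2 / 2.0) * lag[n]                   # Gaussian * Laguerre
        chans.append((u / np.linalg.norm(u)).ravel())
    return np.stack(chans, axis=1)

def cho_task_snr(present, absent, channels):
    """Hotelling SNR in channel space from signal-present/absent ROI stacks."""
    vp = present.reshape(len(present), -1) @ channels
    va = absent.reshape(len(absent), -1) @ channels
    dv = vp.mean(axis=0) - va.mean(axis=0)               # mean channel difference
    S = 0.5 * (np.cov(vp.T) + np.cov(va.T))              # pooled channel covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Toy usage: a faint disc in white noise (stand-in for the 3 mm sphere)
rng = np.random.default_rng(0)
size = 32
y, x = np.mgrid[:size, :size] - size // 2
disc = 0.5 * ((x ** 2 + y ** 2) < 16).astype(float)
absent = rng.normal(0.0, 1.0, (200, size, size))
present = absent + disc
print(f"task SNR: {cho_task_snr(present, absent, lg_channels(5, 8.0, size)):.2f}")
```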
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation (TV) regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and the true image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values than the standard FBP method.
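For intuition, the sketch below contrasts the two regularization routes on a toy parallel-beam problem: a Hann-apodized ramp filter versus plain ramp FBP followed by a TV denoising step, scored with the cross-correlation coefficient used in the paper. This is only a plausible stand-in (the paper builds TV into the FBP formulation itself, whereas here TV is applied as post-processing); the phantom, noise level, and TV weight are arbitrary assumptions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)    # stand-in tracer distribution
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sino = radon(image, theta=theta)
sino += np.random.default_rng(1).normal(0.0, 0.5, sino.shape)

# Standard regularization: apodize the ramp filter with a Hann window
fbp_hann = iradon(sino, theta=theta, filter_name='hann')

# TV alternative: plain ramp FBP followed by a TV denoising step
fbp_tv = denoise_tv_chambolle(iradon(sino, theta=theta, filter_name='ramp'),
                              weight=0.05)

def xcorr(a, b):
    """Cross-correlation coefficient, the paper's figure of merit."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

print(f"Hann-apodized FBP:  {xcorr(image, fbp_hann):.4f}")
print(f"TV-regularized FBP: {xcorr(image, fbp_tv):.4f}")
```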
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
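The abstract pins down only the structure of the method: a single windowed FBP supplies the data fidelity, and repeated nonlinear filtering supplies the edge-preserving constraint, with no forward projection. The sketch below mirrors that structure in Python, with a bilateral filter standing in for the unspecified nonlinearity; the window choice, filter parameters, and iteration count are assumptions, not the paper's algorithm.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_bilateral
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)
sino += np.random.default_rng(0).normal(0.0, 0.3, sino.shape)

# Data fidelity enforced once, by a windowed (apodized) FBP
recon = iradon(sino, theta=theta, filter_name='hamming')
recon = (recon - recon.min()) / (recon.max() - recon.min())  # bilateral expects [0, 1]

# "Segments" of nonlinear edge-preserving filtering replace further iterations
for _ in range(3):
    recon = denoise_bilateral(recon, sigma_color=0.05, sigma_spatial=2)
```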
Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter
2014-12-29
Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter to phase-contrast data before or after filtered back-projection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered back-projection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Velten, Andreas
2014-11-01
We discuss new approaches to analyzing laser-gated viewing data for non-line-of-sight vision, with a frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches used time transients for each pixel, our method calculates the projection of imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelliccia, Daniele; Vaz, Raquel; Svalbe, Imants
X-ray imaging of soft tissue is made difficult by its low absorbance. The use of x-ray phase imaging and tomography can significantly enhance the detection of these tissues, and several approaches have been proposed to this end. Methods such as analyzer-based imaging or grating interferometry produce differential phase projections that can be used to reconstruct the 3D distribution of the sample refractive index. We report on the quantitative comparison of three different methods to obtain x-ray phase tomography with filtered back-projection from differential phase projections in the presence of noise. The three procedures represent different numerical approaches to solving the same mathematical problem, namely phase retrieval and filtered back-projection. It is found that obtaining individual phase projections and subsequently applying a conventional filtered back-projection algorithm produces the best results for noisy experimental data, when compared with other procedures based on the Hilbert transform. The algorithms are tested on simulated phantom data with added noise, and the predictions are confirmed by experimental data acquired using a grating interferometer. The experiment is performed on unstained adult zebrafish, an important model organism for biomedical studies. The method optimization described here allows resolution of weak soft-tissue features, such as muscle fibers.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter
Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.
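In offset-FOV geometries like this one, the projection-weighting step typically handles rays that are measured twice per rotation: the weights of a ray and its opposing ray must sum to one over the detector overlap. The function below sketches one common choice, a smooth sin² transition; the abstract does not give the authors' exact weighting, and the geometry numbers here are hypothetical.

```python
import numpy as np

def offset_detector_weight(u, overlap):
    """Redundancy weight for an offset detector with coordinate u (mm).

    Rays with |u| < overlap are measured twice per full rotation, so the
    weights of conjugate rays must satisfy w(u) + w(-u) = 1 in the band.
    """
    w = np.ones_like(u, dtype=float)
    band = np.abs(u) < overlap
    w[band] = np.sin(np.pi / 4.0 * (u[band] / overlap + 1.0)) ** 2
    w[u <= -overlap] = 0.0          # never measured on this side
    return w

u = np.linspace(-60.0, 200.0, 521)  # hypothetical detector coordinates (mm)
w = offset_detector_weight(u, overlap=50.0)

# Complementarity check on the overlap band
uu = np.linspace(-49.0, 49.0, 99)
assert np.allclose(offset_detector_weight(uu, 50.0)
                   + offset_detector_weight(-uu, 50.0), 1.0)
```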
Filtered back-projection algorithm for Compton telescopes
Gunter, Donald L [Lisle, IL
2008-03-18
A method for the conversion of Compton camera data into a 2D image of the incident-radiation flux on the celestial sphere includes detecting coincident gamma radiation flux arriving from various directions of a 2-sphere. These events are mapped by back-projection onto the 2-sphere to produce a convolution integral that is subsequently stereographically projected onto a 2-plane to produce a second convolution integral which is deconvolved by the Fourier method to produce an image that is then projected onto the 2-sphere.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One
2016-03-01
CT of pediatric phantoms can provide useful guidance for the optimization of knowledge-based iterative reconstruction CT. Our aim was to compare the radiation dose and image quality of CT images obtained at different radiation doses and reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed the CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using line pairs and a wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise ratio, contrast-to-noise ratio and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed image quality comparable to filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by the reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed image quality similar to routine-dose filtered back-projection obtained at 3.64 mSv (100%), as did half-dose iDose(4) obtained at 1.81 mSv (50%).
DOE Office of Scientific and Technical Information (OSTI.GOV)
PELT, DANIEL
2017-04-21
Small Python package to compute tomographic reconstructions using a reconstruction method published in: Pelt, D.M., & De Andrade, V. (2017). Improved tomographic reconstruction of large-scale real-world data by filter optimization. Advanced Structural and Chemical Imaging 2: 17; and Pelt, D. M., & Batenburg, K. J. (2015). Accurately approximating algebraic tomographic reconstruction by filtered backprojection. In Proceedings of The 13th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (pp. 158-161).
NASA Astrophysics Data System (ADS)
Fujii, M.
2017-07-01
Two variations of a depth-selective back-projection filter for functional near-infrared spectroscopy (fNIRS) systems are introduced. The filter comprises a depth-selective algorithm that uses inverse problems applied to an optically diffusive multilayer medium. In this study, simultaneous signal reconstruction of both superficial and deep tissue from fNIRS experiments of the human forehead using a prototype of a CW-NIRS system is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingold, E; Dave, J
2014-06-01
Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image-quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using three routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast-to-noise ratio (CNR), modulation transfer function (MTF), low-contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1-67% with iDose4 relative to FBP, while IMR improved CNR by 145-367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative model reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low-contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
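The CNR figure of merit here is the usual ROI statistic: the absolute difference between the mean of the target and the mean of the background, divided by the background standard deviation. A minimal sketch follows; the ROI positions, noise level, and the 6 HU contrast are toy assumptions.

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(ROI) - mean(bg)| / std(bg)."""
    return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

# Toy example: a 6 HU disc in uniform 5 HU noise
rng = np.random.default_rng(0)
img = rng.normal(0.0, 5.0, (128, 128))
yy, xx = np.mgrid[:128, :128]
target = (yy - 64) ** 2 + (xx - 64) ** 2 < 10 ** 2
img[target] += 6.0
background = (yy - 64) ** 2 + (xx - 20) ** 2 < 10 ** 2
print(f"CNR: {cnr(img, target, background):.2f}")
```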
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
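The recommended pipeline, backprojecting a kurtosis characteristic function and stacking over candidate source locations, reduces to delay-and-stack imaging. The sketch below shows that skeleton under simplifying assumptions (precomputed travel times, a tiny candidate grid, one synthetic impulsive arrival); the window length, grid, and noise model are all illustrative, not the study's configuration.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(traces, win):
    """Sliding-window kurtosis characteristic function, one row per station."""
    cf = np.zeros_like(traces)
    for i in range(win, traces.shape[1]):
        cf[:, i] = kurtosis(traces[:, i - win:i], axis=1)
    return np.clip(cf, 0.0, None)        # keep impulsive-onset peaks only

def backproject(cf, travel_times, dt):
    """Delay-and-stack cf over candidate source nodes.

    travel_times: (n_nodes, n_stations) predicted travel times in seconds.
    Returns an (n_nodes, n_times) stack whose maxima flag detections.
    """
    n_nodes, n_sta = travel_times.shape
    n_t = cf.shape[1]
    shifts = np.round(travel_times / dt).astype(int)
    stack = np.zeros((n_nodes, n_t))
    for k in range(n_nodes):
        for s in range(n_sta):
            stack[k, : n_t - shifts[k, s]] += cf[s, shifts[k, s]:]
    return stack / n_sta

# Toy usage: one impulsive event at node 0, origin time 5 s, 4 stations
rng = np.random.default_rng(0)
dt, n_t = 0.01, 2000
traces = rng.normal(0.0, 1.0, (4, n_t))
tt = np.array([[1.0, 1.2, 1.5, 1.7],     # node 0 travel times (s)
               [0.5, 0.9, 1.1, 2.0]])    # node 1 travel times (s)
for s in range(4):
    traces[s, int((5.0 + tt[0, s]) / dt)] += 20.0
stack = backproject(kurtosis_cf(traces, win=50), tt, dt)
node, t_idx = np.unravel_index(stack.argmax(), stack.shape)
print(f"detected node {node} near origin time {t_idx * dt:.2f} s")
```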
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.
Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2015-11-01
The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of spatial features and reduce cone-beam optical CT artifacts.
Does thorax EIT image analysis depend on the image reconstruction method?
NASA Astrophysics Data System (ADS)
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether the analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences of the GI index based on different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with filtered back-projection and GREITC.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.
The goal of the investigation was to study the effectiveness of the corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) with no attenuation compensation, and the first-order Chang algorithm, an iterative filtered backprojection algorithm (IFBP), the weighted least square conjugate gradient algorithm and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
CT image reconstruction with half precision floating-point values.
Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc
2011-07-01
Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32-bit floating-point values (float), a recently standardized 16-bit floating-point value (half) is adopted for data representation in the image domain and in the raw-data domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of the quantization noise, which is caused by the reduction in the data precision of both raw data and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half-precision floating-point values make it possible to speed up CT image reconstruction without compromising image quality.
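The core claim, that half-precision quantization noise is negligible for filtered data, is easy to probe directly with NumPy. A minimal sketch (the sinogram size, value range, and filter are arbitrary assumptions):

```python
import numpy as np

# How much error does storing a ramp-filtered sinogram as 16-bit floats add?
rng = np.random.default_rng(0)
sino32 = rng.uniform(0.0, 10.0, (720, 512)).astype(np.float32)

# Row-wise ramp filtering in the Fourier domain
freqs = np.fft.rfftfreq(sino32.shape[1])
filt32 = np.fft.irfft(np.fft.rfft(sino32, axis=1) * np.abs(freqs),
                      axis=1).astype(np.float32)

filt16 = filt32.astype(np.float16)                    # "half" representation
err = filt32 - filt16.astype(np.float32)              # quantization error
rel = np.abs(err).max() / np.abs(filt32).max()
print(f"max relative quantization error: {rel:.1e}")  # ~1e-3 (10-bit mantissa)
```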
Deblurring in digital tomosynthesis by iterative self-layer subtraction
NASA Astrophysics Data System (ADS)
Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung
2010-04-01
Recent developments in large-area flat-panel detectors have renewed interest in tomosynthesis technology for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers notably from a lack of sharpness in the reconstructed images because of blur artifacts, i.e., the superposition of out-of-plane structures. In this study, we have devised an intuitively simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward- and backward-projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of the self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. Comparative analysis with the conventional methods, such as the SAA and filtered backprojection methods, is addressed.
Quantitative analysis of a reconstruction method for fully three-dimensional PET.
Suckling, J; Ott, R J; Deehan, B J
1992-03-01
The major advantage of positron emission tomography (PET) using large area planar detectors over scintillator-based commercial ring systems is the potentially larger (by a factor of two or three) axial field-of-view (FOV). However, to achieve the space invariance of the point spread function necessary for Fourier filtering a polar angle rejection criterion is applied to the data during backprojection resulting in a trade-off between FOV size and sensitivity. A new algorithm due to Defrise and co-workers developed for list-mode data overcomes this problem with a solution involving the division of the image into several subregions. A comparison between the existing backprojection-then-filter algorithm and the new method (with three subregions) has been made using both simulated and real data collected from the MUP-PET positron camera. Signal-to-noise analysis reveals that improvements of up to a factor of 1.4 are possible resulting from an increased data usage of up to a factor of 2.5 depending on the axial extent of the imaged object. Quantitation is also improved.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in overcoming artifacts in the reconstructed image. Since the simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction effect, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction effect of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms with two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction of the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
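The best-performing combination reported (db2 wavelet at decomposition scale 2, Hanning-filtered FBP) is easy to prototype. Below is a minimal sketch using PyWavelets and scikit-image with a soft universal threshold on the sinogram; the phantom, noise level, and thresholding rule are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pywt
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)
sino_noisy = sino + np.random.default_rng(0).normal(0.0, 1.0, sino.shape)

# Wavelet denoising of the sinogram: db2, 2 levels, soft universal threshold
coeffs = pywt.wavedec2(sino_noisy, 'db2', level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(sino_noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, 'soft') for c in lvl)
                        for lvl in coeffs[1:]]
sino_dn = pywt.waverec2(coeffs, 'db2')[:sino.shape[0], :sino.shape[1]]

# FBP with a Hanning-apodized ramp filter, before and after denoising
rec_raw = iradon(sino_noisy, theta=theta, filter_name='hann')
rec_dn = iradon(sino_dn, theta=theta, filter_name='hann')

def psnr(ref, x):
    mse = np.mean((ref - x) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

print(f"PSNR raw: {psnr(image, rec_raw):.2f} dB, "
      f"denoised: {psnr(image, rec_dn):.2f} dB")
```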
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion, subject to a known class of source image intensity distributions.
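A minimal frequency-domain sketch of such a filter is H = S_s / (S_s + beta * S_n), with signal and noise energy spectra S_s and S_n and a single regularization parameter beta. The prior sinogram spectrum below is a hypothetical 1/f² stand-in, not the spectrum derived in the paper.

```python
import numpy as np

def wiener_like_filter(S_signal, S_noise, beta=1.0):
    """Closed-form frequency response H = S_s / (S_s + beta * S_n)."""
    return S_signal / (S_signal + beta * S_noise)

# Row-wise suppression of sinogram noise; ramp-filtered FBP would follow
rng = np.random.default_rng(0)
sino = rng.poisson(100.0, (360, 256)).astype(float)
freqs = np.fft.rfftfreq(sino.shape[1])
S_s = 1.0 / (1e-3 + freqs ** 2)       # hypothetical prior sinogram spectrum
S_n = np.ones_like(freqs)             # flat noise floor for Poisson counts
H = wiener_like_filter(S_s, S_n, beta=50.0)  # beta: the regularization knob
sino_filtered = np.fft.irfft(np.fft.rfft(sino, axis=1) * H,
                             axis=1, n=sino.shape[1])
```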
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from degraded robustness when strong motion errors are present in the coarse-focused image. In that case, capturing the complete motion blurring function within each image block requires enlarging both the block size and the overlap, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are straightforwardly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
performance on a low-cost, low size, weight, and power (SWAP) computer: a Raspberry Pi Model B. For a comparison of performance, a baseline implementation... improvement factor of 2-3 compared to filtered backprojection. Execution on a single Raspberry Pi is too slow for real-time imaging. However, factorized backprojection is easily parallelized, and we include a discussion of parallel implementation across multiple Pis.
Regridding reconstruction algorithm for real-time tomographic imaging
Marone, F.; Stampanoni, M.
2012-01-01
Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing is, however, difficult when the data rates are close to 10 GB s⁻¹. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio for gridrec reconstructions on the number of angular views makes the presented algorithm even superior to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but it provides up to 20-fold performance increase, making real-time monitoring of the sub-second acquisition process a reality. PMID:23093766
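A regridding reconstruction of this kind is available in the TomoPy package, whose gridrec implementation follows this approach. A minimal usage sketch on synthetic data (the phantom size, angle count, and Parzen filter choice are arbitrary):

```python
import tomopy

# Synthetic parallel-beam data from a 3D Shepp-Logan phantom
obj = tomopy.shepp3d(size=128)             # (128, 128, 128) volume
theta = tomopy.angles(180)                 # 180 view angles (radians)
proj = tomopy.project(obj, theta, pad=True)

# Fourier-regridding reconstruction
rec = tomopy.recon(proj, theta, algorithm='gridrec', filter_name='parzen')
print(rec.shape)
```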
Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections
NASA Astrophysics Data System (ADS)
Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon
2016-06-01
This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion-contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first checks, at each 2D point of the CT image, whether the angular coverage of the local "virtual" trajectory, which accounts for the motion and the truncation, is sufficient to calculate the Hilbert transform. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection, which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.
Schäfer, Dirk; Grass, Michael; van de Haar, Peter
2011-07-01
Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise resolution trade-off with a wire phantom in the full detector and off-center scenario. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance with the best resolution for the FDK based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.
Plenoptic projection fluorescence tomography.
Iglesias, Ignacio; Ripoll, Jorge
2014-09-22
A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data by a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown and the implications of this approach in fast 3D imaging are discussed.
A biological phantom for evaluation of CT image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.
2014-03-01
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
Elementary test for nonclassicality based on measurements of position and momentum
NASA Astrophysics Data System (ADS)
Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.
2015-12-01
We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.
Three-dimensional Image Reconstruction in J-PET Using Filtered Back-projection Method
NASA Astrophysics Data System (ADS)
Shopa, R. Y.; Klimaszewski, K.; Kowalski, P.; Krzemień, W.; Raczyński, L.; Wiślicki, W.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
We present a method and preliminary results of the image reconstruction in the Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic Emission), interactions of the 511 keV photons with a cylindrical detector were generated. Pairs of such photons, flying back-to-back, originate from e+e- annihilations inside a 1-mm spherical source. Spatial and temporal coordinates of hits were smeared using experimental resolutions of the detector. We incorporated the algorithm of the 3D filtered back-projection, implemented in the STIR and TomoPy software packages, which differ in approximation methods. Consistent results for the Point Spread Functions of ~5/7 mm and ~9/20 mm were obtained, using STIR, for transverse and longitudinal directions, respectively, with no time-of-flight information included.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-05-15
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise-resolution trade-off with a wire phantom in the full detector and off-center scenario. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance, with the best resolution for the FDK-based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. Conclusions: The BPF-type reconstructions with Katsevich differentiation are largely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of the three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which provide a rigorous test of reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts in images at off-central planes reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware
NASA Astrophysics Data System (ADS)
Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc
2007-02-01
Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512^2 detector elements. The data were backprojected into an image of 512^3 voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
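The memory-bandwidth bottleneck mentioned above comes from the backprojection access pattern: every voxel gathers one sinogram sample per view. A toy voxel-driven parallel-beam backprojector, written in NumPy purely to show that access pattern (sizes and nearest-neighbour interpolation are illustrative, not the authors' optimized CBE kernels):

```python
# Illustrative voxel-driven parallel-beam backprojection. The gather step
# (view[idx]) touches the sinogram once per voxel per view, so throughput
# is dominated by memory bandwidth, the bottleneck the SPEs were meant to hide.
import numpy as np

def backproject(sinogram, angles, n):
    # sinogram: (n_views, n_bins); angles in radians; returns an n x n image.
    img = np.zeros((n, n))
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    for a, view in zip(angles, sinogram):
        t = X * np.cos(a) + Y * np.sin(a)        # detector coordinate of each voxel
        idx = np.clip(np.round(t + sinogram.shape[1] / 2.0).astype(int),
                      0, sinogram.shape[1] - 1)
        img += view[idx]                          # gather: the bandwidth-bound step
    # unfiltered backprojection; in FBP a ramp filter would precede this loop
    return img * np.pi / len(angles)
```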
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron emission tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we proposed an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum-likelihood expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specific to the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
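The abstract does not give the exact 3D mean-median design, so the sketch below is only one plausible reading: a 3D median pass to suppress outlier counts followed by a 3D mean pass to smooth flat regions, with assumed kernel sizes and ordering.

```python
# Hedged sketch of pre-reconstruction sinogram smoothing in the spirit of
# a combined mean-median filter; kernel sizes and ordering are assumptions.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_filter(sino3d, med_size=3, mean_size=3):
    # sino3d: (slice, angle, detector-bin) stack of PET sinograms
    med = median_filter(sino3d, size=med_size)    # suppresses impulsive counts
    return uniform_filter(med, size=mean_size)    # smooths the remaining noise
```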
Miéville, Frédéric A; Gudinchet, François; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Bochud, François O; Verdun, Francis R
2011-09-01
Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.
Image-guided filtering for improving photoacoustic tomographic image reconstruction.
Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K
2018-06-01
Several algorithms exist to solve the photoacoustic image reconstruction problem depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as being smooth or sharp, in the output image. Combining these features using a guided filtering approach was attempted in this work, which requires an input and a guiding image. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve the signal-to-noise ratio of the reconstructed images by as much as 11.23 dB, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
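The guided filter itself (He et al.) is compact enough to sketch. Following the abstract, the backprojection result would serve as `guide` and the Tikhonov/TV reconstruction as `src`; the radius and regularization eps here are assumed values.

```python
# Minimal single-channel guided filter (He et al.) as a postprocessing step.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-4):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)          # local means via box filters
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                    # local linear model: q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```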
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4), respectively. For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of coronal images varies for different backprojection methods, while axial images have a similar detection-SNR. Detection-SNR^2 ratios of coronal and axial images in RECON1 and RECON2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON1 and RECON2. Conclusion: In this work, we investigated slice-direction-dependent detectability of a volumetric cone beam CT image. RECON1 and RECON2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare that they have no conflict of interest in relation to the work in this abstract.
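The detection-SNR computation reduces to the standard channelized Hotelling formula SNR^2 = dv' K^-1 dv. A minimal sketch under stated assumptions: stacks of signal-present and signal-absent slices and a precomputed channel matrix are supplied by the caller; none of these inputs reproduce the abstract's actual simulation.

```python
# Sketch of a channelized Hotelling observer task-SNR computation.
import numpy as np

def cho_task_snr(signal_imgs, noise_imgs, channels):
    # signal_imgs, noise_imgs: (n_images, H, W); channels: (H*W, n_channels).
    # Needs n_images well above n_channels for a stable covariance estimate.
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels   # channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
    dv = vs.mean(axis=0) - vn.mean(axis=0)                      # mean output difference
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(K, dv)                                  # Hotelling template
    return float(np.sqrt(dv @ w))                               # task SNR
```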
Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection
NASA Astrophysics Data System (ADS)
Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan
2018-03-01
X-ray imaging is an important tool for quality control since it allows inspection of the interior of products in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high-quality reconstructions of 400 × 400 reconstruction pixels within 200 ms, thereby meeting the high-throughput criteria.
Exact BPF and FBP algorithms for nonstandard saddle curves.
Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge
2005-11-01
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.
Fan beam image reconstruction with generalized Fourier slice theorem.
Zhao, Shuangren; Yang, Kang; Yang, Kevin
2014-01-01
For parallel-beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan-beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference paper; this paper reintroduces the GFST method for fan-beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is applied to the image in the Fourier plane and leads to a reconstructed image in the spatial domain. The reconstructed image is compared between the result of the GFST method and the result of the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets: the GFST method interpolates the projection data, while the FBP method interpolates the filtered projection data; (2) the filtering is done in different places: the GFST filters in the Fourier domain, while the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, but the filter in the Fourier domain leads to a resolution invariant with location. One advantage of the GFST method over the FBP method is that, in the short-scan situation, an exact solution can be obtained with the GFST method but cannot be obtained with the FBP method. The calculation of both the GFST and the FBP methods is at O(N
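For orientation, the parallel-beam special case that the GFST generalizes is the classical Fourier slice theorem, which in standard notation reads:

```latex
% Fourier slice theorem (parallel-beam case): the 1D Fourier transform of
% the projection p_theta equals a radial slice of the object's 2D transform.
% This is the textbook result; the paper's fan-beam generalization changes
% how the Fourier plane is filled, not this relation.
\hat{p}_\theta(\omega)
  = \int_{-\infty}^{\infty} p_\theta(t)\, e^{-i \omega t}\, dt
  = \hat{f}(\omega \cos\theta,\ \omega \sin\theta)
```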
Backprojection of volcanic tremor
Haney, Matthew M.
2014-01-01
Backprojection has become a powerful tool for imaging the rupture process of global earthquakes. We demonstrate the ability of backprojection to illuminate and track volcanic sources as well. We apply the method to the seismic network from Okmok Volcano, Alaska, at the time of an escalation in tremor during the 2008 eruption. Although we are able to focus the wavefield close to the location of the active cone, the network array response lacks sufficient resolution to reveal kilometer-scale changes in tremor location. By deconvolving the response in successive backprojection images, we enhance resolution and find that the tremor source moved toward an intracaldera lake prior to its escalation. The increased tremor therefore resulted from magma-water interaction, in agreement with the overall phreatomagmatic character of the eruption. Imaging of eruption tremor shows that time reversal methods, such as backprojection, can provide new insights into the temporal evolution of volcanic sources.
Bistatic synthetic aperture radar imaging for arbitrary flight trajectories.
Yarman, Can Evren; Yazici, Birsen; Cheney, Margaret
2008-01-01
In this paper, we present an analytic, filtered backprojection (FBP) type inversion method for bistatic synthetic aperture radar (BISAR). We consider a BISAR system where a scene of interest is illuminated by electromagnetic waves that are transmitted, at known times, from positions along an arbitrary, but known, flight trajectory and the scattered waves are measured from positions along a different flight trajectory which is also arbitrary, but known. We assume a single-scattering model for the radar data, and we assume that the ground topography is known but not necessarily flat. We use microlocal analysis to develop the FBP-type reconstruction method. We analyze the computational complexity of the numerical implementation of the method and present numerical simulations to demonstrate its performance.
SU-F-J-200: An Improved Method for Event Selection in Compton Camera Imaging for Particle Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackin, D; Beddar, S; Polf, J
2016-06-15
Purpose: The uncertainty in the beam range in particle therapy limits the conformality of the dose distributions. Compton scatter cameras (CC), which measure the prompt gamma rays produced by nuclear interactions in the patient tissue, can reduce this uncertainty by producing 3D images confirming the particle beam range and dose delivery. However, the high intensity and short time windows of the particle beams limit the number of gammas detected. We attempt to address this problem by developing a method for filtering gamma ray scattering events from the background by applying the known gamma ray spectrum. Methods: We used a 4-stage Compton camera to record in list mode the energy deposition and scatter positions of gammas from a Co-60 source. Each CC stage contained a 4×4 array of CdZnTe crystals. To produce images, we used a back-projection algorithm and four filtering methods: basic, energy windowing, delta energy (ΔE), or delta scattering angle (Δθ). Basic filtering requires events to be physically consistent. Energy windowing requires the event energy to fall within a defined range. ΔE filtering selects events with the minimum difference between the measured and a known gamma energy (1.17 and 1.33 MeV for Co-60). Δθ filtering selects events with the minimum difference between the measured scattering angle and the angle corresponding to a known gamma energy. Results: Energy window filtering reduced the FWHM from 197.8 mm for basic filtering to 78.3 mm. ΔE and Δθ filtering achieved the best results, FWHMs of 64.3 and 55.6 mm, respectively. In general, Δθ filtering selected events with scattering angles < 40°, while ΔE filtering selected events with angles > 60°. Conclusion: Filtering CC events improved the quality and resolution of the corresponding images. ΔE and Δθ filtering produced similar results, but each favored different events.
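The ΔE selection described above is simple to state in code: keep the events whose summed deposited energy lies closest to a known line of the source spectrum. The tolerance value and the event-array layout in this sketch are assumptions, not the study's settings.

```python
# Hedged sketch of Delta-E event selection for a Co-60 source.
import numpy as np

KNOWN_LINES_MEV = np.array([1.17, 1.33])   # Co-60 gamma lines

def delta_e_filter(event_energies_mev, tol_mev=0.05):
    # event_energies_mev: (n_events,) total deposited energy per event
    diffs = np.abs(event_energies_mev[:, None] - KNOWN_LINES_MEV[None, :])
    return diffs.min(axis=1) < tol_mev     # boolean mask of accepted events
```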
Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition
NASA Astrophysics Data System (ADS)
Meckley, John R.
1995-09-01
The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computed tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier-based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform of a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length that is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics and medical tomography, with test images of noisy concentric lines, a real spectrogram of a blowfish (a very nonstationary spectrum), and the Shepp-Logan computed tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include what level(s) to use for detection and filtering, because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
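The overall enhance-by-Radon-filtering loop, shared by the Fourier, nonlinear, and wavelet variants discussed above, can be sketched as follows. The hard threshold here stands in for the wavelet pyramid filtering, and the keep fraction is an assumed parameter.

```python
# Conceptual sketch: forward Radon transform, keep only high-energy
# (rho, theta) cells where line energy concentrates, invert by FBP.
import numpy as np
from skimage.transform import radon, iradon

def enhance_lines(image, keep_fraction=0.02):
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(image, theta=angles, circle=False)   # matched filter bank for lines
    thresh = np.quantile(sino, 1.0 - keep_fraction)
    sino = np.where(sino >= thresh, sino, 0.0)        # pass only line-like peaks
    return iradon(sino, theta=angles, filter_name="ramp",
                  circle=False, output_size=image.shape[0])
```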
SU-E-J-174: Adaptive PET-Based Dose Painting with Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darwish, N; Mackie, T; Thomadsen, B
2014-06-01
Purpose: PET imaging can be converted into a dose prescription directly. Due to the variability of the intensity of the PET image, a PET-based prescription may be superior to a uniform dose prescription. Furthermore, unlike the case in image reconstruction, where the image solution is not known in advance, the prescribed dose is known from a PET image a priori. Therefore, optimum beam orientations are derivable. Methods: We can assume the PET image to be the prescribed dose and invert it to determine the energy fluence. The same method used to reconstruct tissue images from projections could be used to solve the inverse problem of determining beam orientations and modulation patterns from a dose prescription [10]. Unlike standard tomographic reconstruction of images from measured projection profiles, the inversion of the prescribed dose results in photon fluence which may be negative and therefore unphysical. Two-dimensional modulated beams can be modelled in terms of the attenuated or exponential Radon transform of the prescribed dose function (assumed to be the PET image in this case), an application of a Ram-Lak filter, and inversion by backprojection. Unlike the case in PET processing, however, the filtered beam obtained from the inversion represents a physical photon fluence. Therefore, a positivity constraint for the fluence (setting negative fluence to zero) must be applied (Brahme et al 1982, Bortfeld et al 1990). Results: Truncating the negative profiles from the PET data results in an approximation of the derivable energy fluence. Backprojection of the deliverable fluence is an approximation of the dose delivered. The deliverable dose is comparable to the original PET image. Conclusion: It is possible to use the PET data or image as a direct indicator of deliverable fluence for cylindrical radiotherapy systems such as TomoTherapy.
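A minimal sketch of the invert-truncate-backproject idea follows, ignoring attenuation (so the plain rather than exponential Radon transform) and using an assumed angle count. The ramp filtering is done explicitly so the negative fluence can be zeroed before backprojection.

```python
# Sketch: treat the PET image as the prescribed dose, ramp-filter its
# projections to get fluence profiles, zero negatives, back-project.
import numpy as np
from skimage.transform import radon, iradon

def ramp_filter(sino):
    # Ram-Lak (ramp) filtering of each projection along the detector axis.
    ramp = np.abs(np.fft.fftfreq(sino.shape[0]))[:, None]
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp, axis=0))

def deliverable_dose(prescription, n_angles=180):
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    fluence = ramp_filter(radon(prescription, theta=angles, circle=False))
    fluence = np.maximum(fluence, 0.0)   # positivity: negative fluence is unphysical
    # back-project the truncated (deliverable) fluence with no further filtering
    return iradon(fluence, theta=angles, filter_name=None,
                  circle=False, output_size=prescription.shape[0])
```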
Min, James K; Swaminathan, Rajesh V; Vass, Melissa; Gallagher, Scott; Weinsaft, Jonathan W
2009-01-01
The assessment of coronary stents with present-generation 64-detector row computed tomography scanners that use filtered backprojection and operate at standard definition of 0.5-0.75 mm (SDCT) is limited by imaging artifacts and noise. We evaluated the performance of a novel high-definition 64-slice CT scanner (HDCT) with improved spatial resolution (0.23 mm) and applied statistical iterative reconstruction (ASIR) for evaluation of coronary artery stents. HDCT and SDCT stent imaging was performed with the use of an ex vivo phantom. HDCT was compared with SDCT with both smooth and sharp kernels for stent intraluminal diameter, intraluminal area, and image noise. Intrastent visualization was assessed with an ASIR algorithm on HDCT scans, compared with the filtered backprojection algorithms by SDCT. Six coronary stents (2.5, 2.5, 2.75, 3.0, 3.5, 4.0 mm) were analyzed by 2 independent readers. Interobserver correlation was high for both HDCT and SDCT. HDCT yielded substantially larger luminal area visualization compared with SDCT, both for smooth (29.4+/-14.5 versus 20.1+/-13.0; P<0.001) and sharp (32.0+/-15.2 versus 25.5+/-12.0; P<0.001) kernels. Stent diameter was higher with HDCT compared with SDCT, for both smooth (1.54+/-0.59 versus 1.00+/-0.50; P<0.0001) and detailed (1.47+/-0.65 versus 1.08+/-0.54; P<0.0001) kernels. With detailed kernels, HDCT scans that used ASIR showed a trend toward decreased image noise compared with SDCT filtered-backprojection algorithms. On the basis of this ex vivo study, HDCT provides superior detection of intrastent luminal area and diameter visualization compared with SDCT. ASIR image reconstruction techniques for HDCT scans enhance in-stent assessment while decreasing image noise.
Comparison of forward- and back-projection in vivo EPID dosimetry for VMAT treatment of the prostate
NASA Astrophysics Data System (ADS)
Bedford, James L.; Hanson, Ian M.; Hansen, Vibeke N.
2018-01-01
In the forward-projection method of portal dosimetry for volumetric modulated arc therapy (VMAT), the integrated signal at the electronic portal imaging device (EPID) is predicted at the time of treatment planning, against which the measured integrated image is compared. In the back-projection method, the measured signal at each gantry angle is back-projected through the patient CT scan to give a measure of total dose to the patient. This study aims to investigate the practical agreement between the two types of EPID dosimetry for prostate radiotherapy. The AutoBeam treatment planning system produced VMAT plans together with corresponding predicted portal images, and a total of 46 sets of gantry-resolved portal images were acquired in 13 patients using an iViewGT portal imager. For the forward-projection method, each acquisition of gantry-resolved images was combined into a single integrated image and compared with the predicted image. For the back-projection method, iViewDose was used to calculate the dose distribution in the patient for comparison with the planned dose. A gamma index for 3% and 3 mm was used for both methods. The results were investigated by delivering the same plans to a phantom and repeating some of the deliveries with deliberately introduced errors. The strongest agreement between forward- and back-projection methods is seen in the isocentric intensity/dose difference, with moderate agreement in the mean gamma. The strongest correlation is observed within a given patient, with less correlation between patients, the latter representing the accuracy of prediction of the two methods. The error study shows that each of the two methods has its own distinct sensitivity to errors, but that overall the response is similar. The forward- and back-projection EPID dosimetry methods show moderate agreement in this series of prostate VMAT patients, indicating that both methods can contribute to the verification of dose delivered to the patient.
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution for iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate O(1/k^2). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
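The optimization backbone here is the standard FISTA recursion of Beck and Teboulle, which attains the O(1/k^2) rate quoted above. A generic sketch with placeholder gradient and proximal operators: the paper's Fourier-weighted fidelity and TV prox would be supplied as `grad` and `prox`, and the Lipschitz constant `L` is assumed known (the paper instead uses backtracking line search).

```python
# Generic FISTA skeleton; grad(x) is the fidelity gradient, prox(v, step)
# the proximal map of the regularizer, L a Lipschitz constant of grad.
import numpy as np

def fista(grad, prox, x0, L, n_iter=100):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox(y - grad(y) / L, 1.0 / L)          # gradient step + prox
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```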
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, analysis of the noise properties of the data and development of a corresponding efficient noise treatment method are two major problems to be addressed. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles with a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfy the Radon transform. From the analysis of these experimental data, a nonlinear relation between mean and variance for each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for an SNR (signal-to-noise ratio) adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and the noise-resolution tradeoff, in comparison with other noise smoothing methods such as the Hanning filter and the Butterworth filter at different cutoff frequencies. Significant improvements in the noise-resolution tradeoff and noise properties were demonstrated.
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
BPF-type region-of-interest reconstruction for parallel translational computed tomography.
Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin
2017-01-01
The objective of this study is to present and test a new ultra-low-cost linear-scan-based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture is named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the ROI images reconstructed from truncated projections have severe truncation artifacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images of computed tomography (CT) by nonlinear diffusion filtration to remove the statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections as well as the original noisy projections were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
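One common form of such a filter is Perona-Malik diffusion with an exponential edge-stopping function. The sketch below is that generic form, with assumed iteration count and parameters rather than the study's settings.

```python
# Minimal Perona-Malik-style nonlinear anisotropic diffusion of a 2D image.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbors
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small across strong gradients
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```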
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option to various applications, particularly including head-and-neck scans.
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts, and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal-to-noise ratios, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
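Of the four pre-processing choices, STA/LTA is the easiest to state compactly: the characteristic function is the ratio of short- and long-window average energies. A cumulative-sum sketch with assumed window lengths follows (production workflows often use, e.g., ObsPy's trigger implementations instead).

```python
# STA/LTA characteristic function via cumulative sums; window lengths in
# samples are illustrative, not the study's values.
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    sq = trace.astype(float) ** 2                      # sample energy
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len  # short-window average
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len  # long-window average
    n = min(len(sta), len(lta))
    # align both series at the end of the trace; floor avoids divide-by-zero
    return sta[-n:] / (lta[-n:] + 1e-12)
```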
Dahmen, Tim; Kohr, Holger; de Jonge, Niels; Slusallek, Philipp
2015-06-01
Combined tilt- and focal series scanning transmission electron microscopy is a recently developed method to obtain nanoscale three-dimensional (3D) information of thin specimens. In this study, we formulate the forward projection in this acquisition scheme as a linear operator and prove that it is a generalization of the Ray transform for parallel illumination. We analytically derive the corresponding backprojection operator as the adjoint of the forward projection. We further demonstrate that the matched backprojection operator drastically improves the convergence rate of iterative 3D reconstruction compared to the case where a backprojection based on heuristic weighting is used. In addition, we show that the 3D reconstruction is of better quality.
Glick, S J; Hawkins, W G; King, M A; Penney, B C; Soares, E J; Byrne, C L
1992-01-01
The application of stationary restoration techniques to SPECT images assumes that the modulation transfer function (MTF) of the imaging system is shift invariant. It was hypothesized that using intrinsic attenuation correction (i.e., methods which explicitly invert the exponential radon transform) would yield a three-dimensional (3-D) MTF which varies less with position within the transverse slices than the combined conjugate view two-dimensional (2-D) MTF varies with depth. Thus the assumption of shift invariance would become less of an approximation for 3-D post- than for 2-D pre-reconstruction restoration filtering. SPECT acquisitions were obtained from point sources located at various positions in three differently shaped, water-filled phantoms. The data were reconstructed with intrinsic attenuation correction, and 3-D MTFs were calculated. Four different intrinsic attenuation correction methods were compared: (1) exponentially weighted backprojection, (2) a modified exponentially weighted backprojection as described by Tanaka et al. [Phys. Med. Biol. 29, 1489-1500 (1984)], (3) a Fourier domain technique as described by Bellini et al. [IEEE Trans. ASSP 27, 213-218 (1979)], and (4) the circular harmonic transform (CHT) method as described by Hawkins et al. [IEEE Trans. Med. Imag. 7, 135-148 (1988)]. The dependence of the 3-D MTF obtained with these methods, on point source location within an attenuator, and on shape of the attenuator, was studied. These 3-D MTFs were compared to: (1) those MTFs obtained with no attenuation correction, and (2) the depth dependence of the arithmetic mean combined conjugate view 2-D MTFs.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Kelly, C. L.; Lawrence, J. F.
2014-12-01
During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50 m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200 m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120 s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and determine variations in source depth and distribution in the conduit and larger geyser field over many eruption cycles.
Interferometric tomography of fuel cells for monitoring membrane water content.
Waller, Laura; Kim, Jungik; Shao-Horn, Yang; Barbastathis, George
2009-08-17
We have developed a system that uses two 1D interferometric phase projections for reconstruction of 2D water content changes over time in situ in a proton exchange membrane (PEM) fuel cell system. By modifying the filtered backprojection tomographic algorithm, we are able to incorporate a priori information about the object distribution into a fast reconstruction algorithm which is suitable for real-time monitoring.
Instrument performance enhancement and modification through an extended instrument paradigm
NASA Astrophysics Data System (ADS)
Mahan, Stephen Lee
An extended instrument paradigm is proposed, developed and shown in various applications. The CBM (Chin, Blass, Mahan) method is an extension to the linear systems model of observing systems. In the most obvious and practical application of image enhancement of an instrument characterized by a time-invariant instrumental response function, CBM can be used to enhance images or spectra through a simple convolution application of the CBM filter for a resolution improvement of as much as a factor of two. The CBM method can be used in many applications. We discuss several within this work including imaging through turbulent atmospheres, or what we've called Adaptive Imaging. Adaptive Imaging provides an alternative approach for the investigator desiring results similar to those obtainable with adaptive optics, however on a minimal budget. The CBM method is also used in a backprojected filtered image reconstruction method for Positron Emission Tomography. In addition, we can use information theoretic methods to aid in the determination of model instrumental response function parameters for images having an unknown origin. Another application presented herein involves the use of the CBM method for the determination of the continuum level of a Fourier transform spectrometer observation of ethylene, which provides a means for obtaining reliable intensity measurements in an automated manner. We also present the application of CBM to hyperspectral image data of the comet Shoemaker-Levy 9 impact with Jupiter taken with an acousto-optical tunable filter equipped CCD camera to an adaptive optics telescope.
The spatial resolution of a rotating gamma camera tomographic facility.
Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R
1983-12-01
An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm
NASA Astrophysics Data System (ADS)
Cho, Seungryong; Pan, Xiaochuan
2007-03-01
We compared the data requirements of filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms based on PI-lines in helical cone-beam CT. Since the filtration process in the FBP algorithm needs all the projection data of the PI-lines for each view, the required detector size must be bigger than the size that covers the Tam-Danielsson (T-D) window, to avoid data truncation. The BPF algorithm, however, requires the projection data only within the T-D window, which means that a smaller detector can be used to reconstruct the same image than in FBP. In other words, a longer helical pitch can be obtained by using the BPF algorithm without any truncation artifacts when a fixed detector size is given. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.
High spatial resolution technique for SPECT using a fan-beam collimator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ichihar, T.; Nambu, K.; Motomura, N.
1993-08-01
The physical characteristics of the collimator cause degradation of resolution with increasing distance from the collimator surface. A new convolutional backprojection algorithm has been derived for fan-beam SPECT data without rebinning into parallel-beam geometry. The projections are filtered and then backprojected into the area within an isosceles triangle whose vertex is the focal point of the fan-beam and whose base is the fan-beam collimator face, and outside of the circle whose center is located midway between the focal point and the center of rotation and whose diameter is the distance between the focal point and the center of rotation. Consequently, the backprojected area is close to the collimator surface. This algorithm has been implemented on a GCA-9300A SPECT system, showing good results with both phantom and patient studies. The SPECT transaxial resolution was 4.6 mm FWHM (reconstructed image matrix size of 256x256) at the center of the SPECT FOV using UHR (ultra-high-resolution) fan-beam collimators for brain studies. Clinically, Tc-99m HMPAO and Tc-99m ECD brain data were reconstructed using this algorithm. The reconstruction results were compared with MRI images of the same slice position and showed significant improvement over results obtained with standard reconstruction algorithms.
NASA Astrophysics Data System (ADS)
King, Martin; Xia, Dan; Yu, Lifeng; Pan, Xiaochuan; Giger, Maryellen
2006-03-01
Usage of the backprojection filtration (BPF) algorithm for reconstructing images from motion-contaminated fan-beam data may result in motion-induced streak artifacts, which appear in the direction of the chords on which images are reconstructed. These streak artifacts, which are most pronounced along chords tangent to the edges of the moving object, may be suppressed by use of the weighted BPF (WBPF) algorithm, which can exploit the inherent redundancies in fan-beam data. More specifically, reconstructions using full-scan and short-scan data can allow for substantial suppression of these streaks, whereas those using reduced-scan data can allow for partial suppression. Since multiple different reconstructions of the same chord can be obtained by varying the amount of redundant data used, we have laid the groundwork for a possible method to characterize the amount of motion encoded within the data used for reconstructing an image on a particular chord. Furthermore, since motion artifacts in WBPF reconstructions using full-scan and short-scan data appear similar to those in corresponding fan-beam filtered backprojection (FFBP) reconstructions for the cases performed in this study, the BPF and WBPF algorithms potentially may be used to arrive at a more fundamental characterization of how motion artifacts appear in FFBP reconstructions.
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. A conventional CT scanner with a single detector array was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of a polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method using a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of the irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose. Our results also showed that the current MC model of a CT scanner can be used for further studies of the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
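Since the record above specifies its exact reconstruction chain (filtered back-projection with a Hann filter plus interpolation), a compact sketch may help make the steps concrete. The following minimal parallel-beam FBP in Python/NumPy is an illustration under assumptions, not the authors' code: it uses a Hann-apodized ramp filter and linear interpolation, whereas the paper used spline interpolation and a fan-beam scanner geometry.

```python
import numpy as np

def fbp_hann(sinogram, angles_deg):
    """Minimal parallel-beam FBP with a Hann-windowed ramp filter.

    sinogram: 2D array, shape (num_angles, num_detector_bins)
    angles_deg: projection angles in degrees
    """
    n_ang, n_det = sinogram.shape
    # Ramp filter |f| apodized by a Hann window, built in the Fourier domain.
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)
    hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))  # 1 at DC, 0 at Nyquist
    filt = ramp * hann
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))

    # Backproject onto an n_det x n_det grid centered at the origin.
    recon = np.zeros((n_det, n_det))
    coords = np.arange(n_det) - n_det / 2.0
    xx, yy = np.meshgrid(coords, coords)
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel for this view.
        t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0
        # Linear interpolation along the detector row.
        recon += np.interp(t, np.arange(n_det), filtered[k], left=0.0, right=0.0)
    return recon * np.pi / (2.0 * n_ang)
```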
Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong
2017-01-01
Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
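As background for the review above, a minimal sketch of the classic NLM filter may be useful: each pixel is replaced by a weighted average of pixels in a search window, with weights derived from patch similarity rather than spatial proximity. This is a textbook illustration, not code from the reviewed works; `patch`, `search` and `h` are hypothetical parameters.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal nonlocal means: weights compare patch similarity, not proximity."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```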
Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.
Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F
1980-01-01
Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Wang, W; Tang, X
2014-06-15
Purpose: With its major benefit in dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we previously proposed integrating the DBPF algorithm with 3-D weighting. In this work, we further propose incorporating Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, respectively. In addition, Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial-scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial-scan scenarios, though the 3-D weighting scheme may have to be dropped because insufficient projection data are available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm to perform CB image reconstruction from data acquired along either a full or partial axial scan.
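The algorithmic core shared by DBPF-type methods is a 1-D Hilbert filter applied along a chosen direction after differentiated backprojection; the orientation of that filtering is what the orthogonal butterfly-filter pair is matched to. Below is a minimal sketch of the Hilbert filtering step, assuming the standard FFT-domain multiplier; the butterfly filters themselves are not reproduced here.

```python
import numpy as np

def hilbert_filter_rows(img):
    """Apply a 1-D Hilbert transform along each row via the FFT kernel
    -i*sign(omega); in DBPF-type methods this follows the differentiated
    backprojection along the chosen filtering direction."""
    n = img.shape[1]
    omega = np.fft.fftfreq(n)
    kernel = -1j * np.sign(omega)
    return np.real(np.fft.ifft(np.fft.fft(img, axis=1) * kernel, axis=1))

# Horizontal vs. vertical filtering directions (transpose trick) — the two
# orientations to which the orthogonal butterfly-filter pair corresponds:
# h_img = hilbert_filter_rows(img)         # horizontal filtering
# v_img = hilbert_filter_rows(img.T).T     # vertical filtering
```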
Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B
2016-05-21
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
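A heavily simplified sketch of the idea is a two-channel joint bilateral filter, in which the range weight pools both perfectly aligned images so that an edge in either channel limits smoothing in both. This illustration ignores the paper's full second-order noise covariance model; all parameters are hypothetical.

```python
import numpy as np

def joint_bilateral(img_a, img_b, radius=3, sigma_s=2.0, sigma_a=0.05, sigma_b=0.05):
    """Edge-preserving smoothing of img_a guided jointly by img_a and the
    perfectly aligned img_b: per-channel range Gaussians are multiplied,
    so an edge in either channel suppresses averaging across it."""
    H, W = img_a.shape
    pa = np.pad(img_a, radius, mode='reflect')
    pb = np.pad(img_b, radius, mode='reflect')
    out = np.zeros_like(img_a)
    offsets = range(-radius, radius + 1)
    for i in range(H):
        for j in range(W):
            ci, cj = i + radius, j + radius
            wsum = vsum = 0.0
            for di in offsets:
                for dj in offsets:
                    spatial = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                    ra = pa[ci + di, cj + dj] - pa[ci, cj]
                    rb = pb[ci + di, cj + dj] - pb[ci, cj]
                    w = spatial * np.exp(-ra * ra / (2 * sigma_a ** 2)) \
                                * np.exp(-rb * rb / (2 * sigma_b ** 2))
                    wsum += w
                    vsum += w * pa[ci + di, cj + dj]
            out[i, j] = vsum / wsum
    return out
```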
MR Guided PET Image Reconstruction
Bai, Bing; Li, Quanzheng; Leahy, Richard M.
2013-01-01
The resolution of PET images is limited by the physics of positron-electron annihilation and instrumentation for photon coincidence detection. Model based methods that incorporate accurate physical and statistical models have produced significant improvements in reconstructed image quality when compared to filtered backprojection reconstruction methods. However, it has often been suggested that by incorporating anatomical information, the resolution and noise properties of PET images could be improved, leading to better quantitation or lesion detection. With the recent development of combined MR-PET scanners, it is possible to collect intrinsically co-registered MR images. It is therefore now possible to routinely make use of anatomical information in PET reconstruction, provided appropriate methods are available. In this paper we review research efforts over the past 20 years to develop these methods. We discuss approaches based on the use of both Markov random field priors and joint information or entropy measures. The general framework for these methods is described and their performance and longer term potential and limitations discussed. PMID:23178087
Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.
Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F
2015-05-01
Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method mean/SD pairs were (1.4%, 6.3%) and (-0.7%, 7.2%) for I44 strength 3 and I50 strength 3, respectively. Analysis of representative nodules confirmed that reader variability appeared unaffected by dose or reconstruction method. Lung-nodule volumetry was extremely robust to the radiation-dose level, down to the minimum scanner-supported dose settings. In addition, volumetry was robust to the reconstruction methods used in this study, which included both conventional filtered backprojection and iterative methods.
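The variability metrics used above (mean and SD of proportional differences, paired t-tests, and F-tests on variances) can be sketched in a few lines. This is a generic illustration of such statistics, not the authors' analysis code; names are hypothetical.

```python
import numpy as np
from scipy import stats

def volumetry_variability(vol_ref, vol_test):
    """Proportional volume differences between paired nodule measurements,
    plus a paired t-test on the means and a two-sided F-test on the variances."""
    vol_ref, vol_test = np.asarray(vol_ref), np.asarray(vol_test)
    prop_diff = 100.0 * (vol_test - vol_ref) / vol_ref   # percent differences
    t_stat, t_p = stats.ttest_rel(vol_test, vol_ref)
    f_stat = np.var(vol_test, ddof=1) / np.var(vol_ref, ddof=1)
    dof = len(vol_ref) - 1
    f_p = 2 * min(stats.f.cdf(f_stat, dof, dof), stats.f.sf(f_stat, dof, dof))
    return {"mean_%": prop_diff.mean(), "sd_%": prop_diff.std(ddof=1),
            "t_p": t_p, "f_p": f_p}
```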
NASA Astrophysics Data System (ADS)
Lu, Tong; Wang, Yihan; Gao, Feng; Zhao, Huijuan; Ntziachristos, Vasilis; Li, Jiao
2018-02-01
Photoacoustic mesoscopy (PAMe), offering high-resolution (sub-100-μm) and high-optical-contrast imaging at depths of 1-10 mm, generally acquires massive data collections using a high-frequency focused ultrasonic transducer. The spatial impulse response (SIR) of this focused transducer distorts the measured signals in both duration and amplitude. A reconstruction method that accounts for the SIR therefore needs to be investigated in a computationally economical way for PAMe. Here, we present a modified back-projection algorithm that introduces an SIR-dependent calibration process using a non-stationary convolution method. The proposed method is evaluated on numerical simulations and phantom experiments with microspheres of 50 μm and 100 μm diameter, and quantitative metrics show a clear improvement in image fidelity. The results demonstrate that images reconstructed with the transducer SIR accounted for have a higher contrast-to-noise ratio and more reasonable spatial resolution than those from the common back-projection algorithm.
1986-08-01
[OCR-damaged DTIC report record; recoverable content:] Approved for public release. The Convolution Back-Projection (CBP) algorithm is a widely used technique in Computer Aided Tomography (CAT); in this work it is applied to a new method of synthetic aperture radar (SAR) imaging. University of Illinois at Urbana-Champaign, 1985.
1991-01-01
[OCR-damaged record; recoverable content:] Reconstruction algorithms, usually of the filtered back-projection type, do not correct for nonuniform photon attenuation and depth [text truncated]. Tracers mentioned: 99mTc-HMPAO, Thallium-201.
NASA Astrophysics Data System (ADS)
Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao
2015-03-01
Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction, and parallelization strategies using GPU computing techniques have been introduced to accelerate them. In this paper we present a new parallelization scheme for both projection and back-projection, based on the CUDA technology provided by NVIDIA Corporation. Instead of building a complex model, we aimed at optimizing the existing algorithm to make it suitable for CUDA implementation and thereby gain fast computation speed. Besides exploiting texture fetching, which provides fast hardware interpolation, we fixed the number of samples in the computation of each projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
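The load-balancing idea — fixing the number of samples per ray so that all parallel threads perform identical work — can be illustrated outside CUDA. A minimal NumPy sketch of a fixed-sample-count, ray-driven parallel-beam projector follows, with software interpolation standing in for the hardware texture fetch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project_fixed_samples(image, angles, n_det, n_samples):
    """Parallel-beam forward projection in which every ray is sampled at the
    same fixed number of points, mirroring the load-balancing trick used to
    keep GPU threads synchronized."""
    n = image.shape[0]
    det = np.linspace(-n / 2.0, n / 2.0, n_det)     # detector coordinates
    s = np.linspace(-n / 2.0, n / 2.0, n_samples)   # samples along each ray
    step = s[1] - s[0]
    sino = np.zeros((len(angles), n_det))
    for k, theta in enumerate(angles):
        ct, st = np.cos(theta), np.sin(theta)
        # Sample coordinates for all rays of this view at once.
        x = det[:, None] * ct - s[None, :] * st + n / 2.0
        y = det[:, None] * st + s[None, :] * ct + n / 2.0
        vals = map_coordinates(image, [y.ravel(), x.ravel()], order=1, cval=0.0)
        sino[k] = vals.reshape(n_det, n_samples).sum(axis=1) * step
    return sino
```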
In-line phase contrast micro-CT reconstruction for biomedical specimens.
Fu, Jian; Tan, Renbo
2014-01-01
X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide internal structure information for soft tissues and low-atomic-number materials, and has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filtered back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured on an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
NASA Astrophysics Data System (ADS)
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, physicians hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide a sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of the intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (GE HDCT and Siemens DSCT Flash), keeping the dose levels identical to each other. High- and low-dose scans (i.e., 10% of the high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
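For reference, sample L-moments can be computed from probability-weighted moments using standard estimators; a minimal sketch follows (generic formulas, not the authors' implementation).

```python
import numpy as np
from math import comb

def l_moments(x, nmom=4):
    """Sample L-moments l1..l4 via probability-weighted moments b_r.
    L-moments remain well-behaved descriptors of location, scale, skewness
    and kurtosis when the noise PDF is non-Gaussian."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = [x.mean()]                     # b_0 is the sample mean
    for r in range(1, nmom):
        # b_r = (1/n) * sum_i [C(i, r) / C(n-1, r)] * x_(i), 0-based order stats
        w = np.array([comb(i, r) for i in range(n)]) / comb(n - 1, r)
        b.append(np.mean(w * x))
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4
```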
NASA Astrophysics Data System (ADS)
Uchide, T.; Shearer, P. M.
2009-12-01
Introduction: Uchide and Ide [SSA Spring Meeting, 2009] proposed a new framework for studying the scaling and overall nature of earthquake rupture growth in terms of cumulative moment functions. For a better understanding of rupture growth processes, spatiotemporally local processes are also important. The nature of high-frequency (HF) radiation has been investigated for some time, but its role in the earthquake rupture process is still unclear. A wavelet analysis reveals that the HF radiation (e.g., 4-32 Hz) of the 2004 Parkfield earthquake is peaky, which implies that the sources of the HF radiation are isolated in space and time. We experiment with a matched filter analysis using small template events occurring near the target event rupture area, to test whether it can reveal the HF radiation sources of a regular large earthquake. Method: We design a matched filter for multiple components and stations. Shelly et al. [2007] attempted to identify low-frequency earthquakes (LFEs) in non-volcanic tremor waveforms by stacking the correlation coefficients (CC) between the seismograms of the tremor and the LFE. Differing from their method, our event detection indicator is the CC between the seismograms of the target and template events recorded at the same stations, since the key information for locating the sources is the arrival-time differences and the amplitude ratios among stations. Data from both the target and template events are normalized by the maximum amplitude of the seismogram of the template event in the cross-correlation time window; this accounts for the radiation pattern and the distance between the source and the stations. For each small-earthquake template, high values in the CC time series suggest the possibility of HF radiation during the mainshock rupture from a location similar to that of the template event. Application to the 2004 Parkfield earthquake: We apply the matched filter method to the 2004 Parkfield earthquake (Mw 6.0). We use seismograms recorded at the 13 stations of UPSAR [Fletcher et al., 1992]. At each station both acceleration and velocity sensors are installed, so both large and small earthquakes are observable. We employ 184 earthquakes (M 2.0-3.5) as template events, using 0.5 s of the P waves on the vertical components and of the S waves on all three components. The data are bandpass-filtered between 4 and 16 Hz. One source is detected at 4 s and 12 km northwest of the hypocenter. Although the CC is generally low, its peak is more than five times its standard deviation and thus remarkably high. This source is close to the secondary onset revealed by a back-projection analysis of 2-8 Hz data from Parkfield strong-motion stations [Allmann and Shearer, 2007]. While the back-projection approach images the peak of HF radiation, our method detects the onset time, which is slightly different. Another source is located at 1.2 s and 2 km southeast of the hypocenter, which may correspond to deceleration of the initial rupture. Comparison of the derived HF radiation sources with the whole rupture process will help reveal general earthquake source dynamics.
Real-time quasi-3D tomographic reconstruction
NASA Astrophysics Data System (ADS)
Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.
2018-06-01
Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.
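The mathematical property being exploited is that FBP-type methods filter each projection once, after which backprojection is an independent per-point sum, so an arbitrarily chosen slice can be reconstructed on demand without touching the rest of the volume. Below is a minimal 2-D illustration of reconstructing only a requested point set from a pre-filtered sinogram; it illustrates the principle, not the published software.

```python
import numpy as np

def backproject_points(filtered_sino, angles, points):
    """Reconstruct only the requested points from a pre-filtered sinogram.
    Because filtering is done once per projection, backprojection onto an
    arbitrary point set costs only O(#points x #angles)."""
    n_det = filtered_sino.shape[1]
    det_axis = np.arange(n_det) - n_det / 2.0
    vals = np.zeros(len(points))
    for k, theta in enumerate(angles):
        t = points[:, 0] * np.cos(theta) + points[:, 1] * np.sin(theta)
        vals += np.interp(t, det_axis, filtered_sino[k], left=0.0, right=0.0)
    return vals * np.pi / (2.0 * len(angles))

# Example: an obliquely oriented line of points, updated on demand.
# pts = np.stack([np.linspace(-50, 50, 128), np.linspace(-20, 20, 128)], axis=1)
# profile = backproject_points(filtered_sino, angles, pts)
```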
Frequency-radial duality based photoacoustic image reconstruction.
Akramus Salehin, S M; Abhayapala, Thushara D
2012-07-01
Photoacoustic image reconstruction algorithms are usually slow due to the large sizes of the datasets that are processed. This paper proposes a method for exact photoacoustic reconstruction for the spherical geometry in the limiting case of a continuous aperture and infinite measurement bandwidth that is faster than the existing methods, namely (1) the backprojection method and (2) the Norton-Linzer method [S. J. Norton and M. Linzer, "Ultrasonic reflectivity imaging in three dimensions: Exact inverse scattering solution for plane, cylindrical and spherical apertures," IEEE Trans. Biomed. Eng. BME-28, 202-220 (1981)]. The initial pressure distribution is expanded using a spherical Fourier-Bessel series. The proposed method estimates the Fourier-Bessel coefficients and subsequently recovers the pressure distribution. A concept of frequency-radial duality is introduced that separates the information from the different radial basis functions by using frequencies corresponding to the Bessel zeros. This approach provides a means to analyze the information obtained for a given measurement bandwidth. Using order analysis and numerical experiments, the proposed method is shown to be faster than both the backprojection and the Norton-Linzer methods. Further, the images reconstructed using the proposed methodology were of similar quality to those of the Norton-Linzer method and were better than those of the approximate backprojection method.
NASA Astrophysics Data System (ADS)
Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao
2012-12-01
Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.
Yoo, Boyeol; Son, Kihong; Pua, Rizza; Kim, Jinsung; Solodov, Alexander; Cho, Seungryong
2016-10-01
With the increased use of computed tomography (CT) in clinics, dose reduction is the most important feature people seek when considering new CT techniques or applications. We developed an intensity-weighted region-of-interest (IWROI) imaging method in an exact half-fan geometry to reduce the imaging radiation dose to patients in cone-beam CT (CBCT) for image-guided radiation therapy (IGRT). While dose reduction is highly desirable, preserving the high-quality images of the ROI is also important for target localization in IGRT. An intensity-weighting (IW) filter made of copper was mounted in place of a bowtie filter on the X-ray tube unit of an on-board imager (OBI) system such that the filter can substantially reduce radiation exposure to the outer ROI. In addition to mounting the IW filter, the lead-blade collimation of the OBI was adjusted to produce an exact half-fan scanning geometry for a further reduction of the radiation dose. The chord-based rebinned backprojection-filtration (BPF) algorithm in circular CBCT was implemented for image reconstruction, and a humanoid pelvis phantom was used for the IWROI imaging experiment. The IWROI image of the phantom was successfully reconstructed after beam-quality correction, and it was registered to the reference image within an acceptable level of tolerance. Dosimetric measurements revealed that the dose is reduced by approximately 61% in the inner ROI and by 73% in the outer ROI compared to the conventional bowtie filter-based half-fan scan. The IWROI method substantially reduces the imaging radiation dose and provides reconstructed images with an acceptable level of quality for patient setup and target localization. The proposed half-fan-based IWROI imaging technique can add a valuable option to CBCT in IGRT applications.
Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taguchi, Katsuyuki; Xu, Jingyan; Srivastava, Somesh
2011-03-15
Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography," Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography," Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography," Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of ~500 mm. The projection data were truncated either moderately, limiting the detector coverage to a 350 mm diameter of the object, or severely, limiting it to a 199 mm diameter. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel value, was less than 2.0% or 4.5% in the moderate and severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy using the tiny knowledge that there exists a nearly piecewise constant subregion.
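The total variation minimization step can be sketched generically as an alternation between a data-consistency projection and TV gradient descent, the usual POCS-with-TV pattern cited above. This is a schematic stand-in, not the authors' DBP-POCS chain; `enforce_data` is a hypothetical callable and boundary handling is kept crude for brevity.

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed total variation of a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field (wrapped borders).
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def pocs_tv(img0, enforce_data, n_iter=50, step=0.2):
    """Alternate a data-consistency projection (supplied as a callable) with
    a few TV gradient-descent steps — the generic POCS-with-TV pattern used
    by compressed-sensing interior tomography."""
    img = img0.copy()
    for _ in range(n_iter):
        img = enforce_data(img)      # project onto the data-consistent set
        for _ in range(5):           # shrink total variation
            img -= step * tv_grad(img)
    return img
```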
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have a computational cost per iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops, and incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F; Park, J; Barraclough, B
2016-06-15
Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter, and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove design, and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume-averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2 mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2 ± 2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
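The final step of the described dose engine — convolving the in-air fluence with a DDK modeled as a sum of 2D Gaussians — can be sketched directly. This is a minimal illustration with hypothetical commissioning values, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def planar_dose(fluence, kernel_weights, kernel_sigmas_mm, pixel_mm=1.0):
    """Convolve an in-air fluence map with a dose deposition kernel modeled
    as a weighted sum of isotropic 2D Gaussians. Weights and sigmas are
    hypothetical stand-ins for commissioned DDK parameters."""
    dose = np.zeros_like(fluence, dtype=float)
    for w, sigma_mm in zip(kernel_weights, kernel_sigmas_mm):
        dose += w * gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)
    return dose
```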
Hoffman, John; Young, Stefano; Noo, Frédéric; McNitt-Gray, Michael
2016-03-01
With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown. Doing so usually requires extensive access to the scanner on which the data were acquired, whose workflow is not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for the GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values and with sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software is described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated based on acquisitions of the ACR CT accreditation phantom under four different flying focal spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast-to-noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. The average reconstruction time evaluated under all flying focal spot configurations was found to be 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom were found to meet all CT Accreditation Program criteria, including the CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with the original phantom values to within 0.0001 mm²/g (0.01%). FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT available under the GNU GPL. It shows good performance with both clinical and simulated data.
X-ray microtomography experiments using a diffraction tube and a focusing multilayer-mirror
NASA Astrophysics Data System (ADS)
Gurker, N.; Nell, R.; Backfrieder, W.; Kandutsch, J.; Sarg, K.; Prevrhal, S.; Nentwich, C.
1994-10-01
A first-generation (i.e., translate-rotate) micro X-ray transmission computed tomography system has been developed, which utilizes a standard 2.2 kW long-fine-focus diffraction tube with Cu anode as the X-ray source, a spherical W/C multilayer mirror to condense and spectrally select the CuKα radiation (8.04 keV) from the tube, and a scintillation counter to detect the X-ray photons. In the present configuration the optical system demagnifies the original source size in the direction parallel to the imaged object slice by a factor of 5, where a small slit captures the radiation and thus provides an intense microscopic (pseudo-)source of monochromatic X-radiation in close vicinity to the scanned specimen. The system provides tomographic images of small objects (up to 25 mm in diameter) reconstructed as 128 × 128 matrices with resolutions between ~20 and 200 μm in ≥10 min. The software package available for image reconstruction includes filtered backprojection, correcting backprojection (ART, MART), and a new type of weighted backprojection, which turns out to be a simplified version of MART (SMART). A dedicated scan and reconstruction procedure demonstrates the feasibility of imaging selected regions of interest within the investigated specimen slice with (up to one order of magnitude) higher spatial resolution than their surroundings, without major artefacts (Zoom-CT). The hardware and software components of this CT system are discussed, several examples are given, and perspectives of further development are outlined.
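For orientation, MART-type correcting backprojection multiplies the current estimate by the ratio of measured to predicted ray sums. The sketch below is a deliberately simplified variant (every voxel on a ray receives the same correction, and a dense system matrix is used for clarity); it does not reproduce the SMART weighting of the paper.

```python
import numpy as np

def mart(A, p, n_iter=20, relax=0.5, x0=None):
    """Simplified multiplicative ART: for each ray i, scale the voxels it
    intersects by (measured / predicted)**relax. A is a dense
    (n_rays x n_voxels) system matrix here; real implementations use
    sparse ray tracing."""
    n_rays, n_vox = A.shape
    x = np.ones(n_vox) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(n_rays):
            pred = A[i] @ x
            if pred <= 0 or p[i] <= 0:
                continue                    # skip degenerate rays
            ratio = (p[i] / pred) ** relax
            x[A[i] > 0] *= ratio            # update only voxels seen by ray i
    return x
```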
Gamma-ray momentum reconstruction from Compton electron trajectories by filtered back-projection
Haefner, A.; Gunter, D.; Plimley, B.; ...
2014-11-03
Gamma-ray imaging utilizing Compton scattering has traditionally relied on measuring coincident gamma-ray interactions to map directional information of the source distribution. This coincidence requirement makes it an inherently inefficient process. We present an approach to gamma-ray reconstruction from Compton scattering that requires only a single electron-tracking detector, thus removing the coincidence requirement. From the Compton-scattered electron momentum distribution, our algorithm analytically computes the incident photon's correlated direction and energy distributions. Because this method maps the source energy and location, it is useful in applications where prior information about the source distribution is unknown. We demonstrate the method with electron tracks measured in a scientific Si charge-coupled device. While it was demonstrated with electron tracks in a Si-based detector, it is applicable to any detector that can measure electron direction and energy, or equivalently the electron momentum. For example, it can increase the sensitivity to obtain energy and direction in gas-based systems that suffer from limited efficiency.
Enhancing the pictorial content of digital holograms at 100 frames per second.
Tsang, P W M; Poon, T-C; Cheung, K W K
2012-06-18
We report a low-complexity, non-iterative method for enhancing the sharpness, brightness, and contrast of the pictorial content recorded in a digital hologram, without the need to re-generate the hologram from the original object scene. In our proposed method, the hologram is first back-projected to a 2-D virtual diffraction plane (VDP) located in close proximity to the original object points. Next, the field distribution on the VDP, which shares similar optical properties with the object scene, is enhanced. Subsequently, the processed VDP is expanded into a full hologram. We demonstrate two types of enhancement: a modified histogram equalization to improve brightness and contrast, and localized high-boost filtering (LHBF) to increase sharpness. Experimental results demonstrate that our proposed method is capable of enhancing a 2048x2048 hologram at a rate of around 100 frames per second. To the best of our knowledge, this is the first time real-time image enhancement has been considered in the context of digital holography.
NASA Astrophysics Data System (ADS)
Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei
2017-06-01
In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifting technology are combined to simultaneously reconstruct various physical parameter distributions of a propane flame. Two cameras triggered in internal trigger mode capture the projection information for the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution from the moiré fringes. Using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. The 3D temperature distribution of the flame is then obtained from the refractive-index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed from the radiation projections of the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, the image can be sparsely represented in each of multiple energy channels, and the channels are highly correlated with one another. Consistent with these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of the proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628
Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A
2017-01-01
Model-based image reconstruction (MBIR) techniques have the potential to generate high-quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate, and one of the widely used projector pairs for such methods is the branchless distance-driven (DD) projection and backprojection. While this method produces superior-quality images, the computational cost of the iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors on multi-GPU systems using CUDA programming tools. We introduce novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We also reduce the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto the image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we propose a pre-accumulation technique that accumulates image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection, ensuring that our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized-likelihood update. Timing results using our proposed reconstruction method with Siemens Sensation 16 patient scan data show an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel, for combined projection and backprojection.
GPU-based Branchless Distance-Driven Projection and Backprojection
Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong
2017-01-01
Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. GPU based branchless DD method was evaluated by iterative reconstruction algorithms with both simulation and real datasets. It obtained visually identical images as the CPU reference algorithm. PMID:29333480
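The three branchless steps are easiest to see in one dimension: integrate the input bins into a cumulative sum, linearly interpolate that integral at the destination-bin boundaries, and differentiate. A minimal sketch of that resampling kernel follows; it illustrates the factorization, not the authors' GPU code.

```python
import numpy as np

def branchless_dd_1d(values, src_bounds, dst_bounds):
    """Resample bin integrals from source bins to destination bins with the
    three branchless steps: (1) integrate values into a cumulative sum over
    the source boundaries, (2) linearly interpolate that integral at the
    destination boundaries, (3) differentiate to recover per-bin integrals.
    No per-overlap branching is needed, which is why the scheme maps so
    well onto GPU hardware interpolation."""
    # (1) integration: cumulative integral at each source boundary
    cum = np.concatenate([[0.0], np.cumsum(values * np.diff(src_bounds))])
    # (2) linear interpolation of the integral at destination boundaries
    cum_at_dst = np.interp(dst_bounds, src_bounds, cum)
    # (3) differentiation: integral over each destination bin
    return np.diff(cum_at_dst)
```

Dividing the returned integrals by the destination bin widths yields mean values per bin, which is the normalization typically used in DD projection.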
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org
2015-10-15
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, in which three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation not only keeps most of the advantages of a traditional multisource system but also covers a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
Terahertz wide aperture reflection tomography.
Pearce, Jeremy; Choi, Hyeokho; Mittleman, Daniel M; White, Jeff; Zimdars, David
2005-07-01
We describe a powerful imaging modality for terahertz (THz) radiation, THz wide aperture reflection tomography (WART). Edge maps of an object's cross section are reconstructed from a series of time-domain reflection measurements at different viewing angles. Each measurement corresponds to a parallel line projection of the object's cross section. The filtered backprojection algorithm is applied to recover the image from the projection data. To our knowledge, this is the first demonstration of a reflection computed tomography technique using electromagnetic waves. We demonstrate the capabilities of THz WART by imaging the cross sections of two test objects.
Comparison of pulse sequences for R1-based electron paramagnetic resonance oxygen imaging.
Epel, Boris; Halpern, Howard J
2015-05-01
Electron paramagnetic resonance (EPR) spin-lattice relaxation (SLR) oxygen imaging has proven to be an indispensable tool for assessing oxygen partial pressure in live animals. EPR oxygen images show remarkable oxygen accuracy combined with high precision and spatial resolution, so developing more effective means for obtaining SLR rates is of great practical, biological, and medical importance. In this work we compared different pulse EPR imaging protocols and pulse sequences to establish the advantages and areas of applicability of each method. Tests were performed using phantoms containing spin probes at oxygen concentrations relevant to in vivo oximetry. We found that for objects the size of small animals, the inversion recovery sequence combined with the filtered backprojection reconstruction method delivers the best accuracy and precision. For large animals, in which large radio frequency energy deposition might be critical, free induction decay and three-pulse stimulated echo sequences might find better practical usage. Copyright © 2015 Elsevier Inc. All rights reserved.
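The quantity being mapped is the spin-lattice relaxation rate R1 recovered from inversion-recovery data. A minimal per-voxel fitting sketch under the usual mono-exponential model is shown below; it is illustrative only, the function names and initial guess are assumptions, and signed (not magnitude) recovery data is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(t, m0, r1):
    """Inversion-recovery signal for spin-lattice relaxation rate R1 = 1/T1."""
    return m0 * (1.0 - 2.0 * np.exp(-r1 * t))

def fit_r1(delays, signal):
    """Fit R1 from inversion-recovery data; per-voxel R1 maps like this are
    what pulse EPR oxygen imaging converts to pO2 via the (approximately
    linear) dependence of R1 on oxygen concentration."""
    p0 = (signal.max(), 1.0 / delays.mean())   # crude initial guess
    popt, _ = curve_fit(ir_model, delays, signal, p0=p0)
    return popt[1]   # R1
```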
Tang, Shaojie; Yang, Yi; Tang, Xiangyang
2012-01-01
The interior tomography problem can be solved using the so-called differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can have any shape, but most existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may cross a large area of the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering, so that the small area with the a priori knowledge can be roughly round (e.g., a sinus or ventricle among other anatomic cavities in the human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. We specifically re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to ensure that only a small round area with the a priori knowledge is needed (henceforth the radial DBP-POCS method). The performance of the radial DBP-POCS method with a priori knowledge in a small round area is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way at high accuracy. In comparison to implementations of the DBP-POCS method demanding a priori knowledge in a horizontal or vertical strip, the radial DBP-POCS method requires a priori knowledge only within a small round area. Such a relaxed requirement on the availability of a priori knowledge can readily be met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in the human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we propose two methods to refine the rigorous sensor model: 1) refining the exterior orientation parameters (EOPs) by correcting the attitude angle bias, and 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEMs (Digital Elevation Models) and DOMs (Digital Ortho Maps) are automatically generated.
Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.
2006-01-01
A three-dimensional (3D) weighted helical cone-beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has previously been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as head and spine, extremity imaging, and CT angiography. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches that are smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans each corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of the 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm, providing superior noise characteristics or dose efficiency at low helical pitches, may find extensive applications in CT medical imaging. PMID:23165031
Solving ill-posed inverse problems using iterative deep neural networks
NASA Astrophysics Data System (ADS)
Adler, Jonas; Öktem, Ozan
2017-12-01
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom as well as a head CT. The outcome is compared against filtered backprojection and total variation reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
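The structure of one learned iteration can be sketched in PyTorch: a small convolutional network receives the current iterate together with the gradients of the data discrepancy and the regulariser, and emits an additive update. This is a schematic reading of the scheme with assumed layer sizes, not the authors' architecture; the forward operator and regulariser in the usage comment are placeholders.

```python
import torch
import torch.nn as nn

class LearnedGradientStep(nn.Module):
    """One iteration of a partially learned gradient scheme: a CNN maps the
    current iterate plus the gradients of the data discrepancy and the
    regulariser to an additive update."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, grad_data, grad_reg):
        # x, grad_data, grad_reg: tensors of shape (batch, 1, H, W)
        update = self.net(torch.cat([x, grad_data, grad_reg], dim=1))
        return x + update

# Sketch of the unrolled loop (A, A_t, y and reg_grad are assumptions):
# for step in steps:                    # each step has its own weights
#     grad_data = A_t(A(x) - y)         # gradient of 0.5*||Ax - y||^2
#     grad_reg = reg_grad(x)            # gradient of the regulariser
#     x = step(x, grad_data, grad_reg)
```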
Geometry-constraint-scan imaging for in-line phase contrast micro-CT.
Fu, Jian; Yu, Guangyuan; Fan, Dekai
2014-01-01
X-ray phase contrast computed tomography (CT) uses the phase shift that x-rays undergo when passing through matter, rather than their attenuation, as the imaging signal, and may provide better image quality for soft tissue and biomedical materials with low atomic number. Here a geometry-constraint-scan imaging technique for in-line phase contrast micro-CT is reported. It consists of two circular-trajectory scans with the x-ray detector at different positions, a phase projection extraction method based on the Fresnel free-propagation theory, and the filtered back-projection reconstruction algorithm. This method removes the contact-detector scan and the pure-phase-object assumption of classical in-line phase contrast micro-CT; consequently it relaxes the experimental conditions and improves the image contrast. This work comprises a numerical study of the technique and its experimental verification using a biomedical composite dataset measured on an x-ray tube source micro-CT setup. The numerical and experimental results demonstrate the validity of the presented method. It will be of interest for a wide range of in-line phase contrast micro-CT applications in biology and medicine.
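The phase projection extraction referred to above is a Fourier-domain operation on each measured intensity image. As a stand-in for the authors' Fresnel-based extraction, the following sketch shows the widely used single-distance (Paganin-type) filter; the pixel size, propagation distance, wavelength, and δ/β ratio are assumed inputs, and the intensity is assumed flat-field normalized.

```python
import numpy as np

def paganin_extract(intensity, pixel_size, distance, wavelength, delta_beta):
    """Single-distance Fourier-domain phase retrieval (Paganin-type): divide
    the normalized intensity spectrum by a low-pass denominator, then take
    the negative log to obtain a projection suitable for FBP."""
    ny, nx = intensity.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    denom = 1.0 + distance * wavelength * delta_beta * k2 / (4 * np.pi)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) / denom))
    # Proportional to projected thickness; scale by lambda/(4*pi*beta)
    # for absolute units.
    return -np.log(np.clip(filtered, 1e-12, None))
```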
NASA Astrophysics Data System (ADS)
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely, axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
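Since Hilbert filtering is the algorithmic feature shared by BPF and DBPF, a minimal sketch of a discrete Hilbert filter (via the frequency-domain multiplier -i·sgn(ω)) is given below; this is a generic illustration that omits the finite Hilbert inverse and the PI-line bookkeeping of the actual algorithms.

```python
import numpy as np

def hilbert_filter(f, axis=-1):
    # Discrete Hilbert transform along `axis` via the FFT multiplier
    # -1j * sign(omega); the filtering step shared by BPF-type methods.
    n = f.shape[axis]
    h = -1j * np.sign(np.fft.fftfreq(n))
    shape = [1] * f.ndim
    shape[axis] = n
    F = np.fft.fft(f, axis=axis) * h.reshape(shape)
    return np.real(np.fft.ifft(F, axis=axis))
```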
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.
Chen, Ming; Yu, Hengyong
2015-10-01
This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.
Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang
2008-11-10
The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography because a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Based on observations of the intensity fall-off phenomenon in the FDK reconstruction of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular to boost the values of voxels at positions with large cone angles. The intensity degradation within the z plane, albeit insignificant, can also be compensated by the same weight formula through a parameter for radial distance dependence. Computer simulations and phantom experiments are presented to demonstrate the effectiveness of compensating the fall-off effect inherent in circular cone-beam tomography.
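A minimal sketch of a reciprocal-cosine compensation weight is given below; the exponent p, the radial-dependence parameter c, and the exact form of the effective cone angle are illustrative assumptions, not the paper's fitted formula.

```python
import numpy as np

def fall_off_weight(z, r, R, p=1.0, c=0.0):
    # Reciprocal-cosine boost for voxels at large cone angles. kappa is
    # an effective cone angle for a voxel at height z and radial
    # distance r; R is the source-to-rotation-axis distance. p and c
    # are illustrative tuning parameters (c adds radial dependence).
    kappa = np.arctan2(np.abs(z), R + c * r)
    return 1.0 / np.cos(kappa) ** p

# Applied voxel-wise after (or during) FDK backprojection, e.g.:
# volume *= fall_off_weight(zz, rr, R=600.0)   # zz, rr: voxel coordinates
```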
Earthquake Monitoring with the MyShake Global Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.
2017-12-01
Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensors' self-noise, smaller magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km² are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
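The stacking step of array back-projection can be illustrated with a toy delay-and-stack over candidate source locations; the sketch below assumes a constant wave speed v and traces that have already been filtered, and it stacks a single sample per trace, which is much simpler than the actual processing described above.

```python
import numpy as np

def delay_and_stack(traces, dt, stations, grid, v, t0=0.0):
    # traces: (n_sta, n_samp) filtered records; stations, grid: (x, y)
    # coordinates in km. For each candidate source, shift each trace by
    # the predicted travel time and stack; high stack power marks
    # likely source locations.
    power = np.zeros(len(grid))
    for g, src in enumerate(np.asarray(grid, float)):
        s = 0.0
        for tr, sta in zip(traces, np.asarray(stations, float)):
            t = t0 + np.linalg.norm(src - sta) / v
            i = int(round(t / dt))
            if 0 <= i < tr.size:
                s += tr[i]
        power[g] = s * s
    return power
```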
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. In this work, the authors present an iterative reconstruction approach for x-ray CT based on the alternating direction method of multipliers with curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics: a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high-contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
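As a generic illustration of the reconstruction principle, the sketch below runs ADMM on min_x 0.5·||Ax − y||² + λ·||Wx||₁ with dense matrices, where W is assumed to be a tight frame (WᵀW = I) standing in for the curvelet transform; the authors' actual operators and parameter choices are not reproduced.

```python
import numpy as np

def admm_sparse_recon(A, W, y, lam, rho=1.0, n_iter=100):
    # ADMM for min_x 0.5*||A x - y||^2 + lam*||W x||_1, with W assumed
    # to be a tight frame (W.T @ W = I) standing in for curvelets.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(W.shape[0])
    u = np.zeros(W.shape[0])
    lhs = A.T @ A + rho * np.eye(n)   # uses W.T @ W = I
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, A.T @ y + rho * (W.T @ (z - u)))
        wxu = W @ x + u
        z = np.sign(wxu) * np.maximum(np.abs(wxu) - lam / rho, 0.0)  # soft threshold
        u = wxu - z
    return x
```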
A framework for directional and higher-order reconstruction in photoacoustic tomography
NASA Astrophysics Data System (ADS)
Boink, Yoeri E.; Lagerwerf, Marinus J.; Steenbergen, Wiendelt; van Gils, Stephan A.; Manohar, Srirang; Brune, Christoph
2018-02-01
Photoacoustic tomography is a hybrid imaging technique that combines high optical tissue contrast with high ultrasound resolution. Direct reconstruction methods such as filtered back-projection, time reversal and least squares suffer from curved line artefacts and blurring, especially in the case of limited angles or strong noise. In recent years, there has been great interest in regularised iterative methods. These methods employ prior knowledge of the image to provide higher quality reconstructions. However, easy comparisons between regularisers and their properties are limited, since many tomography implementations heavily rely on the specific regulariser chosen. To overcome this bottleneck, we present a modular reconstruction framework for photoacoustic tomography, which enables easy comparisons between regularisers with different properties, e.g. nonlinear, higher-order or directional. We solve the underlying minimisation problem with an efficient first-order primal-dual algorithm. Convergence rates are optimised by choosing an operator-dependent preconditioning strategy. A variety of reconstruction methods are tested on challenging 2D synthetic and experimental data sets. They outperform direct reconstruction approaches for strong noise levels and limited angle measurements, offering immediate benefits in terms of acquisition time and quality. This work provides a basic platform for the investigation of future advanced regularisation methods in photoacoustic tomography.
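The first-order primal-dual algorithm referred to is of the Chambolle-Pock type; a generic sketch of the iteration is given below, with user-supplied operator K, adjoint Kt, and proximal operators, and without the operator-dependent preconditioning proposed in the paper.

```python
import numpy as np

def chambolle_pock(K, Kt, prox_fstar, prox_g, x0, sigma, tau,
                   n_iter=100, theta=1.0):
    # Generic primal-dual hybrid gradient iteration for
    # min_x f(K x) + g(x); requires sigma * tau * ||K||^2 < 1.
    x = x0.copy()
    xbar = x0.copy()
    y = np.zeros_like(K(x0))
    for _ in range(n_iter):
        y = prox_fstar(y + sigma * K(xbar), sigma)  # dual step
        x_new = prox_g(x - tau * Kt(y), tau)        # primal step
        xbar = x_new + theta * (x_new - x)          # over-relaxation
        x = x_new
    return x
```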
Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters
NASA Astrophysics Data System (ADS)
Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong
2005-04-01
Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and sample variance. Spatially invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle the data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to other choices that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlation information for an optimally regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least-squares) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimum for an optimally regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS-treated sinogram data prior to the backprojection operation for image reconstruction. In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leng, Shuai; Yu, Lifeng; Wang, Jia
Purpose: Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain, allowing flexible selection of the number, width, and location of the energy bins. Methods: Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and the resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used: (i) numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro-CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated as appropriate for each CT system. Results: The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after HYPR-LR processing was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual-energy patient data processed using HYPR-LR demonstrated reduced noise in the individual low- and high-energy images, as well as in the material-specific basis images. Conclusions: Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.
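The HYPR-LR operation itself is compact: each energy-bin image is multiplied by the ratio of low-pass-filtered bin and composite images, so that noise is inherited from the composite while bin-specific contrast is preserved. A minimal 2D sketch, assuming a Gaussian low-pass as the local filter (the kernel choice and width here are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_lr(bin_images, sigma=3.0, eps=1e-6):
    # bin_images: (n_bins, ny, nx) FBP images per energy bin.
    composite = bin_images.sum(axis=0)       # uses all available photons
    lp_comp = gaussian_filter(composite, sigma)
    out = np.empty_like(bin_images)
    for b, img in enumerate(bin_images):
        weight = gaussian_filter(img, sigma) / (lp_comp + eps)
        out[b] = composite * weight          # HYPR-LR estimate of bin b
    return out
```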
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphics library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized to run about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, putting a new perspective on tomography using a low number of projections or a limited angular range.
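The MLEM update implemented in such packages has a standard closed form; a minimal dense-matrix sketch is shown below (OSEM applies the same update over subsets of projections), with A and y as a hypothetical system matrix and measured sinogram rather than the package's own data structures.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    # MLEM with a dense system matrix A (n_rays x n_voxels) and
    # measured projections y; x stays nonnegative by construction.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)       # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```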
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described in the literature, one common issue is interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps that were not implemented in the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
NASA Astrophysics Data System (ADS)
Kingston, Andrew M.; Myers, Glenn R.; Latham, Shane J.; Li, Heyang; Veldkamp, Jan P.; Sheppard, Adrian P.
2016-10-01
With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy's sufficiency condition and allow high cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) the average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high cone-angle helical computed tomography; (2) reduced streaking artifacts due to, e.g., X-ray beam-hardening; (3) misalignment and component motion manifest as blur in the tomogram rather than double-edges, which is easier to correct automatically; (4) an approximately shift-invariant point-spread function, which enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate the above-mentioned properties in comparison with traditional helical trajectories.
Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors
NASA Astrophysics Data System (ADS)
Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.
2010-10-01
The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image, using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison, two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration, with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D- and 3D-ordered subset expectation maximization (OSEM). A series of tests applied to the resulting images showed that the region and replicate imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full-data image. This was not the case for OSEM image reconstruction. Due to the nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent, in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP. In addition, the signal-to-noise ratio for high-uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with 18F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed, and the standard deviations of the tumors' uptake were determined. Finally, the radial noise extrapolation relationship deduced in this paper was applied to patient data.
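The replicate-based noise estimate and its 1/√N scaling can be written directly; a minimal sketch, assuming N replicate images each reconstructed from 1/N of the counts, and valid only for linear reconstructions such as FBP (as the study shows, no such simple scaling holds for OSEM):

```python
import numpy as np

def full_data_noise_from_replicates(replicates):
    # replicates: (N, ...) images, each reconstructed from 1/N of the
    # counts. The voxel-wise std across replicates, scaled by 1/sqrt(N),
    # estimates the noise of the image reconstructed from all counts.
    n = replicates.shape[0]
    return replicates.std(axis=0, ddof=1) / np.sqrt(n)
```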
Technical note: RabbitCT--an open platform for benchmarking 3D cone-beam reconstruction algorithms.
Rohkohl, C; Keck, B; Hofmann, H G; Hornegger, J
2009-09-01
Fast 3D cone beam reconstruction is mandatory for many clinical workflows. For that reason, researchers and industry work hard on hardware-optimized 3D reconstruction. Backprojection is a major component of many reconstruction algorithms; it requires a projection of each voxel onto the projection data, including data interpolation, before updating the voxel value. This step is the bottleneck of most reconstruction algorithms and the focus of optimization in recent publications. A crucial limitation of these publications, however, is that the presented results are not comparable to each other. This is mainly due to variations in data acquisition, preprocessing, and chosen geometries, and the lack of a common publicly available test dataset. The authors provide such a standardized dataset to allow substantial comparison of hardware-accelerated backprojection methods. They developed an open platform, RabbitCT (www.rabbitCT.com), for worldwide comparison of backprojection performance and ranking on different architectures, using a specific high-resolution C-arm CT dataset of a rabbit. This includes a sophisticated benchmark interface, a prototype implementation in C++, and image quality measures. At the time of writing, six backprojection implementations are already listed on the website. Optimizations include multithreading using Intel Threading Building Blocks and OpenMP, vectorization using SSE, and computation on the GPU using CUDA 2.0. There is a need for objectively comparing backprojection implementations for reconstruction algorithms. RabbitCT aims to provide a solution to this problem by offering an open platform with fair chances for all participants. The authors look forward to a growing community and await feedback regarding future evaluations of novel software- and hardware-based acceleration schemes.
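The step being benchmarked is essentially the following voxel-driven kernel; the naive Python reference below assumes a 3×4 projection matrix P per view and bilinear detector interpolation, and is meant only to show the loop structure that optimized SSE/OpenMP/CUDA entries accelerate.

```python
import numpy as np

def backproject_view(vol, proj, P, voxel_size, origin):
    # Voxel-driven backprojection of one view: map each voxel through
    # the 3x4 matrix P, bilinearly interpolate the detector, and
    # accumulate with a 1/depth^2 weight.
    nz, ny, nx = vol.shape
    nv, nu = proj.shape
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                w = origin + voxel_size * np.array([i, j, k], float)
                su, sv, s = P @ np.append(w, 1.0)
                u, v = su / s, sv / s
                iu, iv = int(np.floor(u)), int(np.floor(v))
                if 0 <= iu < nu - 1 and 0 <= iv < nv - 1:
                    fu, fv = u - iu, v - iv
                    val = ((1 - fu) * (1 - fv) * proj[iv, iu]
                           + fu * (1 - fv) * proj[iv, iu + 1]
                           + (1 - fu) * fv * proj[iv + 1, iu]
                           + fu * fv * proj[iv + 1, iu + 1])
                    vol[k, j, i] += val / (s * s)
    return vol
```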
Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules
He, Peng; Wei, Biao; Wang, Steve; ...
2013-01-01
Dentin is a hierarchically structured biomineralized composite material, and dentin's tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections, typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone over a limited angular range. Recently, interior tomography was developed to reconstruct a region of interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our reconstruction method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.
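One common way to realize TV-constrained reconstruction from truncated, limited-angle data is to alternate data-consistency (ART) sweeps with a few TV steepest-descent steps; the dense-matrix 2D sketch below illustrates this generic scheme and is not the authors' exact algorithm.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    # Gradient of a smoothed isotropic TV (periodic boundaries).
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def art_tv(A, y, shape, n_iter=30, beta=0.2, tv_steps=10, alpha=0.05):
    # Alternate ART data-consistency sweeps with TV descent steps.
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum('ij,ij->i', A, A)
    for _ in range(n_iter):
        for i in range(A.shape[0]):                      # ART sweep
            if row_norm2[i] > 0:
                x += beta * (y[i] - A[i] @ x) / row_norm2[i] * A[i]
        img = x.reshape(shape)
        for _ in range(tv_steps):                        # TV descent
            img = img - alpha * tv_grad(img)
        x = img.ravel()
    return x.reshape(shape)
```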
LETTER TO THE EDITOR: Free-response operator characteristic models for visual search
NASA Astrophysics Data System (ADS)
Hutchinson, T. P.
2007-05-01
Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography technique applied to inspect weakly absorbing low-Z samples. Refraction-angle images, which are extracted from a series of raw DEI images measured at different positions on the rocking curve of the analyser, can be regarded as the projections for DEI-CT. Based on them, the distribution of the refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two kinds of comparison, one between different extraction methods and one between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, lead to the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may currently be the best combination. Although all current extraction methods, including the MRA method, are approximate and cannot recover very large refraction-angle values, the HFBP algorithm based on the MRA method provides quite acceptable estimates of the distribution of the refractive index decrement of the sample. This conclusion is supported by experimental results obtained at the Beijing Synchrotron Radiation Facility.
Investigation of Backprojection Uncertainties With M6 Earthquakes
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; Shearer, Peter M.
2017-10-01
We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (~700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.
THz computed tomography system with zero-order Bessel beam
NASA Astrophysics Data System (ADS)
Niu, Liting; Wu, Qiao; Wang, Kejia; Liu, Jinsong; Yang, Zhengang
2018-01-01
Terahertz (THz) waves can penetrate many optically opaque dielectric materials, such as plastics, ceramics and colorants, making them effective for revealing the internal structures of these materials. We have built a THz computed tomography (CT) system with a 0.3 THz zero-order Bessel beam, exploiting the non-diffracting property of the Bessel beam to improve the depth of focus of the imaging system. The THz CT system has been used to inspect a paper cup with a metal rod inside. Finally, the acquired projection data were processed by the filtered back-projection algorithm, and the reconstructed image of the sample was obtained.
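The reconstruction step is standard parallel-beam FBP; a minimal sketch using scikit-image is shown below (a toy phantom stands in for the measured THz projections, and the filter_name argument assumes a recent scikit-image version):

```python
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                     # toy absorbing object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)          # simulated projections
image = iradon(sinogram, theta=theta, filter_name='ramp')
```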
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performance of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than the FBP algorithm. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
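A minimal sketch of the quadratic-penalty PLS variant is given below, using plain gradient descent and a first-difference smoothness operator; the transducer impulse-response modeling and the actual OAT imaging operator are omitted, with H and y as hypothetical stand-ins.

```python
import numpy as np

def pls_quadratic(H, y, beta, n_iter=500, step=1e-3):
    # Minimize ||H x - y||^2 + beta * ||D x||^2 by gradient descent,
    # where D takes first differences (a quadratic smoothness penalty).
    n = H.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2.0 * (H.T @ (H @ x - y)) + 2.0 * beta * (D.T @ (D @ x))
        x -= step * grad
    return x
```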
Geometrical study on two tilting arcs based exact cone-beam CT for breast imaging
NASA Astrophysics Data System (ADS)
Zeng, Kai; Yu, Hengyong; Fajardo, Laurie L.; Wang, Ge
2006-08-01
Breast cancer is the second leading cause of cancer death in women in the United States. Currently, X-ray mammography is the method of choice for screening and diagnosing breast cancer. However, this 2D projective modality is far from perfect, with up to 17% of breast cancers going undetected. Over the past several years, there has been increasing interest in cone-beam CT for breast imaging. However, previous cone-beam CT methods produce only approximate reconstructions. Following Katsevich's recent work, we propose a new scanning mode and an associated exact cone-beam CT method for breast imaging. In our design, cone-beam scans are performed along two tilting arcs to collect a sufficient amount of data for exact reconstruction. In our Katsevich-type algorithm, cone-beam data are filtered in a shift-invariant fashion and then backprojected in 3D for the final reconstruction. This approach has several desirable features. First, it allows the data truncation that is unavoidable in practice. Second, it optimizes image quality for quantitative analysis. Third, it is efficient for sequential/parallel computation. Furthermore, we analyze the reconstruction region and the detection window in detail, which is important for numerical implementation.
Investigation of Back-Projection Uncertainties with M6 Earthquakes
NASA Astrophysics Data System (ADS)
Fan, W.; Shearer, P. M.
2017-12-01
We investigate possible biasing effects of inaccurate timing corrections on teleseismic P-wave back-projection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P-wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-7 earthquakes over a ten-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross-correlation of their initial P-wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare back-projection images for each earthquake using its own timing corrections with those obtained using the time corrections for other earthquakes. This provides a measure of how well sub-events can be resolved with back-projection of a large rupture as a function of distance from the hypocenter. Our results show that back-projection is generally very robust and that sub-event location errors average about 20 km across the entire study region (~700 km). The back-projection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine back-projection images using aftershock calibration, at least in this region.
Evaluation of a Fully 3-D BPF Method for Small Animal PET Images on MIMD Architectures
NASA Astrophysics Data System (ADS)
Bevilacqua, A.
Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction to improve the sensitivity of the PET system: by including axially oblique Lines Of Response (LORs) in the 3-D acquisition and reconstruction, the sensitivity of the system can be significantly improved. The BPF method is widely used in clinical studies because of its speed and easy implementation. Moreover, the BPF method is suitable for on-line 3-D reconstruction, as it does not need any sinogram or rearranged data. To investigate the possibility of on-line processing, we reconstruct a phantom using the data stored in list-mode format by the data acquisition system. We show how the intrinsically parallel nature of the BPF method makes it suitable for on-line reconstruction on a MIMD system such as the Cray T3E. Lastly, we analyze the performance of this algorithm on a cluster of workstations.
Gabler, Anja S; Kühnel, Christian; Winkens, Thomas; Freesmeyer, Martin
2016-08-01
This study aimed to assess a hypothetical minimum administered activity of (124)I required to achieve comparability between pretherapeutic radioiodine uptake (RAIU) measurements by (124)I PET/CT and by the (131)I RAIU probe, the clinical standard. In addition, the impact of different reconstruction algorithms on (124)I RAIU and the evaluation of pixel noise as a parameter for image quality were investigated. Different scan durations were simulated by different reconstruction intervals of 600-s list-mode PET datasets (including 15 intervals up to 600 s and 5 different reconstruction algorithms: filtered backprojection and 4 iterative techniques) acquired 30 h after administration of 1 MBq of (124)I. The Bland-Altman method was used to compare mean (124)I RAIU levels versus mean 3-MBq (131)I RAIU levels (the clinical standard). The data of 37 patients with benign thyroid diseases were assessed. The impact of different reconstruction lengths on pixel noise was investigated for all 5 of the (124)I PET reconstruction algorithms. A hypothetical minimum activity was sought by means of a proportion equation, considering that the length of a reconstruction interval equates to a hypothetical activity. Mean (124)I RAIU and (131)I RAIU already showed high levels of agreement for reconstruction intervals as short as 10 s, corresponding to a hypothetical minimum activity of 0.017 MBq of (124)I. The iterative algorithms proved generally superior to the filtered-backprojection algorithm. (124)I RAIU showed a trend toward higher levels than (131)I RAIU if the influence of retrosternal tissue was not considered, which was proven to be the cause of a slight overestimation by the (124)I RAIU measurement. A hypothetical minimum activity of 0.5 MBq of (124)I obtained with iterative reconstruction appeared sufficient both visually and with regard to pixel noise. This study confirms the potential of (124)I RAIU measurement as an alternative to (131)I RAIU measurement in benign thyroid disease and suggests that reducing the administered activity is an option. CT information is particularly important in cases of retrosternal expansion. The results are relevant because (124)I PET/CT allows additional diagnostic means, that is, the possibility of performing fusion imaging with ultrasound. (124)I PET/CT might be an alternative, especially when hybrid (123)I SPECT/CT is not available. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Optical diffraction tomography: accuracy of an off-axis reconstruction
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz
2014-05-01
Optical diffraction tomography is an increasingly popular method that allows reconstruction of the three-dimensional refractive index distribution of semi-transparent samples from multiple measurements of the optical field transmitted through the sample for various illumination directions. The assembly of the angular measurements is usually performed with one of two methods: the filtered backprojection (FBPJ) or the filtered backpropagation (FBPP) tomographic reconstruction algorithm. The former approach, although conceptually very simple, provides an accurate reconstruction for object regions located close to the plane of focus. However, since FBPJ ignores diffraction, its use for spatially extended structures is questionable. According to the theory of scattering, a more precise restoration of a 3D structure should be achieved with the FBPP algorithm, which, unlike the former approach, incorporates diffraction. It is believed that this method allows a high-accuracy reconstruction in a large measurement volume exceeding the depth of focus of the imaging system. However, some studies have suggested that a considerable improvement of the FBPP results can be achieved by first propagating the transmitted fields back to the centre of the object. This supposedly enables a reduction of errors due to the approximate diffraction formulas used in FBPP. In our view this finding casts doubt on the quality of the FBPP reconstruction in regions far from the rotation axis. The objective of this paper is to investigate the limitations of the FBPP algorithm for off-axis reconstruction and compare its performance with the FBPJ approach. Moreover, in this work we propose some modifications to the FBPP algorithm that allow for more precise restoration of the sample structure in off-axis locations. The research is based on extensive numerical simulations supported with the wave-propagation method.
Fast local reconstruction by selective backprojection for low dose in dental computed tomography
NASA Astrophysics Data System (ADS)
Yan, Bin; Deng, Lin; Han, Yu; Zhang, Feng; Wang, Xian-Chao; Li, Lei
2014-10-01
The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce the radiation dose by reconstructing images from truncated data acquired in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other regions by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF algorithm was proposed to improve the calculation efficiency by rearranging the projections. However, it includes a memory-consuming data rebinning process. Accordingly, the selective BPF (S-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection. The finite Hilbert inverse is then applied to each PI-line segment. S-BPF avoids the influence of the variable limit of integration by selective backprojection, without additional time or memory cost. Simulation and real experiments demonstrate the higher reconstruction efficiency of S-BPF.
NASA Astrophysics Data System (ADS)
Qin, W.; Yin, J.; Yao, H.
2013-12-01
On May 24th, 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is a rare mega-earthquake for such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega-earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method, which more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s, with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega-earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China as well as global stations for further analysis, in order to study the rupture process of this deep mega-earthquake more comprehensively. References: [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for the 2011 Tohoku Mw 9.0 earthquake. Geophysical Journal International, 2012, 190(2): 1152-1168. [2] Yao H, Gerstoft P, Shearer P M, et al. Compressive sensing of the Tohoku-Oki Mw 9.0 earthquake: Frequency-dependent rupture modes. Geophysical Research Letters, 2011, 38(20).
A fast rebinning algorithm for 3D positron emission tomography using John's equation
NASA Astrophysics Data System (ADS)
Defrise, Michel; Liu, Xuan
1999-08-01
Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
Finite element method framework for RF-based through-the-wall mapping
NASA Astrophysics Data System (ADS)
Campos, Rafael Saraiva; Lovisolo, Lisandro; de Campos, Marcello Luiz R.
2017-05-01
Radiofrequency (RF) Through-the-Wall Mapping (TWM) employs techniques originally applied in X-ray computerized tomographic imaging to map obstacles behind walls. It aims to provide valuable information for rescue efforts in damaged buildings, as well as for military operations in urban scenarios. This work defines a Finite Element Method (FEM) based framework that allows fast and accurate simulations of the reconstruction of floor blueprints, using Ultra High Frequency (UHF) signals at three different frequencies (500 MHz, 1 GHz and 2 GHz). To the best of our knowledge, this is the first use of FEM in a TWM scenario. The framework allows quick evaluation of different algorithms without the need to assemble a full test setup, which might not be available due to budgetary and time constraints. Using it, the present work evaluates a collection of reconstruction methods (Filtered Backprojection Reconstruction, Direct Fourier Reconstruction, Algebraic Reconstruction and Simultaneous Iterative Reconstruction) under a parallel-beam acquisition geometry for different spatial sampling rates, numbers of projections, antenna gains and operating frequencies. The use of multiple frequencies assesses the trade-off between higher resolution at shorter wavelengths and lower through-the-wall penetration. Considering all the drawbacks associated with such a complex problem, a robust and reliable computational setup based on a flexible method such as FEM can be very useful.
CT reconstruction from portal images acquired during volumetric-modulated arc therapy
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Thomas, M. D. R.; Evans, P. M.; Webb, S.
2010-10-01
Volumetric-modulated arc therapy (VMAT), a form of intensity-modulated arc therapy (IMAT), has become a topic of research and clinical activity in recent years. As a form of arc therapy, portal images acquired during the treatment fraction form a (partial) Radon transform of the patient. We show that these portal images, when used in a modified global cone-beam filtered backprojection (FBP) algorithm, allow a surprisingly recognizable CT-volume to be reconstructed. The possibility of distinguishing anatomy in such VMAT-CT reconstructions suggests that this could prove to be a valuable treatment position-verification tool. Further, some potential for local-tomography techniques to improve image quality is shown.
Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook
2007-03-01
To develop a novel approach for calculating accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image content. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired with a phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the proposed correction method, the intensity variation was reduced from 13.1% with SOS to 6.1%. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied to image reconstruction in parallel imaging.
Rolls, Edmund T
2015-01-01
The recall of information stored in the hippocampus involves a series of corticocortical backprojections via the entorhinal cortex, parahippocampal gyrus, and one or more neocortical stages. Each stage is considered to be a pattern association network, with the retrieval cue at each stage the firing of neurons in the previous stage. The leading factor that determines the capacity of this multistage pattern association backprojection pathway is the number of connections onto any one neuron, which provides a quantitative basis for why there are as many backprojections between adjacent stages in the hierarchy as forward projections. The issue arises of why this multistage backprojection system uses diluted connectivity. One reason is that a multistage backprojection system with expansion of neuron numbers at each stage enables the hippocampus to address during recall the very large numbers of neocortical neurons, which would otherwise require hippocampal neurons to make very large numbers of synapses if they were directly onto neocortical neurons. The second reason is that as shown here, diluted connectivity in the backprojection pathways reduces the probability of more than one connection onto a receiving neuron in the backprojecting pathways, which otherwise reduces the capacity of the system, that is the number of memories that can be recalled from the hippocampus to the neocortex. For similar reasons, diluted connectivity is advantageous in pattern association networks in other brain systems such as the orbitofrontal cortex and amygdala; for related reasons, in autoassociation networks in, for example, the hippocampal CA3 and the neocortex; and for the different reason that diluted connectivity facilitates the operation of competitive networks in forward-connected cortical systems. © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud
2017-11-01
Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, with applications in the material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term, with the total variation and a Besov seminorm as regularization terms. To realize the Besov seminorm, an implementation using the curvelet transform and the sparsity-enforcing L1 norm is proposed. This allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening the possibility of reducing acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken, and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms, both visually and quantitatively, the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were also successfully obtained.
Coe, Ryan L; Seibel, Eric J
2012-12-01
We present a method for modeling image formation in optical projection tomographic microscopy (OPTM) using high numerical aperture (NA) condensers and objectives. Similar to techniques used in computed tomography, OPTM produces three-dimensional, reconstructed images of single cells from two-dimensional projections. The model is capable of simulating axial scanning of a microscope objective to produce projections, which are reconstructed using filtered backprojection. Simulation of optical scattering in transmission optical microscopy is designed to analyze all aspects of OPTM image formation, such as degree of specimen staining, refractive-index matching, and objective scanning. In this preliminary work, a set of simulations is performed to examine the effect of changing the condenser NA, objective scan range, and complex refractive index on the final reconstruction of a microshell with an outer radius of 1.5 μm and an inner radius of 0.9 μm. The model lays the groundwork for optimizing OPTM imaging parameters and triaging efforts to further improve the overall system design. As the model is expanded in the future, it will be used to simulate a more realistic cell, which could lead to even greater impact.
Multistatic synthetic aperture radar image formation.
Krishnan, V; Swoboda, J; Yarman, C E; Yazici, B
2010-05-01
In this paper, we consider a multistatic synthetic aperture radar (SAR) imaging scenario where a swarm of airborne antennas, some transmitting, some receiving, and some doing both, traverse arbitrary flight trajectories and transmit arbitrary waveforms without any form of multiplexing. The received signal at each receiving antenna may suffer interference from the signals scattered due to multiple transmitters and from additive thermal noise at the receiver. In this scenario, standard bistatic SAR image reconstruction algorithms produce artifacts in the reconstructed images due to these interferences. In this paper, we use microlocal analysis in a statistical setting to develop a filtered-backprojection (FBP) type analytic image formation method that suppresses artifacts due to interference while preserving the location and orientation of edges of the scene in the reconstructed image. Our FBP-type algorithm exploits the second-order statistics of the target and noise to suppress the artifacts due to interference in the mean-square sense. We present numerical simulations comparing the performance of our multistatic SAR image formation algorithm with the FBP-type bistatic SAR image reconstruction algorithm. While we mainly focus on radar applications, our image formation method is also applicable to problems arising in fields such as acoustic, geophysical and medical imaging.
NASA Astrophysics Data System (ADS)
Torres-Xirau, I.; Olaciregui-Ruiz, I.; Rozendaal, R. A.; González, P.; Mijnheer, B. J.; Sonke, J.-J.; van der Heide, U. A.; Mans, A.
2017-08-01
In external beam radiotherapy, electronic portal imaging devices (EPIDs) are frequently used for pre-treatment and in vivo dose verification. Currently, various MR-guided radiotherapy systems are being developed and clinically implemented, for which independent dosimetric verification is highly desirable. For this purpose we adapted our EPID-based dose verification system for use with the MR-Linac combination developed by Elekta in cooperation with UMC Utrecht and Philips. In this study we extended our back-projection method to cope with the presence of an extra attenuating medium between the patient and the EPID. Experiments were performed at a conventional linac, using an aluminum mock-up of the MRI scanner housing between the phantom and the EPID. For a 10 cm square field, the attenuation by the mock-up was 72%, while 16% of the remaining EPID signal resulted from scattered radiation. 58 IMRT fields were delivered to a 20 cm slab phantom with and without the mock-up. EPID-reconstructed dose distributions were compared to planned dose distributions using the γ-evaluation method (global, 3%, 3 mm). With our adapted back-projection algorithm the averaged γ_mean was 0.27 ± 0.06, while with the conventional algorithm it was 0.28 ± 0.06. Dose profiles of several square fields reconstructed with our adapted algorithm showed excellent agreement with the TPS.
Gyssels, Elodie; Bohy, Pascale; Cornil, Arnaud; van Muylem, Alain; Howarth, Nigel; Gevenois, Pierre A; Tack, Denis
2016-01-01
The aim of the study was to compare radiation dose and image quality between the "average" and the "very strong" automatic exposure control (AEC) strength curves. Images reconstructed with filtered back-projection techniques and radiation dose data of unenhanced helical chest computed tomography (CT) examinations obtained at 2 hospitals (hospital A, hospital B) using the same scanner devices and acquisition protocols but different AEC strength curves were evaluated over a 3-month period. The selected AEC strength curve applied to "slim" patients (diameter <32 cm estimated from the attenuation automatically measured on the topogram) was "average" in hospital A and "very strong" in hospital B. Two radiologists with 13 and 24 years of experience scored the image quality of the lung parenchyma and the mediastinum on a 5-point scale. The patients' effective diameter, the delivered CT dose index volume, and dose-length products were recorded. A total of 410 patients were included. The average body mass index was 24.0 kg/m² in hospital A and 24.8 kg/m² in hospital B. There was no significant difference between hospitals with respect to age, sex ratio, weight, height, body mass index, effective diameter, or image quality scores for each radiologist (P ranging from 0.050 to 1.000). The mean CT dose index volume for the entire population was 2.0 mGy and was significantly lower in hospital B with the "very strong" AEC curve than in hospital A (-11%, P=0.001). The mean dose-length product delivered in this 70-kg population was 68 mGy·cm, corresponding to an effective dose of 0.95 mSv. Changing the AEC strength curve from "average" to "very strong" for slim patients maintains image quality and reduces the radiation dose to <1 mSv in routine chest CT examinations reconstructed with filtered back-projection techniques.
Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald
2017-08-01
HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging and recently applied to dynamic positron emission tomography imaging, with promising results. In this work, we developed an iterative reconstruction algorithm (HYPR-OSEM) that improves the signal-to-noise ratio (SNR) in static imaging (i.e. single-frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite, which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset, generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions, as well as experimental phantom and patient data, were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post-reconstruction filter. The convergence of contrast recovery coefficients (CRC) for all forms of HYPR-OSEM was slower than for OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast: it achieved better CRC accuracy at equivalent noise level and better precision than OSEM, and generally better accuracy than post-filtered OSEM. In addition, HYPR-AU-OSEM proved the most effective form of HYPR-OSEM in terms of accuracy and precision in the studies conducted in this work.
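A minimal sketch of the HYPR operator described above, applied after an OSEM subset update (the HYPR-AU-OSEM variant); the Gaussian low-pass kernel and the helper names in the trailing comment are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr(image, composite, sigma=3.0, eps=1e-8):
    """HYPR operator: transfer the low-noise structure of the composite while
    keeping the target image's local scaling.
    I_HYPR = composite * F(image) / F(composite), with F a low-pass filter."""
    ratio = gaussian_filter(image, sigma) / (gaussian_filter(composite, sigma) + eps)
    return composite * ratio

# Hypothetical use after each OSEM subset update (HYPR-AU-OSEM flavor):
# x_new = osem_subset_update(x, subset)      # ordinary OSEM step (user-supplied)
# comp  = sum_of_preceding_subset_images     # composite, refreshed every iteration
# x_new = hypr(x_new, comp)                  # de-noise before the next subset
```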
Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images
NASA Astrophysics Data System (ADS)
Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.
2001-05-01
We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel, and the computer replaces the first point with this newly calculated point. A second back-projection is then calculated around the original point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this second reconstruction is determined, and the computer replaces the second point with its XYZ coordinates. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full-width-at-half-maximum algorithm. All subsequent vessels are processed the same way, except that at each point the distances between the current point and all previously determined points along different vessels are computed; if a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc
2007-03-01
Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operation count and the high demand placed on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. IBM, Toshiba and Sony have introduced the Cell Broadband Engine (CBE) processor, originally aimed at the gaming market and often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm implemented on a standard PC and on the Cell processor, and compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of size 512x512x512 voxels: the operation took 3.2 minutes on the PC (single CPU) versus 13.6 seconds on the Cell.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then allocated either by nearest-pixel interpolation or by an overlap method, corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to perform Siddon ray tracing on a grid and to compute maximum likelihood expectation maximization (MLEM) by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
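The Siddon ray tracing mentioned above computes the exact pixel-intersection lengths along a line of response; a simplified 2D version, written as an illustration rather than as the patented method, might look as follows.

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny, pixel=1.0, origin=(0.0, 0.0)):
    """Return (ix, iy, length) for every pixel of an nx-by-ny grid that the
    segment p0 -> p1 crosses (a simplified 2D Siddon-style traversal)."""
    p0 = np.asarray(p0, float); p1 = np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if abs(d[axis]) > 1e-12:
            planes = origin[axis] + pixel * np.arange(n + 1)
            alphas.extend((planes - p0[axis]) / d[axis])  # crossings of grid lines
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))          # sorted parametric values
    ray_len = np.hypot(*d)
    out = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        if a1 - a0 < 1e-12:
            continue
        mid = p0 + 0.5 * (a0 + a1) * d                     # segment midpoint locates the pixel
        ix = int((mid[0] - origin[0]) // pixel)
        iy = int((mid[1] - origin[1]) // pixel)
        if 0 <= ix < nx and 0 <= iy < ny:
            out.append((ix, iy, (a1 - a0) * ray_len))
    return out
```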
NASA Astrophysics Data System (ADS)
Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong
2008-10-01
Computed tomography colonography (CTC) or CT-based virtual colonoscopy (VC) is an emerging tool for detection of colonic polyps. Compared to conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition, and this radiation is a major concern for screening applications of CTC. In this work, we performed a simulation study to demonstrate a possible ultra-low-dose CT technique for VC. The ultra-low-dose abdominal CT images were simulated by adding noise to the sinograms of patient CTC images acquired with normal-dose scans at the 100 mAs level. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm to obtain the ultra-low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally tagged residual stool and fluid. With KL-PWLS noise reduction, the colon lumen could successfully be constructed and colonic polyps detected at an ultra-low-dose level below 50 mAs. Polyps could be detected more easily with KL-PWLS noise reduction than with conventional noise filters such as the Hanning filter. These promising results indicate the feasibility of an ultra-low-dose CTC pipeline for colon screening with less-stressful bowel preparation by fecal tagging with oral contrast.
NASA Astrophysics Data System (ADS)
Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok
2004-10-01
We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single-slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and visually compared images. We found that overall the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of the brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance angle (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection
NASA Astrophysics Data System (ADS)
Brokish, Jeffrey; Sack, Paul; Bresler, Yoram
2010-04-01
In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone-beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone-beam geometry combines the speedup from the algorithmic improvements of the hierarchical backprojection algorithm with the speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and a relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation count demonstrated previously for the FHBP algorithm can be translated into a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers indicate the feasibility of systems based on this technology that achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with a small footprint, high reliability, and affordable cost.
Leng, Shuai; Yu, Lifeng; Wang, Jia; Fletcher, Joel G; Mistretta, Charles A; McCollough, Cynthia H
2011-09-01
Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain to allow flexible selection of the number, width, and location of the energy bins. Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and the resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used: (i) numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro-CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated as appropriate to each CT system. The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after HYPR-LR processing was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual-energy patient data processed using HYPR-LR demonstrated reduced noise in the individual low- and high-energy images, as well as in the material-specific basis images. Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.
A fast CT reconstruction scheme for a general multi-core PC.
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and inflexible, and the graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. The proposed scheme integrates several techniques, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved severalfold using the latest quad-core processors.
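For context, the kernel that such acceleration schemes target is the filtered-backprojection inner loop. A deliberately naive parallel-beam version in NumPy is sketched below; the paper addresses fan-beam data and none of the SIMD, threading, or symmetry optimizations are shown here.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam FBP: ramp filter in Fourier space, then a
    pixel-driven backprojection.  sinogram has shape (n_angles, n_det)."""
    n_ang, n_det = sinogram.shape
    # --- ramp filter each projection ---
    filt = 2.0 * np.abs(np.fft.fftfreq(n_det))
    sino_f = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))
    # --- backprojection: the performance-critical loop ---
    c = n_det // 2
    xs = np.arange(n_det) - c
    X, Y = np.meshgrid(xs, -xs)                 # image grid, y axis pointing up
    det = np.arange(n_det) - c                  # detector coordinates
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(sino_f, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)
        recon += np.interp(t, det, proj, left=0.0, right=0.0)
    return recon * np.pi / (2.0 * n_ang)
```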
Sandell, A; Ohlsson, T; Erlandsson, K; Hellborg, R; Strand, S E
1992-01-01
We have developed a comparatively inexpensive PET system, based on a rotating scanner with two scintillation camera heads and a nearby low-energy electrostatic proton accelerator for production of short-lived radionuclides. Using a 6 MeV proton beam of 5 μA, and by optimizing the target geometry for the 18O(p,n)18F reaction, 750 MBq of 2-18FDG can be obtained. The PET scanner shows a spatial resolution of 6 mm (FWHM) and a sensitivity of 80 s⁻¹ kBq⁻¹ ml⁻¹ (3 kcps/μCi/ml). Various corrections are included in the imaging process to compensate for spatial and temporal response variations in the detector system. Both filtered backprojection and iterative reconstruction methods are employed. Clinical studies have been performed with acquisition times of 30-40 min. The system will be used for clinical experimental research with short- as well as long-lived positron emitters. The possibility of true 3D reconstruction is also under evaluation.
Digital holographic microtomography of fusion spliced optical fibers
NASA Astrophysics Data System (ADS)
Deng, Yating; Xiao, Wen; Ma, Xichao; Pan, Feng
2017-03-01
In this paper, we report three-dimensional (3D) measurements of the structural parameters of fusion-spliced optical fibers using digital holographic microtomography. A holographic setup in microscopy configuration with a sample-fixed, setup-rotating scheme is established. A series of holograms is recorded from various incident angles, and the filtered backprojection algorithm is then applied to reconstruct the 3D refractive index (RI) distributions of the fusion-spliced optical fibers immersed in index-matching liquid. Experimental results exhibit the internal and external shapes of three kinds of fusion splices between different fibers: a single-mode fiber (SMF) and a multimode fiber, an SMF and a panda polarization-maintaining fiber (Panda PMF), and an SMF and a bow-tie polarization-maintaining fiber (Bow-Tie PMF). With 3D maps of RI, it is intuitive to observe internal structural details of fused fibers and evaluate the splicing quality. This paper describes a powerful method for non-invasive microscopic measurement of fiber splicing. Furthermore, it offers the possibility of detecting fiber splicing loss from 3D structures.
NASA Astrophysics Data System (ADS)
Jang, Sunyoung; Jaszczak, R. J.; Tsui, B. M. W.; Metz, C. E.; Gilland, D. R.; Turkington, T. G.; Coleman, R. E.
1998-08-01
The purpose of this work was to evaluate lesion detectability with and without nonuniform attenuation compensation (AC) in myocardial perfusion SPECT imaging in women, using an anthropomorphic phantom and receiver operating characteristic (ROC) methodology. Breast attenuation causes artifacts in reconstructed images and may increase the difficulty of diagnosis in myocardial perfusion imaging in women. The null hypothesis tested in the ROC study was that nonuniform AC does not change lesion detectability in myocardial perfusion SPECT imaging in women. The authors used a filtered backprojection (FBP) reconstruction algorithm and Chang's (1978) single-iteration method for AC. In conclusion, with the authors' proposed myocardial defect model, nuclear medicine physicians demonstrated no significant difference for the detection of the anterior wall defect; however, greater accuracy for the detection of the inferior wall defect was observed without nonuniform AC than with it (P-value=0.0034). Medical physicists did not demonstrate any statistically significant difference in defect detection accuracy with or without nonuniform AC in the female phantom.
The DataCube Server. Animate Agent Project Working Note 2, Version 1.0
1993-11-01
Before this can be called, a histogram of all the needed levels must be made and their one-band images must be made. Note that if a levels backprojection will not be used, then the level does not need to be histogrammed. Any points outside the active region in a levels backprojection will be undefined.
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear dependence of the electric field distribution on its boundary condition, and with the divergence caused by errors from the ill-conditioned sensitivity matrix and by noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method, in which the change in mutual impedance is derived from the sensitivity theorem, together with a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and by the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
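A rough sketch of one linearized inversion step with truncated conjugate gradients, in the spirit of the iteration-limiting idea described above; the normal-equations formulation and the `linearize` helper are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def inverse_step(S, dv, n_cg=10):
    """One linearized inversion step: solve S^T S dsigma = S^T dv by conjugate
    gradients, truncating at n_cg iterations to limit the influence of the
    ill-conditioned part of the sensitivity matrix S."""
    m, n = S.shape
    normal = LinearOperator((n, n), matvec=lambda x: S.T @ (S @ x))
    dsigma, _ = cg(normal, S.T @ dv, maxiter=n_cg)
    return dsigma

# Hypothetical multistep use: re-linearize around the updated conductivity.
# sigma = sigma0
# for _ in range(n_steps):
#     S, dv = linearize(model, sigma, measured)   # user-supplied forward model
#     sigma = sigma + inverse_step(S, dv)
```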
Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.
Yang, Min; Zhang, Wen; Wu, Xiaojun; Wei, Dongtao; Zhao, Yixin; Zhao, Gang; Han, Xu; Zhang, Shunli
2016-01-01
To analyze interior geometry and property changes of a large-sized analog model during loading or injection of another medium (water or oil) in a non-destructive way, a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomographic imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, alternating between positive and negative rotation directions during translation, until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this approach avoids the winding of high-voltage cables and oil-cooling service pipes during rotation, and it simplifies the installation of the high-voltage generator and cooling system. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram that matches the rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data that do not contribute to the image reconstruction. Finally, on the basis of the geometric symmetry of fan-beam CT scanning, a whole sinogram filling the full 360° range is produced and a standard filtered back-projection (FBP) algorithm is used to reconstruct artifact-free images.
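The structural-similarity screening of projections could look roughly like the following sketch, which drops consecutive frames that are nearly identical (e.g., acquired while the scanner reverses direction); the threshold and the details of the criterion are assumptions, not the authors' exact procedure.

```python
import numpy as np
from skimage.metrics import structural_similarity

def keep_valid_projections(frames, angles, ssim_max=0.995):
    """Discard frames acquired while the gantry was effectively stationary:
    consecutive frames that are nearly identical by SSIM are treated as
    duplicates of a single view.
    frames: (n, rows, cols) array; angles: nominal rotation angle per frame."""
    keep = [0]
    rng = float(frames.max() - frames.min())
    for i in range(1, len(frames)):
        s = structural_similarity(frames[keep[-1]], frames[i], data_range=rng)
        if s < ssim_max:               # sufficiently different -> a new view
            keep.append(i)
    return frames[keep], np.asarray(angles)[keep]
```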
NASA Astrophysics Data System (ADS)
Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.
2018-02-01
In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior quality to conventional filtered-backprojection (FBP)-based methods. However, they require an enormous computational cost in the iterative process, which remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme to overcome this difficulty: the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2, while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method is effective for considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI nearly intact. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the unbinned case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal quality index (UQI).
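A back-of-the-envelope sketch of the dual-resolution voxelization bookkeeping; the ROI size is illustrative and the estimate ignores implementation overhead, so the numbers are not expected to match the paper's reported 31.9% memory figure.

```python
import numpy as np

def dual_resolution_count(shape, roi_slices):
    """Count unknowns in a dual-resolution voxelization: voxels inside the ROI
    stay at full resolution, everything else is binned 2x2x2."""
    full = np.prod(shape)
    roi = np.prod([s.stop - s.start for s in roi_slices])
    coarse = (full - roi) / 8.0          # 2x2x2 binning merges 8 voxels into 1
    print(f"unknowns: {int(coarse + roi)} of {int(full)} "
          f"({100.0 * (coarse + roi) / full:.1f}%)")

def bin2(volume):
    """2x2x2 binning of a volume with even dimensions."""
    z, y, x = volume.shape
    return volume.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

# Example: a 512^3 volume with a hypothetical 128^3 ROI keeps ~14% of the unknowns.
dual_resolution_count((512, 512, 512),
                      (slice(192, 320), slice(192, 320), slice(192, 320)))
```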
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both the measurement scheme and the reconstruction method must be considered together. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition is introduced to realize the "compressive sensing" part of the reconstruction, while spatially adaptive filtering, which fully exploits the a priori information of mutually similar blocks in natural images, is introduced to effectively recover the partially unknown coefficients in the transformed domain. As a result, sparse-view PAT images can be reconstructed with higher quality than the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments, exhibiting desirable image fidelity even from a small number of measuring positions.
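For reference, the UBP baseline that the proposed scheme is compared against reduces, in its simplest delay-and-sum form, to the following 2D sketch; the true UBP weighting with the temporal-derivative term and solid-angle factors is omitted here.

```python
import numpy as np

def delay_and_sum_pat(signals, t, sensors, grid_x, grid_y, c=1500.0):
    """Simplified 2D delay-and-sum photoacoustic back-projection.
    signals: (n_sensors, n_t) pressure traces; t: sample times (s);
    sensors: (n_sensors, 2) positions in meters; c: speed of sound (m/s)."""
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros_like(X)
    for s, (sx, sy) in zip(signals, sensors):
        delay = np.hypot(X - sx, Y - sy) / c     # time of flight to each pixel
        image += np.interp(delay, t, s, left=0.0, right=0.0)
    return image / len(sensors)
```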
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium- and low-magnitude local earthquakes. The method is based on an empirical model of ground motion obtained from amplitude data of earthquakes in the area of interest that were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude-distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid point for further analysis, instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network, and one must ensure that no dead traces are involved in the processing. Compared to methods based on the L2 and even the L1 norm, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within the network, made possible by obtaining and storing a back-projection matrix, independent of the registered amplitude, for each seismic station; as a direct consequence, computing time is saved in the calculation of the final back-projected maximum resultant amplitude at every grid point. The capability of the method was demonstrated first on synthetic data, and then on data from 43 local earthquakes of low and medium magnitude (between 1.7 and 4.3) recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria and why it constitutes an area of relatively high earthquake hazard and risk. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
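A minimal sketch of the minimum-pseudo-magnitude grid search described above; the amplitude-distance coefficients `a` and `b` are placeholders for the empirically calibrated relation, not values from the study.

```python
import numpy as np

def min_pseudo_magnitude(peak_vel, stations, grid_x, grid_y, a=1.66, b=-4.3):
    """Back-project each station's peak resultant ground velocity onto a grid
    with a hypothetical amplitude-distance relation
        M = log10(v) + a*log10(d) + b,
    then keep the station-wise MINIMUM at every grid point.
    peak_vel: (n_sta,) maximum resultant velocities; stations: (n_sta, 2) in km."""
    X, Y = np.meshgrid(grid_x, grid_y)
    pseudo = np.empty(X.shape + (len(stations),))
    for i, ((sx, sy), v) in enumerate(zip(stations, peak_vel)):
        d = np.hypot(X - sx, Y - sy) + 1e-3          # epicentral distance, km
        pseudo[..., i] = np.log10(v) + a * np.log10(d) + b
    # The maximum of this map marks the epicenter; its value estimates magnitude.
    return pseudo.min(axis=-1)
```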
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Kasahara, A.; Yagi, Y.
2017-12-01
The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of the projected images and to mitigate the spurious imaging of depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point reflects how much of the observed wavefield was radiated from that point. Since the amplitude of the GF associated with a given slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image is inherently depth dependent. To make a direct comparison between the BP/HBP image and the corresponding slip distribution inferred from a waveform inversion, and to discuss rupture properties along the fault drawn from high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods that image the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile, earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used to explore the rupture properties of earthquakes.
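The proposed HBP normalization might be sketched as follows for a single grid point: each station's cross-correlation with its Green's function is divided by the squared sum of that GF before stacking. The array layouts and the delay convention are assumptions made for illustration.

```python
import numpy as np

def hbp_potency_stack(waveforms, greens, delays):
    """Hybrid back-projection stack at one source grid point, normalized so
    the image is proportional to potency rate rather than raw amplitude.
    waveforms, greens: lists of 1-D arrays (per station);
    delays: non-negative travel-time shifts in samples (per station)."""
    stack = 0.0
    for u, g, k in zip(waveforms, greens, delays):
        xc = np.correlate(u, g, mode="full")[len(g) - 1:]   # lags >= 0
        stack += xc[k] / np.sum(g * g)                       # normalize by GF energy
    return stack / len(waveforms)
```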
The effect of a finite focal spot size on location dependent detectability in a fan beam CT system
NASA Astrophysics Data System (ADS)
Kim, Byeongjoon; Baek, Jongduk
2017-03-01
A finite focal spot size is one of the sources degrading the resolution performance of a fan-beam CT system. In this work, we investigated the effect of the finite focal spot size on signal detectability. For the evaluation, five spherical objects with diameters of 1 mm, 2 mm, 3 mm, 4 mm, and 5 mm were used. The optical focal spot size viewed at the iso-center was 1 mm (height) × 1 mm (width) with a target angle of 7 degrees, corresponding to an 8.21 mm (i.e., 1 mm / sin(7°)) focal spot length. Simulated projection data were acquired using 8 × 8 sourcelets and reconstructed by Hanning-weighted filtered backprojection. For each spherical object, the detectability was calculated at (0 mm, 0 mm) and (0 mm, 200 mm) using two image quality metrics: pixel signal-to-noise ratio (SNR) and detection SNR. For all signal sizes, the pixel SNR is higher at the iso-center, since the noise variance at the off-center location is much higher than at the iso-center due to the backprojection weightings used in direct fan-beam reconstruction. In contrast, the detection SNR shows similar values for the different spherical objects except for those with 1 mm and 2 mm diameters. Overall, the results indicate that the resolution loss caused by the finite focal spot size degrades detection performance, especially for objects smaller than 2 mm in diameter.
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translation speed of the table and the rotation speed of the source-detector assembly remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable table translation speed, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch, and we perform a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Koper, Keith D.; Pankow, Kristine; Ge, Zengxi
2017-05-01
The 13 November 2016 Mw 7.8 Kaikoura, New Zealand, earthquake was investigated using teleseismic P waves. Backprojection of high-frequency P waves from two regional arrays shows unilateral rupture of at least two southwest-northeast striking faults with an average rupture speed of 1.4-1.6 km/s and total duration of 100 s. Guided by these backprojection results, 33 globally distributed low-frequency P waves were inverted for a finite fault model (FFM) of slip. The FFM showed evidence of several subevents; however, it lacked significant moment release near the epicenter, where a large burst of high-frequency energy was observed. A local strong-motion network recorded strong shaking near the epicenter; hence, for this earthquake the distribution of backprojection energy is superior to the FFM as a guide of strong shaking. For future large earthquakes that occur in regions without strong-motion networks, initial shaking estimates could benefit from backprojection constraints.
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-01-01
Computed tomography is a widely used medical imaging technique with high spatial and temporal resolution; its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP). This may allow the sensitivity of CT to be improved; however, the effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles, using a range of tube voltages and currents, and performed reconstruction with FBP, ITER and a novel iterative, model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual-energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492
Electric field tomography for contactless imaging of resistivity in biomedical applications.
Korjenevsky, A V
2004-02-01
A technique for contactless imaging of the resistivity distribution inside conductive objects, applicable in medical diagnostics, is suggested and analyzed. The method exploits the interaction of a high-frequency electric field with a conductive medium. Unlike electrical impedance tomography, no electric current is injected into the medium from outside. The interaction is accompanied by the excitation of high-frequency currents and a redistribution of free charges inside the medium, leading to strong and irregular perturbation of the field's magnitude outside and inside the object. This interaction also produces small, regular phase shifts of the field in the area surrounding the object. Measuring these phase shifts with a set of electrodes placed around the object enables reconstruction of the internal structure of the medium. The basics of this technique, which we name electric field tomography (EFT), are described, simple analytical estimates are made, and requirements for the measuring equipment are formulated. The realizability of the technique is verified by numerical simulations based on the finite element method. The simulation results confirm the initial estimates and show that, in the case of EFT, even a comparatively simple filtered backprojection algorithm can be used to reconstruct the static resistivity distribution in biological tissues.
Performance Enhancement of the RatCAP Awake Rat Brain PET System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaska, P.; Woody, C.
The first full prototype of the RatCAP PET system, designed to image the brain of a rat while conscious, has been completed. Initial results demonstrated excellent spatial resolution, 1.8 mm FWHM with filtered backprojection and <1.5 mm FWHM with a Monte Carlo based MLEM method. However, noise-equivalent count rate studies indicated the need for better timing to mitigate the effect of randoms. Thus, the front-end ASIC has been redesigned to minimize time walk, an accurate coincidence time alignment method has been implemented, and a variance reduction technique for the randoms is being developed. To maximize the quantitative capabilities required for neuroscience, corrections are being implemented and validated for positron range and photon noncollinearity, scatter (including outside the field of view), attenuation, randoms, and detector efficiency (deadtime is negligible). In addition, a more robust and compact PCI-based optical data acquisition system has been built to replace the original VME-based system while retaining the linux-based data processing and image reconstruction codes. Finally, a number of new animal imaging experiments have been carried out to demonstrate the performance of the RatCAP in real imaging situations, including an F-18 fluoride bone scan, a C-11 raclopride scan, and a dynamic C-11 methamphetamine scan.
Synthetic aperture tomographic phase microscopy for 3D imaging of live cells in translational motion
Lue, Niyom; Choi, Wonshik; Popescu, Gabriel; Badizadegan, Kamran; Dasari, Ramachandra R.; Feld, Michael S.
2009-01-01
We present a technique for 3D imaging of live cells in translational motion without the need for axial scanning of the objective lens. A set of transmitted electric field images of cells at successive points of transverse translation is taken with focused-beam illumination. Based on Huygens' principle, angular plane waves are synthesized from the E-field images of the focused beam. For this set of synthesized angular plane waves, we apply a filtered back-projection algorithm and obtain 3D maps of the refractive index of live cells. This technique, which we refer to as synthetic aperture tomographic phase microscopy, can potentially be combined with flow cytometry or microfluidic devices, and will enable high-throughput acquisition of quantitative refractive index data from large numbers of cells. PMID:18825263
3D Compton scattering imaging and contour reconstruction for a class of Radon transforms
NASA Astrophysics Data System (ADS)
Rigaud, Gaël; Hahn, Bernadette N.
2018-07-01
Compton scattering imaging is a nascent concept arising from the recent development of highly sensitive energy-resolving detectors; it exploits the scattered radiation to image the electron density of the studied medium. Such detectors are able to collect incoming photons resolved in energy. This paper introduces potential 3D modalities in Compton scattering imaging (CSI). The associated measured data are modeled using a class of generalized Radon transforms. The study of this class of operators leads to a filtered back-projection type algorithm that preserves the contours of the sought-for function and offers a fast approach to partially solving the associated inverse problems. Simulation results including Poisson noise demonstrate the potential of this new imaging concept as well as of the proposed image reconstruction approach.
A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.
Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz
2012-09-10
Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries: parallel-beam, fan-beam and cone-beam. Because of its potential for compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Owing to the differential nature of phase-contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can handle several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.
NASA Astrophysics Data System (ADS)
Nakatani, Y.; Mochizuki, K.; Shinohara, M.; Yamada, T.; Hino, R.; Ito, Y.; Murai, Y.; Sato, T.
2013-12-01
A subducting seamount with a height of about 3 km was revealed off Ibaraki in the Japan Trench by a seismic survey (Mochizuki et al., 2008). Mochizuki et al. (2008) also interpreted interplate coupling as weak over the seamount, because seismicity was low and the slip of the recent large earthquake did not propagate over it. For further investigation, we deployed a dense ocean-bottom seismometer (OBS) array around the seamount for about a year. During the observation period, seismicity off Ibaraki was activated by the occurrence of the 2011 Tohoku earthquake. The southern edge of the mainshock rupture area is considered by many source analyses to be located off Ibaraki. Moreover, Kubo et al. (2013) propose that the seamount played an important role in terminating the rupture of the largest aftershock. Therefore, in this study, we examine the spatiotemporal variation of seismicity around the seamount before and after the Mw 9.0 event, as a first step toward elucidating the relationship between the subducting seamount and seismogenic behavior. We used velocity waveforms from 1 Hz long-term OBSs densely deployed at station intervals of about 6 km; the sampling rate is 200 Hz and the observation period is from October 16, 2010 to September 19, 2011. Because of ambient noise and the effects of thick seafloor sediments, it is difficult to apply detection methods developed for on-land observational data to OBS data and to process continuous waveforms automatically. We therefore apply a back-projection method (e.g., Kiser and Ishii, 2012) to the OBS waveform data, estimating energy-release sources by stacking waveforms. Among the many back-projection variants, we adopt a semblance analysis (e.g., Honda et al., 2008), which can detect feeble waves. First, we constructed a 3-D velocity structure model off Ibaraki by compiling the results of marine seismic surveys (e.g., Nakahigashi et al., 2012). Then we divided the target area into small subareas and calculated P-wave traveltimes between each station and all subareas by the fast marching method (Rawlinson et al., 2006). After constructing the theoretical travel-time tables, we applied a suitable frequency filter to the observed waveforms and estimated the seismic energy release by projecting semblance values. With this method, we could successfully detect magnitude 2-3 earthquakes.
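The semblance used for detection is the ratio of stacked-trace energy to total trace energy within a time window; a compact sketch, with the delays assumed to be precomputed from the 3-D travel-time tables, is given below.

```python
import numpy as np

def semblance(traces, delays, i0, win):
    """Semblance of delayed traces in a window of `win` samples starting at i0.
    traces: (n_sta, n_t) array; delays: per-station travel-time shifts in
    samples (assumed non-negative here)."""
    n_sta = len(traces)
    seg = np.array([tr[i0 + d: i0 + d + win] for tr, d in zip(traces, delays)])
    num = np.sum(np.sum(seg, axis=0) ** 2)       # energy of the stacked trace
    den = n_sta * np.sum(seg ** 2)               # total energy of all traces
    return num / (den + 1e-20)                   # 1 = perfectly coherent arrival
```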
An iterative reconstruction method for high-pitch helical luggage CT
NASA Astrophysics Data System (ADS)
Xue, Hui; Zhang, Li; Chen, Zhiqiang; Jin, Xin
2012-10-01
X-ray luggage CT is widely used in airports and railway stations for detecting contraband and dangerous goods that may pose a threat to public safety, and it plays an important role in homeland security. An X-ray luggage CT typically uses a helical trajectory with a high pitch to achieve a high passing speed of the luggage. The disadvantage of a high pitch is that conventional filtered back-projection (FBP) requires a very large slice thickness, leading to poor axial resolution and helical artifacts. Especially when severe data inconsistencies are present in the z-direction, as at the ends of a scanned object, the partial volume effect leads to inaccurate values and may cause misidentification. In this paper, an iterative reconstruction method is developed to improve image quality and accuracy for a large-spacing multi-detector high-pitch helical luggage CT system. In this method, the slice thickness is set much smaller than the pitch, so that each slice involves projection data collected over a rather small angular range, constituting an ill-conditioned limited-angle problem. First, a low-resolution reconstruction is employed to obtain images that serve as prior images in the subsequent processing; then iterative reconstruction is performed to obtain high-resolution images. This method enables a high volume coverage speed and a thin reconstruction slice for helical luggage CT. We validate the method with data collected on a commercial X-ray luggage CT system.
Zha, Kan; Busch, Stephen; Park, Cheolwoong; ...
2016-06-24
In-cylinder flow measurements are necessary to gain a fundamental understanding of swirl-supported, light-duty Diesel engine processes for high thermal efficiency and low emissions. Planar particle image velocimetry (PIV) can be used for non-intrusive, in situ measurement of swirl-plane velocity fields through a transparent piston. In order to keep the flow unchanged from all-metal engine operation, the geometry of the transparent piston must adapt the production-intent metal piston geometry. As a result, a temporally and spatially variant optical distortion is introduced into the particle images. Here, to ensure reliable measurement of particle displacements, this work documents a systematic exploration of optical distortion quantification and a hybrid back-projection procedure that combines ray-tracing-based geometric and in situ manual back-projection approaches.
A distance-driven deconvolution method for CT image-resolution improvement
NASA Astrophysics Data System (ADS)
Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon
2016-12-01
The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric optics model, which provides an approximate blurring PSF (point spread function) kernel that varies with the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve first increases; it then begins to decrease while still showing a larger MTF than normal FBP (filtered backprojection). The case of five subbands appears to offer a balanced trade-off between MTF boost and overshoot minimization. As the number of subbands increases, the noise (standard deviation) also tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal-spot wobbling. The proposed algorithm shows promising results in improving spatial resolution while avoiding an excessive noise boost.
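A hedged sketch of the banded, distance-driven deconvolution idea: the field of view is split into radial bands around the source position and each band is deconvolved with its own kernel. Here a Gaussian PSF and FFT-domain Wiener deconvolution stand in for the geometry-optics PSF model of the paper.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian PSF, normalized to unit sum."""
    y, x = np.indices(shape)
    g = np.exp(-(((x - shape[1] // 2) ** 2 + (y - shape[0] // 2) ** 2)
                 / (2.0 * sigma ** 2)))
    return g / g.sum()

def wiener_deconv(img, psf, nsr=0.01):
    """FFT-domain Wiener deconvolution with a constant noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

def banded_deconv(img, source_xy, sigmas, nsr=0.01):
    """Distance-driven deconvolution: split the FOV into radial bands around
    the source position, one PSF width (sigma) per band."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - source_xy[0], y - source_xy[1])
    edges = np.linspace(r.min(), r.max() + 1e-6, len(sigmas) + 1)
    out = np.zeros_like(img, dtype=float)
    for k, sigma in enumerate(sigmas):
        mask = (r >= edges[k]) & (r < edges[k + 1])
        dec = wiener_deconv(img, gaussian_psf(img.shape, sigma), nsr)
        out[mask] = dec[mask]        # keep the band matched to this PSF
    return out
```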
High-spatial-resolution nanoparticle x-ray fluorescence tomography
NASA Astrophysics Data System (ADS)
Larsson, Jakob C.; Vâgberg, William; Vogt, Carmen; Lundström, Ulf; Larsson, Daniel H.; Hertz, Hans M.
2016-03-01
X-ray fluorescence tomography (XFCT) has potential for high-resolution 3D molecular x-ray bio-imaging. In this technique the fluorescence signal from targeted nanoparticles (NPs) is measured, providing information about the spatial distribution and concentration of the NPs inside the object. However, present laboratory XFCT systems typically have limited spatial resolution (>1 mm) and suffer from long scan times and high radiation dose even at high NP concentrations, mainly due to low efficiency and poor signal-to-noise ratio. We have developed a laboratory XFCT system with high spatial resolution (sub-100 μm), low NP concentration and vastly decreased scan times and dose, opening up the possibilities for in-vivo small-animal imaging research. The system consists of a high-brightness liquid-metal-jet microfocus x-ray source, x-ray focusing optics and an energy-resolving photon-counting detector. By using the source's characteristic 24 keV line-emission together with carefully matched molybdenum nanoparticles the Compton background is greatly reduced, increasing the SNR. Each measurement provides information about the spatial distribution and concentration of the Mo nanoparticles. A filtered back-projection method is used to produce the final XFCT image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. PICTGV exploits structure correlations among images in the energy domain, using a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on first-order image derivatives, higher-order derivatives of the images are incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge-detail preservation. Compared to the TGV-based method without a prior image, the relative root-mean-square error of the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
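Read loosely, the objective the abstract describes can be written as a data-fidelity term balanced against TGV penalties tied to the broad-spectrum prior; the split into two weighted terms below is an assumed form for illustration, not the authors' exact formulation:

```latex
\hat{x}_k = \arg\min_{x_k}\; \tfrac{1}{2}\,\lVert A x_k - y_k \rVert_2^2
  \;+\; \lambda_1\,\mathrm{TGV}(x_k - x_{\mathrm{prior}})
  \;+\; \lambda_2\,\mathrm{TGV}(x_k),
```

where A is the system matrix, y_k the projection data of energy bin k, and x_prior the broad-spectrum image reconstructed from all detected photons.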
Characterization of photon-counting multislit breast tomosynthesis.
Berggren, Karl; Cederström, Björn; Lundqvist, Mats; Fredenberg, Erik
2018-02-01
It has been shown that breast tomosynthesis may improve sensitivity and specificity compared to two-dimensional mammography, resulting in an increased detection rate of cancers or lowered call-back rates. The purpose of this study is to characterize a spectral photon-counting multislit breast tomosynthesis system that performs single-scan spectral imaging with multiple collimated x-ray beams. The system differs in many aspects from conventional tomosynthesis using energy-integrating flat-panel detectors. The investigated system was a prototype consisting of a dual-threshold photon-counting detector with 21 collimated line detectors scanning across the compressed breast. The system is reviewed in terms of its detector, acquisition geometry, and reconstruction methods. Three reconstruction methods were used: simple back-projection, filtered back-projection, and an iterative algebraic reconstruction technique. Image quality was evaluated by measuring the modulation transfer function (MTF), normalized noise-power spectrum, detective quantum efficiency (DQE), and artifact spread function (ASF) on reconstructed spectral tomosynthesis images for a total-energy bin (defined by a low-energy threshold calibrated to remove electronic noise) and for a high-energy bin (with a threshold calibrated to split the spectrum in roughly equal parts). Acquisition was performed using a 29 kVp W/Al x-ray spectrum at a 0.24 mGy exposure. The difference in MTF between the two energy bins was negligible, that is, there was no energy dependence of resolution. The MTF dropped to 50% at 1.5 lp/mm to 2.3 lp/mm in the scan direction and 2.4 lp/mm to 3.3 lp/mm in the slit direction, depending on the reconstruction method. The full width at half maximum of the ASF was found to range from 13.8 mm to 18.0 mm for the different reconstruction methods. The zero-frequency DQE of the system was found to be 0.72. The fraction of counts in the high-energy bin was measured to be 59% of the total detected spectrum. Scan times ranged from 4 s to 16.5 s depending on voltage and current settings. The characterized system generates spectral tomosynthesis images with a dual-energy photon-counting detector. Measurements show a high DQE, enabling high image quality at a low dose, which is beneficial for low-dose applications such as screening. The single-scan spectral images open up possibilities for applications such as quantitative material decomposition and contrast-enhanced tomosynthesis. © 2017 American Association of Physicists in Medicine.
Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M
2014-01-01
To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to the radionuclide photon spectra, with the largest differences for (103)Pd seeds and smallest but still considerable differences for (131)Cs seeds. Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; Kang, S; Kim, T
2014-06-01
Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve image quality. Methods: The projection data were acquired by simulating a cone-beam computed tomography system mounted on a linear accelerator, using Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS reconstruction using the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than the existing filtered-backprojection method. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP).
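A minimal alternating SART/TV loop in Python, under stated assumptions: the system matrix A is dense for clarity, the TV step is plain gradient descent with periodic boundaries, and all step sizes are illustrative; production 4D DTS codes use sparse projectors and tuned schedules.

```python
import numpy as np

def sart_update(x, A, b, relax=0.5):
    """One SART sweep: simultaneous update normalized by row/column sums."""
    row_sum = A.sum(axis=1) + 1e-12
    col_sum = A.sum(axis=0) + 1e-12
    return x + relax * (A.T @ ((b - A @ x) / row_sum)) / col_sum

def tv_gradient(img, eps=1e-8):
    """Gradient of isotropic total variation (periodic boundaries)."""
    gx = np.roll(img, -1, axis=0) - img
    gy = np.roll(img, -1, axis=1) - img
    norm = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / norm, gy / norm
    return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

def sart_tv(A, b, shape, n_iter=20, tv_steps=10, tv_step_size=0.01):
    """Alternate data-fidelity (SART) and regularization (TV descent)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = sart_update(x, A, b)
        img = x.reshape(shape)
        for _ in range(tv_steps):
            img = img - tv_step_size * tv_gradient(img)
        x = img.ravel()
    return x.reshape(shape)
```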
NASA Astrophysics Data System (ADS)
Sasaki, Yoshiaki; Emori, Ryota; Inage, Hiroki; Goto, Masaki; Takahashi, Ryo; Yuasa, Tetsuya; Taniguchi, Hiroshi; Devaraj, Balasigamani; Akatsuka, Takao
2004-05-01
The heterodyne detection technique, on which the coherent detection imaging (CDI) method is founded, can discriminate and select the very weak, highly directional, forward-scattered, coherence-retaining photons that emerge from scattering media despite their complex and highly scattering nature. This property enables us to reconstruct tomographic images using the same reconstruction technique as X-ray CT, i.e., the filtered backprojection method. Our group has so far developed a transillumination laser CT imaging method based on the CDI method in the visible and near-infrared regions and reconstruction from projections, and has reported a variety of in vitro and in vivo tomographic images of biological objects to demonstrate its effectiveness for biomedical use. Since the previous system was not optimized, it took several hours to obtain a single image. For practical use, we developed a prototype CDI-based imaging system using a parallel fiber array and optical switches to reduce the measurement time significantly. Here, we describe a prototype fiber-optic transillumination laser CT imaging system based on optical heterodyne detection for early diagnosis of rheumatoid arthritis (RA), demonstrating tomographic imaging of an acrylic phantom as well as the fundamental imaging properties. We expect that further refinements of the fiber-optic-based laser CT imaging system could lead to a novel and practical diagnostic tool for rheumatoid arthritis and other joint- and bone-related diseases in the human finger.
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. Simulation results show the great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge, so optimization is needed to find the algorithm that best exploits the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image-quality merit parameters such as the bias, the variance, and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics.
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme that requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels; this similarity is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image is reconstructed from the reduced projections by an iterative algorithm that updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on TVR in reconstruction accuracy for a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans in spatial resolution, with a frequency at the 10% modulation transfer function value that is higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
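In the notation suggested by the abstract, with B the bilateral-filter similarity matrix built from the full-scan image, the sparse-view image is found in an assumed constrained form (sketched here rather than quoted from the paper) as:

```latex
\hat{x}_2 = \arg\min_{x_2}\; \mathrm{TV}\!\left(x_2 - B\,x_2\right)
\quad \text{subject to} \quad \lVert A x_2 - y_2 \rVert_2^2 \le \varepsilon,
```

where A is the forward projector and y_2 the sparse-view data; the rows of B carry the structural redundancy between the two DECT images.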
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-01-01
Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed in a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: The speedup in reconstruction time was found to be roughly linear with the number of nodes employed. For instance, a greater than 10-fold speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842
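A toy 2D parallel-beam illustration of the Map/Reduce split described above; filter_and_backproject plays the Map role on a projection subset and summation plays the Reduce role. The grid size, the subset format, and the use of scipy's rotate for backprojection are assumptions of the sketch, and the FDK cone-beam weighting and Hadoop job plumbing are omitted.

```python
import numpy as np
from functools import reduce
from scipy.ndimage import rotate

N = 128  # reconstruction grid size (assumption for this sketch)

def filter_and_backproject(subset):
    """Map task: ramp-filter each parallel-beam projection in the subset
    and smear it back across the grid at its acquisition angle."""
    partial = np.zeros((N, N))
    for angle_deg, proj in subset:          # subset: [(angle, 1D array), ...]
        ramp = np.abs(np.fft.fftfreq(proj.size))
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        smear = np.tile(filtered, (N, 1))   # constant along each ray
        partial += rotate(smear, angle_deg, reshape=False, order=1)
    return partial

def reduce_volumes(v1, v2):
    """Reduce task: aggregate partial backprojections into one volume."""
    return v1 + v2

# Split the projections into subsets, map them (in parallel on a cluster,
# serially here), then reduce:
# image = reduce(reduce_volumes, map(filter_and_backproject, subsets))
```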
Navigating Earthquake Physics with High-Resolution Array Back-Projection
NASA Astrophysics Data System (ADS)
Meng, Lingsen
Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher-frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method has not yet been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow imaging of recent large earthquakes in finer detail, giving rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The high-frequency subevents are located at the edges of geodetic slip regions, correlated with the stopping phases associated with rupture-speed reduction as the earthquake arrests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M
Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels, and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm{sup 3}) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10{sup −4} cm{sup 2} and the effective linear attenuation coefficient is µAC = 0.06084 cm{sup −1}. Of the evaluated points, 95% had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.
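The ΔD = 3%, Δd = 3 mm criterion above can be illustrated with a brute-force global 2D gamma computation in the style of Low et al.; this sketch assumes a uniform pixel pitch and wrap-around shifts at the borders, and is not the evaluation code used in the study.

```python
import numpy as np

def gamma_map(dose_eval, dose_ref, pixel_mm, dd=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma: for every reference point, search the
    shifted evaluated dose out to twice the distance-to-agreement (a common
    truncation) and keep the minimum gamma value. np.roll wraps at edges,
    which is acceptable away from the image borders."""
    norm = dd * dose_ref.max()                    # global dose criterion
    r = int(np.ceil(2 * dta_mm / pixel_mm))       # search radius in pixels
    gamma = np.full(dose_ref.shape, np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            dist2 = (dy * dy + dx * dx) * pixel_mm**2
            if dist2 > (2 * dta_mm) ** 2:
                continue
            shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
            g = np.sqrt((shifted - dose_ref) ** 2 / norm**2
                        + dist2 / dta_mm**2)
            gamma = np.minimum(gamma, g)
    return gamma

# pass_rate = np.mean(gamma_map(recon, tps, pixel_mm=1.0) <= 1.0)
```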
Back-Projection Cortical Potential Imaging: Theory and Results.
Haor, Dror; Shavit, Reuven; Shapiro, Moshe; Geva, Amir B
2017-07-01
Electroencephalography (EEG) is the single brain monitoring technique that is non-invasive, portable, and passive, exhibits high temporal resolution, and gives a direct measurement of the scalp electrical potential. A major disadvantage of the EEG is its low spatial resolution, which is the result of the low-conductive skull that "smears" the currents coming from within the brain. Recording brain activity with both high temporal and spatial resolution is crucial for the localization of confined brain activations and the study of brain mechanism functionality, which is then followed by diagnosis of brain-related diseases. In this paper, a new cortical potential imaging (CPI) method is presented. The new method gives an estimation of the electrical activity on the cortex surface and thus removes the "smearing effect" caused by the skull. The scalp potentials are back-projected (BP-CPI) onto the cortex surface by building a well-posed problem for the Laplace equation, which is solved by means of the finite elements method on a realistic head model. A unique solution to the CPI problem is obtained by introducing a cortical normal current estimation technique. The technique is based on the same mechanism used in the well-known surface Laplacian calculation, followed by a scalp-cortex back-projection routine. The BP-CPI passed four stages of validation, including validation on spherical and realistic head models, probabilistic analysis (Monte Carlo simulation), and noise sensitivity tests. In addition, the BP-CPI was compared with the minimum norm estimate CPI approach and found superior for multi-source cortical potential distributions, with very good estimation results (CC > 0.97) on a realistic head model in the regions of interest for two representative cases. The BP-CPI can be easily incorporated in different monitoring tools and help researchers by maintaining an accurate estimation of the cortical potential of ongoing or event-related potentials in order to draw better neurological inferences from the EEG.
Penalized weighted least-squares approach for low-dose x-ray computed tomography
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
The noise of low-dose computed tomography (CT) sinograms follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation-coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on these observations, the Karhunen-Loève (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm that uses Gauss-Seidel iteration to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm that uses iterated conditional mode calculations to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in low-contrast environments. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
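A small numpy sketch of the KL de-correlation step, assuming the sinogram is arranged as views × detector bins; the per-component PWLS smoothing itself is omitted. The paper operates on groups of neighboring views, whereas this sketch de-correlates across all views for brevity.

```python
import numpy as np

def kl_decorrelate(sinogram):
    """De-correlate the signal among views: eigenvectors of the inter-view
    covariance define the KL basis; each row of `components` is one KL
    component of the sinogram."""
    centered = sinogram - sinogram.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / sinogram.shape[1]   # views x views
    _, eigvecs = np.linalg.eigh(cov)                  # KL basis
    components = eigvecs.T @ sinogram
    return components, eigvecs

# After PWLS smoothing of each KL component:
# restored_sinogram = eigvecs @ smoothed_components
```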
NASA Astrophysics Data System (ADS)
Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.
2016-03-01
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast of lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that the use of statistically based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using the Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both the OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map, but with the use of an a priori calculated average image derived from an ensemble of normal cases.
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
Cone-beam reconstruction theory was proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Grangeat in 1990. The Fourier slice theorem, proposed by Bracewell in 1956, leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the Fourier slice theory of fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in a short 1995 conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome: the value at the origin of Fourier space is of the form 0/0, and this type of limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is considered, the number of calculations for the generalized Fourier slice theorem algorithm is
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: Mechanical imperfections or misalignment of X-ray phase contrast imaging (XPCI) components cause projection data to be misplaced, which results in blurred computed tomography (CT) slice images or edge artifacts. Features of the biological microstructures under investigation are thereby destroyed unexpectedly, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem, using a two-step scheme that performs a composite geometric transformation and then follows with a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
Computed tomography of x-ray images using neural networks
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.
2000-03-01
Traditional CT reconstruction is done using the technique of filtered backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw: the Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, with an error equal to 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network that emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal, which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values, and the process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production use of this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
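The loop described above (emulate, compare, feed back the error, repeat until the error stops decreasing) has the structure of a gradient-feedback iteration; here is a minimal sketch with an assumed linear forward model A standing in for the emulated physical process, not the authors' neural network implementation.

```python
import numpy as np

def feedback_reconstruct(A, sensed, n_max=500, tol=1e-6):
    """Update erroneous internal values from the error between the emulated
    signal (A @ x) and the externally sensed data, stopping when the error
    signal no longer decreases."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    prev_err = np.inf
    for _ in range(n_max):
        error = sensed - A @ x               # error signal
        x = x + step * (A.T @ error)         # feedback update
        err = float(np.linalg.norm(error))
        if prev_err - err < tol:
            break
        prev_err = err
    return x
```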
Magnetic Resonance Imaging of Solids Using Oscillating Field Gradients
NASA Astrophysics Data System (ADS)
Daud, Yaacob Mat
1992-01-01
Available from UMI in association with The British Library. A fully automatic solid-state NMR imaging spectrometer is described. Use has been made of oscillating field gradients to frequency- and phase-encode the spatial localisation of the nuclear spins. The RF pulse is applied during the zero crossing of the field gradient, so only low RF power is needed to cover the narrow spectral width of the spins. The oscillating field-gradient coils were operated on resonance, hence large gradient strengths could be applied (up to 200 G/cm). Two image reconstruction methods were used: filtered back-projection and two-dimensional Fourier transformation. The use of phase encoding, both with oscillating and with pulsed field gradients, enabled us to acquire the data while the gradients were off, and this method proved to be insensitive to eddy currents. It also allowed the use of a narrow-bandwidth receiver, thus improving the signal-to-noise ratio. The maximum entropy method was used in an effort to remove data-truncation effects, although the results were not too convincing. The application of these new imaging schemes was tested by mapping the T1 and T2 of polymers. The calculated relaxation maps produced precise spatial information about T1 and T2 that is not possible to achieve by conventional relaxation-weighted mapping. In a second application, the diffusion of water vapour into dried zeolite powder was studied. We found that the diffusion process is not Fickian.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Q; Han, H; Xing, L
Purpose: Dictionary-learning-based methods have attracted more and more attention in low-dose CT due to their superior performance in suppressing noise and preserving structural details. Considering that the structures and noise vary from region to region in one imaging object, we propose a region-specific dictionary learning method to improve low-dose CT reconstruction. Methods: A set of normal-dose images was used for dictionary learning. Segmentations were performed on these images so that training patch sets corresponding to different regions could be extracted. After that, region-specific dictionaries were learned from these training sets. For the low-dose CT reconstruction, a conventional reconstruction, such as filtered back-projection (FBP), was performed first, and then segmentation followed to divide the image into different regions. Sparsity constraints for each region, based on its dictionary, were used as regularization terms. The regularization parameters were selected adaptively according to the different regions. A low-dose human thorax dataset was used to evaluate the proposed method. The single-dictionary-based method was performed for comparison. Results: Since the lung region is very different from the rest of the thorax, two dictionaries, corresponding to the lung region and the rest of the thorax respectively, were learned to better express the structural details and avoid artifacts. With only one dictionary, some artifacts appeared in the body region, caused by the spot atoms corresponding to structures in the lung region, and some structures in the lung region could not be recovered well. The quantitative indices of the results from the proposed method were also slightly improved compared to the single-dictionary-based method. Conclusion: A region-specific dictionary makes the dictionary more adaptive to regional characteristics, which is desirable for enhancing the performance of dictionary-learning-based methods.
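A compact sketch of the region-specific learning stage using scikit-learn, with a boolean lung mask standing in for the segmentation; the patch size, atom count, and the patch-selection rule (patches mostly inside a region) are assumptions for illustration, and the reconstruction-side sparsity regularization is omitted.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_region_dictionaries(train_img, lung_mask, patch=(8, 8), atoms=64):
    """Learn one dictionary per region from a normal-dose training image."""
    dicts = {}
    for name, mask in (("lung", lung_mask), ("body", ~lung_mask)):
        # identical random_state ensures image and mask patches align
        p_img = extract_patches_2d(train_img, patch, max_patches=5000,
                                   random_state=0)
        p_msk = extract_patches_2d(mask.astype(float), patch,
                                   max_patches=5000, random_state=0)
        keep = p_msk.mean(axis=(1, 2)) > 0.9   # patches mostly in region
        data = p_img[keep].reshape(int(keep.sum()), -1)
        data = data - data.mean(axis=1, keepdims=True)
        dicts[name] = MiniBatchDictionaryLearning(
            n_components=atoms, alpha=1.0, random_state=0).fit(data)
    return dicts
```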
Deformable registration of CT and cone-beam CT with local intensity matching.
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-07
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
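A schematic of the alternating correct-then-register loop, with scikit-image histogram matching standing in for the local intensity correction and SimpleITK demons standing in for the GPU registration; both libraries and all iteration counts are stand-ins, and the paper's local (per-neighborhood) histogram matching is simplified to per-slice matching.

```python
import numpy as np
import SimpleITK as sitk
from skimage.exposure import match_histograms

def correct_cbct(cbct, ct):
    """Match CBCT intensities to CT slice by slice along the axial axis."""
    out = np.empty_like(cbct)
    for z in range(cbct.shape[0]):
        out[z] = match_histograms(cbct[z], ct[z])
    return out

def register_ct_cbct(ct_arr, cbct_arr, n_outer=5):
    """Alternate intensity correction and demons registration until the
    warped CBCT stabilizes."""
    fixed = sitk.GetImageFromArray(ct_arr.astype(np.float32))
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    moving_arr = cbct_arr.astype(np.float32)
    for _ in range(n_outer):
        corrected = correct_cbct(moving_arr, ct_arr).astype(np.float32)
        moving = sitk.GetImageFromArray(corrected)
        field = demons.Execute(fixed, moving)          # displacement field
        moving_arr = sitk.GetArrayFromImage(sitk.Warp(moving, field))
    return moving_arr, field
```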
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M
2014-01-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV update in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
NASA Astrophysics Data System (ADS)
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing the sinogram discontinuities that are shown to be the source of the streaks. The 'reduction to limited angle' (RLA) method simply keeps only non-truncated projections; the 'detector-directed smoothing' (DDS) method smooths the discontinuities; while the 'reflexive boundary condition' (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features under both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
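A small sketch of the RBC fill on one sinogram row, assuming the valid (non-occluded) detector interval is known: the valid segment is mirror-extended into the truncated margins, which makes the extension approximately flat at the discontinuity before the FBP ramp filter is applied. The mask format and the requirement that margins be narrower than the valid segment are assumptions of this sketch.

```python
import numpy as np

def rbc_fill_row(row, valid):
    """Mirror-extend the valid segment of one sinogram row into the
    truncated margins, enforcing a near-zero derivative at the edges."""
    idx = np.flatnonzero(valid)
    lo, hi = idx[0], idx[-1]
    seg = row[lo:hi + 1]
    # np.pad 'reflect' requires each margin to be shorter than len(seg)
    return np.pad(seg, (lo, row.size - 1 - hi), mode="reflect")

def rbc_fill_sinogram(sino, valid_mask):
    """Apply the fill row by row; fully valid rows pass through unchanged."""
    out = sino.astype(float).copy()
    for i in range(sino.shape[0]):
        if not valid_mask[i].all():
            out[i] = rbc_fill_row(sino[i], valid_mask[i])
    return out
```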
Metal artifact reduction in tomosynthesis imaging
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxia; Yan, Ming; Tao, Kun; Xuan, Xiao; Sabol, John M.; Lai, Hao
2015-03-01
The utility of digital tomosynthesis has been shown for many clinical scenarios, including post orthopedic surgery applications. However, two kinds of metal artifacts can influence diagnosis: undershooting and ripple. In this paper, we describe a novel metal artifact reduction (MAR) algorithm to reduce both of these artifacts within the filtered backprojection framework. First, metal areas that are prone to cause artifacts are identified in the raw projection images. These areas are filled with values similar to those in the local neighborhood. During the filtering step, the filled projection is free of undershooting due to the resulting smooth transition near the metal edge. Finally, the filled area is fused with the filtered raw projection data to recover the metal. Since the metal areas are recognized during the back-projection step, anatomy and metal can be distinguished, reducing ripple artifacts. Phantom and clinical experiments were designed to quantitatively and qualitatively evaluate the algorithm. Based on phantom images with and without metal implants, the Artifact Spread Function (ASF) was used to quantify image quality in the ripple artifact area. The tail of the ASF with MAR decreases from in-plane to out-of-plane, implying good artifact reduction, while the ASF without MAR remains high over a wider range. An intensity plot was utilized to analyze the edges of undershooting areas. The results illustrate that MAR reduces undershooting while preserving the edge and size of the metal. Clinical images evaluated by physicists and technologists agree with these quantitative results, further demonstrating the algorithm's effectiveness.
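A sketch of the identify-and-fill step on one raw projection, with a transmission threshold, mask dilation, and biharmonic inpainting standing in for the paper's neighborhood fill; the threshold value and dilation amount are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.restoration import inpaint_biharmonic

def fill_metal(projection, metal_threshold=0.2):
    """Detect strongly absorbing (metal) pixels in a normalized projection,
    grow the mask to cover the edge halo, and fill the area smoothly so the
    filtering step sees no sharp transition at the metal boundary."""
    metal = projection < metal_threshold
    metal = binary_dilation(metal, iterations=2)
    filled = inpaint_biharmonic(projection, metal)
    return filled, metal

# After filtering `filled`, the original metal pixels are fused back in,
# and the mask is reused during backprojection to keep anatomy and metal
# distinguished (the step that suppresses ripple artifacts).
```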
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm was developed to improve the parallel performance of BPF through selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating reconstruction, and much work has been done to optimize the cone-beam back-projection. As the cone-beam back-projection becomes faster, data transportation takes a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among hard disk, CPU, and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU; (2) an innovative strategy is applied to obtain the DBP image while hiding the transport time effectively; (3) two streams, for data transportation and calculation, are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is a smart implementation of the S-BPF algorithm with continuous GPU calculation and no data transportation time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlapping behavior.
NASA Astrophysics Data System (ADS)
Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.
2018-02-01
We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.
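The angle-energy selection underlying the model is the standard Compton relation, which an energy-resolving detector can invert to fix the scattering angle; this is textbook physics rather than anything specific to the paper:

```latex
E' \;=\; \frac{E}{1 + \dfrac{E}{m_e c^2}\,\bigl(1 - \cos\theta\bigr)},
\qquad m_e c^2 \approx 511\ \mathrm{keV},
```

so a measured single-scatter energy E' determines θ, and the scatter points consistent with a given detector pair and angle form the surface of the football-shaped rotation body described above.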
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baba, Justin S; Koju, Vijay; John, Dwayne O
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Comparison of analytic and iterative digital tomosynthesis reconstructions for thin slab objects
NASA Astrophysics Data System (ADS)
Yun, J.; Kim, D. W.; Ha, S.; Kim, H. K.
2017-11-01
For digital x-ray tomosynthesis of thin slab objects, we compare the tomographic imaging performance obtained from the filtered backprojection (FBP) and simultaneous algebraic reconstruction technique (SART) algorithms. The imaging performance includes the in-plane modulation transfer function (MTF), the signal difference-to-noise ratio (SDNR), and the out-of-plane blur artifact, or artifact spread function (ASF). The MTF is measured using a thin tungsten-wire phantom, and the SDNR and ASF are measured using a thin aluminum-disc phantom embedded in a plastic cylinder. The FBP shows better MTF performance than the SART. On the contrary, the SART outperforms the FBP with regard to SDNR and ASF performance. Detailed experimental results and their analysis are described in this paper. For more proper use of the digital tomosynthesis technique, this study suggests using a reconstruction algorithm suited to application-specific purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langet, Hélène; Laboratoire des Signaux et Systèmes, CentraleSupélec, Gif-sur-Yvette F-91192; Center for Visual Computing, CentraleSupélec, Châtenay-Malabry F-92295
2015-09-15
Purpose: This paper addresses the reconstruction of x-ray cone-beam computed tomography (CBCT) for interventional C-arm systems. Subsampling of CBCT is a significant issue with C-arms due to their slow rotation and the low frame rate of their flat-panel x-ray detectors. The aim of this work is to propose a novel method able to handle the subsampling artifacts generally observed with analytical reconstruction, through a content-driven hierarchical reconstruction based on compressed sensing. Methods: The central idea is to proceed with a hierarchical method in which the most salient features (high intensities or gradients) are reconstructed first, to reduce the artifacts these features induce. These artifacts are addressed first because their presence contaminates less salient features. Several hierarchical schemes aiming at streak artifact reduction are introduced for C-arm CBCT: the empirical orthogonal matching pursuit approach with the ℓ0 pseudonorm, for reconstructing sparse vessels; a convex variant using homotopy with the ℓ1-norm constraint of compressed sensing, for reconstructing sparse vessels over a nonsparse background; homotopy with total variation (TV); and a novel empirical extension to nonlinear diffusion (NLD). These principles are implemented with penalized iterative filtered backprojection algorithms. For soft-tissue imaging, the authors compare the use of TV and NLD filters as sparsity constraints, both optimized with the alternating direction method of multipliers, using a threshold for TV and a nonlinear weighting for NLD. Results: The authors show on simulated data that their approach provides fast convergence to good approximations of the solution of the TV-constrained minimization problem introduced by compressed sensing theory. Using C-arm CBCT clinical data, the authors show that both TV and NLD can deliver improved image quality by reducing streaks. Conclusions: A flexible compressed-sensing-based algorithmic approach is proposed that is able to accommodate a wide range of constraints. It is successfully applied to C-arm CBCT images that may not be well approximated by piecewise constant functions.
NASA Astrophysics Data System (ADS)
Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping
2016-12-01
Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as reference. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART + OS + TV finally converge to those of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in SART + OS + TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on the assessment, we can conclude that this hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and its effectiveness and efficiency are platform independent.
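A minimal sketch of the SART + OS + TV combination, assuming scikit-image's SART as the update engine; the subset count, relaxation factor, and TV weight below are illustrative, not the paper's settings.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import radon, iradon_sart, rescale

truth = rescale(shepp_logan_phantom(), 0.25)
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(truth, theta=angles)

n_subsets = 10                                         # assumed subset count
subsets = [np.arange(s, len(angles), n_subsets) for s in range(n_subsets)]

x = np.zeros_like(truth)
for outer in range(3):                                 # outer iterations
    for idx in subsets:                                # one SART pass per ordered subset
        x = iradon_sart(sino[:, idx], theta=angles[idx], image=x, relaxation=0.15)
    x = denoise_tv_chambolle(x, weight=0.01)           # TV-minimization regularization step
```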
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Satake, K.; Sheehan, A. F.; Mulia, I. E.; Heidarzadeh, M.; Maeda, T.
2015-12-01
Adaptation of absolute or differential pressure gauges (APG or DPG) to ocean-bottom seismometers has provided the opportunity to study tsunamis. Recently we extracted tsunami waveforms of the 28 October 2012 Haida Gwaii earthquake recorded by the APGs and DPGs of the Cascadia Initiative program (Sheehan et al., 2015, SRL). We applied these dense tsunami observations (48 stations), together with other records from DARTs (9 stations), to characterize the tsunami source. This is the first study to use such a large number of offshore tsunami records for earthquake source study. Conventionally, curves of tsunami travel times are drawn backward from station locations to estimate the tsunami source region. Here we propose a more advanced technique, called tsunami back-projection, to estimate the source region. The image produced by tsunami back-projection has its largest value, or tsunami centroid, very close to the epicenter and above the Queen Charlotte transform fault (QCF), whereas the negative values are mostly located east of Haida Gwaii in the Hecate Strait. By using tsunami back-projection we avoid picking the initial tsunami phase, a necessary but rather subjective step in the conventional method. The slip distribution of the 2012 Haida Gwaii earthquake estimated by tsunami waveform inversion shows large slip near the trench (4-5 m) and also on a plate interface southeast of the epicenter (3-4 m) below the QCF. From the slip distribution, the calculated seismic moment is 5.4 × 10²⁰ N m (Mw 7.8). The steep bathymetry offshore Haida Gwaii and the horizontal movement caused by the earthquake possibly affect the sea surface deformation. The potential tsunami energy calculated from the sea-surface deformation of pure faulting is 2.20 × 10¹³ J, while that from the bathymetry effect is 0.12 × 10¹³ J, or about 5% of the total potential energy. The significant deformation above the steep slope is confirmed by another tsunami inversion that disregards fault parameters.
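The back-projection idea translates into a small numeric toy: shift each station's waveform by the candidate travel time to a grid node and stack. The geometry, constant long-wave speed, and waveform below are all synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.2                                    # assumed constant wave speed, deg/min
stations = rng.uniform(0, 10, size=(48, 2))
src = np.array([5.0, 5.0])                 # true source (unknown to the method)

t = np.arange(0, 240.0, 0.25)              # minutes
def waveform(t0):                          # simple tsunami-like pulse arriving at t0
    return np.exp(-((t - t0) / 8.0) ** 2) * np.sin(2 * np.pi * (t - t0) / 30.0)

traces = np.array([waveform(np.hypot(*(s - src)) / c) for s in stations])

# Back-projection: for each candidate node, undo the travel time and stack.
gx = gy = np.linspace(0, 10, 81)
image = np.zeros((gy.size, gx.size))
for iy, y in enumerate(gy):
    for ix, x in enumerate(gx):
        tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / c
        shifts = np.rint(tt / 0.25).astype(int)
        stack = sum(np.roll(tr, -s)[: t.size // 2] for tr, s in zip(traces, shifts))
        image[iy, ix] = stack.max()        # coherent stacks peak near the true source

peak = np.unravel_index(image.argmax(), image.shape)
print("BP centroid near:", gx[peak[1]], gy[peak[0]], "true:", src)
```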
Novel application of windowed beamforming function imaging for FLGPR
NASA Astrophysics Data System (ADS)
Xique, Ismael J.; Burns, Joseph W.; Thelen, Brian J.; LaRose, Ryan M.
2018-04-01
Backprojection of cross-correlated array data, using algorithms such as coherent interferometric imaging (Borcea et al., 2006), has been advanced as a method to improve the statistical stability of images of targets in an inhomogeneous medium. Recently, the Windowed Beamforming Energy (WBE) function algorithm has been introduced as a functionally equivalent approach that is significantly less computationally burdensome (Borcea et al., 2011). WBE produces similar results through the use of a quadratic function summing signals after beamforming in transmission and reception, and windowing in the time domain. We investigate the application of WBE to improve the detection of buried targets with forward-looking ground-penetrating MIMO radar (FLGPR) data. The formulation of WBE, as well as its software implementation for the FLGPR data collection, will be discussed. WBE imaging results are compared to standard backprojection and Coherence Factor imaging. Additionally, the effectiveness of WBE on field-collected data is demonstrated qualitatively through images and quantitatively through the use of a CFAR statistic on buried targets of a variety of contrast levels.
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
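Richardson-Lucy iteration is the MLEM estimator for exactly this kind of Poisson deblurring problem, so the approach can be sketched with standard tools; the motion trajectory and PSF construction here are toy assumptions, not the paper's tracked motion.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import richardson_lucy
from scipy.ndimage import shift as nd_shift

truth = shepp_logan_phantom()
motions = [(0, 0), (2, 1), (4, -1), (1, 3)]           # tracked frame displacements (pixels)

# Averaging per-frame shifted images equals convolution with a PSF made of deltas.
blurred = np.mean([nd_shift(truth, m) for m in motions], axis=0)
psf = np.zeros((9, 9))
for dy, dx in motions:
    psf[4 + dy, 4 + dx] += 1.0 / len(motions)

restored = richardson_lucy(blurred, psf, num_iter=30)  # MLEM deconvolution
print("RMSE blurred :", np.sqrt(np.mean((blurred - truth) ** 2)))
print("RMSE restored:", np.sqrt(np.mean((restored - truth) ** 2)))
```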
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images with one reconstruction algorithm are also valid for images from the other reconstruction algorithms.
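One of the listed indices, the global inhomogeneity (GI) index, is simple enough to sketch directly; the tidal image and lung mask below are synthetic stand-ins for real EIT data.

```python
import numpy as np

def global_inhomogeneity(tidal: np.ndarray, lung_mask: np.ndarray) -> float:
    """GI = sum(|DI - median(DI_lung)|) / sum(DI) over lung pixels; lower is more homogeneous."""
    di = tidal[lung_mask]                      # tidal impedance variation per lung pixel
    return float(np.abs(di - np.median(di)).sum() / di.sum())

# toy usage: a homogeneous lung scores lower than a patchy one
rng = np.random.default_rng(1)
mask = np.zeros((32, 32), bool)
mask[8:24, 4:28] = True
uniform = np.where(mask, 1.0, 0.0)
patchy = np.where(mask, rng.uniform(0.2, 1.8, (32, 32)), 0.0)
print(global_inhomogeneity(uniform, mask), global_inhomogeneity(patchy, mask))
```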
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multidetector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
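As a hedged illustration of the kind of parameter involved, the sketch below computes a basic bone volume fraction (BV/TV) from a thresholded volume; the fixed HU threshold and the synthetic volume of interest are assumptions, and the paper's parameter set is richer than this.

```python
import numpy as np

def bone_volume_fraction(volume_hu: np.ndarray, threshold_hu: float = 200.0) -> float:
    """BV/TV = voxels classified as bone / all voxels in the VOI (threshold is assumed)."""
    bone = volume_hu >= threshold_hu
    return float(bone.mean())

rng = np.random.default_rng(2)
voi = rng.normal(100.0, 120.0, size=(64, 64, 64))   # stand-in vertebral VOI in HU
print(f"BV/TV = {bone_volume_fraction(voi):.3f}")
```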
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo; Li, Ke; Budde, Adam; Hsieh, Jiang; Chen, Guang-Hong
2016-01-01
Purpose: Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. Methods: A generalized NPS model was developed to account for the impact of the bowtie filter and image object location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and its location in the SFOV, the shape and rotational symmetries of the 2D local NPS were directly computed from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with the measured NPSs from the reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. Results: (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, no matter whether the bowtie filter was present or not. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave symmetry and d-wave symmetry were observed in the NPS. (2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of its NPS was found to be different from that of a peripheral ROI in the centered object, even when the physical positions of the two ROIs relative to the isocenter were the same. (3) The potential clinical impact of the highly anisotropic NPS, caused by the interplay of the bowtie filter and position of the image object, was highlighted in images of specific bar patterns oriented at different angles. The visual perception of the bar patterns was found to be strongly dependent on their orientation. Conclusions: The NPS of CT depends strongly on the bowtie filter and object position. Even if the location of the ROI with respect to the isocenter is fixed, there can be different symmetries in the NPS, which depend on the object position and the size of the bowtie filter. For an isolated off-centered object, the NPS of its CT images cannot be represented by the NPS measured from a centered object. PMID:27487866
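The ensemble NPS estimate underlying measurements like these is compact enough to sketch: average the squared DFT of mean-subtracted noise-only ROIs and scale by pixel area. ROI size, pixel pitch, and the white-noise input are assumptions.

```python
import numpy as np

def nps_2d(rois: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """rois: (N, ny, nx) noise-only ROIs. Returns the 2D NPS in HU^2 * mm^2."""
    n, ny, nx = rois.shape
    zero_mean = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove each ROI's mean
    dft2 = np.fft.fftshift(np.fft.fft2(zero_mean), axes=(1, 2))
    return (dx * dy) / (nx * ny) * np.mean(np.abs(dft2) ** 2, axis=0)

rng = np.random.default_rng(3)
rois = rng.normal(0.0, 10.0, size=(200, 64, 64))    # white noise -> flat NPS
nps = nps_2d(rois, dx=0.5, dy=0.5)

# sanity check: integrating the NPS over frequency recovers the pixel variance (~100 HU^2)
var_back = nps.sum() / (64 * 0.5 * 64 * 0.5)
print(f"recovered variance = {var_back:.1f} (true 100.0)")
```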
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were at least two projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
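The MLEM core that OSEM accelerates with subsets can be sketched with a generic projector pair; here scikit-image's radon and unfiltered iradon stand in for (approximately matched) forward and backprojectors, and the study's attenuation and scatter corrections are omitted.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

truth = rescale(shepp_logan_phantom(), 0.25)
angles = np.linspace(0.0, 180.0, 120, endpoint=False)
y = radon(truth, theta=angles)                           # noiseless "emission" data

def backproject(s):
    # unfiltered backprojection stands in for the adjoint operator A^T
    return iradon(s, theta=angles, filter_name=None, output_size=truth.shape[0])

sens = backproject(np.ones_like(y))                      # sensitivity image A^T 1

x = np.ones_like(truth)                                  # MLEM needs a positive start
for it in range(30):
    ratio = y / np.maximum(radon(x, theta=angles), 1e-8) # measured / modeled projections
    x *= backproject(ratio) / np.maximum(sens, 1e-8)     # multiplicative MLEM update
```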
Komarov, Denis A; Hirata, Hiroshi
2017-08-01
In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is a new imaging reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the image quality benefits and dose reduction provided by the ASIR method with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom when varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated for ASIR reconstruction. Three radiologists also assessed a cardiac pediatric exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a higher benefit for low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit of the ASIR benefit. ASIR at 40% was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
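A common simplified description of percentage-ASIR images (an assumption here, not GE's published algorithm) is a weighted blend of the FBP image with a fully iterative image, which already reproduces the noise-versus-percentage trend measured above.

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic stand-ins: same anatomy, different noise levels (assumed, not scanner data)
fbp_img = 100.0 + rng.normal(0.0, 12.0, size=(256, 256))   # noisier FBP image
ir_img = 100.0 + rng.normal(0.0, 4.0, size=(256, 256))     # smoother iterative image

for pct in (0, 20, 40, 60, 80, 100):
    blend = (1 - pct / 100.0) * fbp_img + (pct / 100.0) * ir_img   # "pct% ASIR" blend model
    print(f"ASIR {pct:3d}%: SD = {blend.std():.1f} HU")
```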
System matrix computation vs storage on GPU: A comparative study in cone beam CT.
Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2018-02-01
Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphics processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed performance similar to the on-the-fly approach while still relying on symmetries, but yielded the lowest relative performance overall. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
Pang, Wai-Man; Qin, Jing; Lu, Yuqiang; Xie, Yongming; Chui, Chee-Kong; Heng, Pheng-Ann
2011-03-01
To accelerate the simultaneous algebraic reconstruction technique (SART) with motion compensation for fast, high-quality computed tomography reconstruction by exploiting a CUDA-enabled GPU. Two core techniques are proposed to fit SART into the CUDA architecture: (1) a ray-driven projection along with hardware trilinear interpolation, and (2) a voxel-driven back-projection that can avoid redundant computation by combining CUDA shared memory. We utilize the independence of each ray and each voxel in the two techniques to design CUDA kernels that represent a ray in the projection and a voxel in the back-projection, respectively. Thus, significant parallelization and performance boost can be achieved. For motion compensation, we rectify each ray's direction during the projection and back-projection stages based on a known motion vector field. Extensive experiments demonstrate the proposed techniques can provide faster reconstruction without compromising image quality. The processing rate is nearly 100 projections s⁻¹, about 150 times faster than a CPU-based SART. The reconstructed image is compared against ground truth visually and quantitatively by peak signal-to-noise ratio (PSNR) and line profiles. We further evaluate the reconstruction quality using quantitative metrics such as signal-to-noise ratio (SNR) and mean-square-error (MSE). All these reveal that satisfactory results are achieved. The effects of major parameters such as ray sampling interval and relaxation parameter are also investigated by a series of experiments. A simulated dataset is used for testing the effectiveness of our motion compensation technique. The results demonstrate our reconstructed volume can eliminate undesirable artifacts like blurring. Our proposed method has potential to realize instantaneous presentation of 3D CT volume to physicians once the projection data are acquired.
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT∕CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
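The Map/Reduce split can be mimicked on a single machine with a process pool: each map task filters and backprojects one angular chunk, and the reduce step sums the weighted partial images. This is a toy analogue of the Hadoop pipeline described above, with the chunk count and pool size as assumptions.

```python
import numpy as np
from multiprocessing import Pool
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

truth = rescale(shepp_logan_phantom(), 0.25)
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(truth, theta=angles)
chunks = np.array_split(np.arange(len(angles)), 8)       # 8 pretend "nodes"

def map_task(idx):
    # Map: FBP of one angular chunk; linearity of FBP makes weighted partial sums valid.
    part = iradon(sino[:, idx], theta=angles[idx], filter_name="ramp",
                  output_size=truth.shape[0])
    return (len(idx) / len(angles)) * part

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(map_task, chunks)            # Map stage
    volume = np.sum(partials, axis=0)                    # Reduce stage: sum partial images
    ref = iradon(sino, theta=angles, filter_name="ramp")
    print("RMSE vs. single-pass FBP:", np.sqrt(np.mean((volume - ref) ** 2)))
```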
Oblique reconstructions in tomosynthesis. II. Super-resolution
Acciavatti, Raymond J.; Maidment, Andrew D. A.
2013-01-01
Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest detectable frequency on pitch followed the same trend as the analytical model. It was demonstrated that super-resolution is not achievable if the pitch of the object approaches 90°, corresponding to the case in which the test frequency is perpendicular to the breast support. Only low frequency objects are detectable at pitches close to 90°. Conclusions: This work provides a platform for investigating super-resolution in oblique reconstructions for tomosynthesis. In breast imaging, this study should have applications in visualizing microcalcifications and other subtle signs of cancer. PMID:24320445
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
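For background, the linear two-material decomposition that the authors generalize can be sketched as a per-ray 2x2 solve turning low/high-kVp line integrals into basis-material thicknesses; the basis attenuation coefficients below are made-up numbers.

```python
import numpy as np

# columns: [water, bone] effective attenuation at the low and high kVp (1/cm), assumed values
A = np.array([[0.28, 0.60],
              [0.20, 0.35]])

def decompose(p_low: np.ndarray, p_high: np.ndarray):
    """Return per-ray (water_cm, bone_cm) from dual-energy line integrals."""
    rhs = np.stack([p_low.ravel(), p_high.ravel()])      # (2, n_rays)
    thick = np.linalg.solve(A, rhs)                      # invert the 2x2 linear model
    return thick[0].reshape(p_low.shape), thick[1].reshape(p_low.shape)

# round-trip check on synthetic thicknesses
water, bone = 10.0, 1.5                                  # cm
p_lo = A[0, 0] * water + A[0, 1] * bone
p_hi = A[1, 0] * water + A[1, 1] * bone
print(decompose(np.array([p_lo]), np.array([p_hi])))     # ~ (10.0, 1.5)
```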
Nguyen, Van-Giang; Lee, Soo-Jin
2016-07-01
Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximated methods that use an unmatched pair of ray-tracing forward projector and voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate by using an exactly matched pair of projector and backprojector. To calculate conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection, while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximated method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
X-ray computed tomography of wood-adhesive bondlines: Attenuation and phase-contrast effects
Paris, Jesse L.; Kamke, Frederick A.; Xiao, Xianghui
2015-07-29
Microscale X-ray computed tomography (XCT) is discussed as a technique for identifying 3D adhesive distribution in wood-adhesive bondlines. Visualization and material segmentation of the adhesives from the surrounding cellular structures require sufficient gray-scale contrast in the reconstructed XCT data. Commercial wood-adhesive polymers have similar chemical characteristics and density to wood cell wall polymers and therefore do not provide good XCT attenuation contrast in their native form. Here, three different adhesive types, namely phenol formaldehyde, polymeric diphenylmethane diisocyanate, and a hybrid polyvinyl acetate, are tagged with iodine such that they yield sufficient X-ray attenuation contrast. However, phase-contrast effects at material edges complicate image quality and segmentation in XCT data reconstructed with conventional filtered backprojection absorption contrast algorithms. A quantitative phase retrieval algorithm, which isolates and removes the phase-contrast effect, was demonstrated. The paper discusses and illustrates the balance between material X-ray attenuation and phase-contrast effects in all quantitative XCT analyses of wood-adhesive bondlines.
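A quantitative phase retrieval step of the single-material (TIE-Hom/Paganin-type) kind can be sketched as a frequency-domain low-pass filter applied to each flat-corrected projection; the constants and unit conventions below are assumptions and would need checking against the actual experimental setup.

```python
import numpy as np

def paganin_thickness(flat_corrected: np.ndarray, pixel: float,
                      dist: float, delta: float, mu: float) -> np.ndarray:
    """Projected thickness from one flat-corrected projection I/I0 (TIE-Hom form)."""
    ny, nx = flat_corrected.shape
    u = np.fft.fftfreq(nx, d=pixel)                   # spatial frequency, cycles/length
    v = np.fft.fftfreq(ny, d=pixel)
    k2 = 4.0 * np.pi ** 2 * (u[None, :] ** 2 + v[:, None] ** 2)
    filt = 1.0 / (1.0 + dist * (delta / mu) * k2)     # low-pass set by delta/mu and distance
    smoothed = np.fft.ifft2(np.fft.fft2(flat_corrected) * filt).real
    return -np.log(np.clip(smoothed, 1e-6, None)) / mu

# toy usage with synthetic flat-corrected data; all physical values are assumed
proj = np.clip(np.random.default_rng(5).normal(0.8, 0.02, (256, 256)), 0.01, 1.0)
t = paganin_thickness(proj, pixel=1e-4, dist=10.0, delta=1e-7, mu=5.0)
```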
Xu, C H; Wang, L; Shi, X T; You, F S; Fu, F; Liu, R G; Dai, M; Zhao, Z W; Gao, G D; Dong, X Z
2010-01-01
The aim of this study was to use electrical impedance tomography (EIT) to detect and image acute intracranial haemorrhage (ICH) in an animal model. Blood was infused into the frontal lobe of the brains of anaesthetized piglets and impedance was measured using 16 electrodes placed in a circle on the scalp. The EIT images were constructed using a filtered back-projection algorithm. The mean of all the pixel intensities within a region of interest, the mean resistivity value (MRV), was used to evaluate the relative impedance changes in the target region. A symmetrical index (SI), reflecting the relative impedance on both sides of the brain, was also calculated. Changes in MRV and SI were associated with the injection of blood, demonstrating that EIT can successfully detect ICH in this animal model. The unique features of EIT may be beneficial for diagnosing ICH early in patients after cranial surgery, thereby reducing the risk of complications and mortality.
NOTE: A BPF-type algorithm for CT with a curved PI detector
NASA Astrophysics Data System (ADS)
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-01
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas
2015-06-01
We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
Eigenvector decomposition of full-spectrum x-ray computed tomography.
Gonzales, Brian J; Lalush, David S
2012-03-07
Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral response of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projecting the FBP reconstructions onto the basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade-off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
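The eigenvector-basis idea can be sketched directly: eigendecompose a matrix built from material spectral responses, keep the dominant eigenvectors, and project the energy-binned images onto them. The spectra and noise levels below are synthetic assumptions.

```python
import numpy as np

n_e = 32                                              # energy bins
e = np.linspace(0, 1, n_e)
materials = np.stack([np.exp(-3 * e),                 # soft-tissue-like response (assumed)
                      np.exp(-6 * e) + 0.2,           # bone-like response (assumed)
                      1.0 / (0.1 + e)])               # contrast-agent-like response (assumed)
spectral = materials.T @ materials                    # (n_e, n_e) spectral response matrix
w, vecs = np.linalg.eigh(spectral)                    # eigenvalue decomposition
basis = vecs[:, -3:]                                  # keep the 3 dominant eigenvectors

rng = np.random.default_rng(6)
clean = materials[1]                                  # noise-free spectral signal per pixel
fbp_stack = clean[None, None, :] + rng.normal(0, 0.3, (64, 64, n_e))
denoised = fbp_stack @ basis @ basis.T                # project onto basis, re-expand

print(f"noise SD along energy axis: {(fbp_stack - clean).std():.3f} "
      f"-> {(denoised - clean).std():.3f}")
```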
Optical transillumination tomography with tolerance against refraction mismatch.
Haidekker, Mark A
2005-12-01
Optical transillumination tomography (OT) is a laser-based imaging modality where ballistic photons are used for projection generation. Image reconstruction is therefore similar to X-ray computed tomography. This modality promises fast image acquisition, good resolution and contrast, and inexpensive instrumentation for imaging of weakly scattering objects, such as for example tissue-engineered constructs. In spite of its advantages, OT is not widely used. One reason is its sensitivity towards changes in material refractive index along the light path. Beam refraction artefacts cause areas of overestimated tissue density and blur geometric details. A spatial filter, introduced into the beam path to eliminate scattered photons, will also remove refracted photons from the projections. In the projections, zones affected by refraction can be detected by thresholding. By using algebraic reconstruction techniques (ART) in conjunction with suitable interpolation algorithms, reconstruction artefacts can be partly avoided. Reconstructions from a test image were performed. Standard filtered backprojection (FBP) showed a root mean square (RMS) deviation from the original image of 9.9. RMS deviation with refraction-tolerant ART reconstruction was 0.33 and 0.24, depending on the algorithm, compared to 0.57 (FBP) and 0.06 (ART) in a non-refracting case. In addition, modified ART reconstruction allowed detection of small geometric details that were invisible in standard reconstructions. Refraction-tolerant ART may be the key to eliminating one of the major challenges of OT.
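The refraction-tolerant reconstruction can be approximated by an ART/Kaczmarz sweep that simply skips rays flagged as refraction-corrupted (the paper additionally interpolates across the flagged zones); the dense random system below stands in for a real tomographic system matrix.

```python
import numpy as np

rng = np.random.default_rng(7)
n_rays, n_pix = 300, 100
A = rng.random((n_rays, n_pix))                    # toy projection matrix
x_true = rng.random(n_pix)
y = A @ x_true
bad = rng.random(n_rays) < 0.15                    # rays detected (by thresholding) as refracted
y[bad] = 0.0                                       # refracted photons were removed by the filter

x = np.zeros(n_pix)
for sweep in range(50):
    for i in np.flatnonzero(~bad):                 # ART update; corrupted rays are skipped
        a = A[i]
        x += 0.5 * (y[i] - a @ x) / (a @ a) * a    # relaxation 0.5 (assumed)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```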
Fast optical transillumination tomography with large-size projection acquisition.
Huang, Hsuan-Ming; Xia, Jinjun; Haidekker, Mark A
2008-10-01
Techniques such as optical coherence tomography and diffuse optical tomography have been shown to effectively image highly scattering samples such as tissue. An additional modality has received much less attention: optical transillumination (OT) tomography, a modality that promises very high acquisition speed for volumetric scans. With the motivation to image tissue-engineered blood vessels for possible biomechanical testing, we have developed a fast OT device using a collimated, noncoherent beam with a large diameter together with a large-size CMOS camera that has the ability to acquire 3D projections in a single revolution of the sample. In addition, we used accelerated iterative reconstruction techniques to improve image reconstruction speed, while at the same time obtaining better image quality than through filtered backprojection. The device was tested using ink-filled polytetrafluoroethylene tubes to determine geometric reconstruction accuracy and recovery of absorbance. Even in the presence of minor refractive index mismatch, the weighted error of the measured radius was <5% in all cases, and a high linear correlation (R² = 0.99) with ink absorbance determined using a spectrophotometer was found, although the OT device systematically underestimated absorbance. Reconstruction time was improved from several hours (standard algebraic reconstruction) to 90 s per slice with our optimized algorithm. Composed of only a light source, two spatial filters, a sample bath, and a CMOS camera, this device was extremely simple and cost-efficient to build.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Cammin, J; Solberg, T
Purpose: Proton radiography and proton computed tomography (PCT) can be used to measure proton stopping power directly. However, practical and cost-effective proton imaging detectors are not widely available. In this study, the authors investigated the feasibility of proton imaging using a silicon diode array. Methods: A one-dimensional silicon-diode detector array (1DSDA) was aligned with the central axis (CAX) of the proton beam. Polymethyl methacrylate (PMMA) slabs were used to find the correspondence between the water equivalent thickness (WET) and 1DSDA channel number. 2D proton radiographs (PR) were obtained by translation and rotation of a phantom relative to CAX while the proton nozzle and 1DSDA were kept stationary. A PCT image of one slice of the phantom was reconstructed using filtered backprojection. Results: PR and PCT images of the PMMA cube were successfully acquired using the 1DSDA. The WET of the phantom was measured using PR data with an accuracy of 4.2% or better. Structures down to 1 mm in size could be resolved. Reconstruction of a PCT image showed very good agreement with simulation. Limitations in spatial resolution are attributed to limited spatial sampling, beam collimation, and proton scatter. Conclusion: The results demonstrate the feasibility of using silicon diode arrays for proton imaging. Such a device can potentially offer fast image acquisition, high spatial and energy resolution for PR and PCT.
Numerical observer for atherosclerotic plaque classification in spectral computed tomography
Lorsakul, Auranuch; Fakhri, Georges El; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; Laine, Andrew F.; Li, Quanzheng
2016-01-01
Spectral computed tomography (SCT) generates better image quality than conventional computed tomography (CT). It has overcome several limitations for imaging atherosclerotic plaque. However, the literature evaluating the performance of SCT based on objective image assessment is very limited for the task of discriminating plaques. We developed a numerical-observer method and used it to assess performance on discrimination vulnerable-plaque features and compared the performance among multienergy CT (MECT), dual-energy CT (DECT), and conventional CT methods. Our numerical observer was designed to incorporate all spectral information and comprised two-processing stages. First, each energy-window domain was preprocessed by a set of localized channelized Hotelling observers (CHO). In this step, the spectral image in each energy bin was decorrelated using localized prewhitening and matched filtering with a set of Laguerre–Gaussian channel functions. Second, the series of the intermediate scores computed from all the CHOs were integrated by a Hotelling observer with an additional prewhitening and matched filter. The overall signal-to-noise ratio (SNR) and the area under the receiver operating characteristic curve (AUC) were obtained, yielding an overall discrimination performance metric. The performance of our new observer was evaluated for the particular binary classification task of differentiating between alternative plaque characterizations in carotid arteries. A clinically realistic model of signal variability was also included in our simulation of the discrimination tasks. The inclusion of signal variation is a key to applying the proposed observer method to spectral CT data. Hence, the task-based approaches based on the signal-known-exactly/background-known-exactly (SKE/BKE) framework and the clinical-relevant signal-known-statistically/background-known-exactly (SKS/BKE) framework were applied for analytical computation of figures of merit (FOM). Simulated data of a carotid-atherosclerosis patient were used to validate our methods. We used an extended cardiac-torso anthropomorphic digital phantom and three simulated plaque types (i.e., calcified plaque, fatty-mixed plaque, and iodine-mixed blood). The images were reconstructed using a standard filtered backprojection (FBP) algorithm for all the acquisition methods and were applied to perform two different discrimination tasks of: (1) calcified plaque versus fatty-mixed plaque and (2) calcified plaque versus iodine-mixed blood. MECT outperformed DECT and conventional CT systems for all cases of the SKE/BKE and SKS/BKE tasks (all p<0.01). On average of signal variability, MECT yielded the SNR improvements over other acquisition methods in the range of 46.8% to 65.3% (all p<0.01) for FBP-Ramp images and 53.2% to 67.7% (all p<0.01) for FBP-Hanning images for both identification tasks. This proposed numerical observer combined with our signal variability framework is promising for assessing material characterization obtained through the additional energy-dependent attenuation information of SCT. These methods can be further extended to other clinical tasks such as kidney or urinary stone identification applications. PMID:27429999
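The core of such an observer, a channelized Hotelling observer with Laguerre-Gauss channels for a single energy bin, can be sketched as follows; the channel width, channel count, and Gaussian signal are assumptions, and the paper's two-stage multi-bin integration is omitted.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size: int, n_channels: int, a: float) -> np.ndarray:
    """Rotationally symmetric Laguerre-Gauss channels, flattened to (n_channels, size*size)."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = xx ** 2 + yy ** 2
    chans = [np.exp(-np.pi * r2 / a ** 2) * eval_laguerre(n, 2 * np.pi * r2 / a ** 2)
             for n in range(n_channels)]
    return np.stack([c.ravel() / np.linalg.norm(c) for c in chans])

rng = np.random.default_rng(8)
size, n_img = 32, 500
signal = np.exp(-((np.mgrid[:size, :size] - 15.5) ** 2).sum(0) / 8.0).ravel()
absent = np.stack([rng.normal(0, 1.0, size * size) for _ in range(n_img)])
present = absent + 0.8 * signal                      # SKE task: known signal, paired noise

U = lg_channels(size, n_channels=6, a=10.0)
va, vp = absent @ U.T, present @ U.T                 # channel outputs, shape (n_img, 6)
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))              # pooled channel covariance
dv = vp.mean(0) - va.mean(0)
snr = float(np.sqrt(dv @ np.linalg.solve(S, dv)))    # Hotelling observer SNR
print(f"CHO SNR = {snr:.2f}")
```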
Evaluation of the effect of filter apodization for volume PET imaging using the 3-D RP algorithm
NASA Astrophysics Data System (ADS)
Baghaei, H.; Wong, Wai-Hoi; Li, Hongdi; Uribe, J.; Wang, Yu; Aykac, M.; Liu, Yaqiang; Xing, Tao
2003-02-01
We investigated the influence of filter apodization and cutoff frequency on the image quality of volume positron emission tomography (PET) imaging using the three-dimensional reprojection (3-D RP) algorithm. An important parameter in 3-D RP and other filtered backprojection algorithms is the choice of the filter window function. In this study, the Hann, Hamming, and Butterworth low-pass window functions were investigated. For each window, a range of cutoff frequencies was considered. Projection data were acquired by scanning a uniform cylindrical phantom, a cylindrical phantom containing four small lesion phantoms having diameters of 3, 4, 5, and 6 mm and the 3-D Hoffman brain phantom. All measurements were performed using the high-resolution PET camera developed at the M.D. Anderson Cancer Center (MDAPET), University of Texas, Houston, TX. This prototype camera, which is a multiring scanner with no septa, has an intrinsic transaxial resolution of 2.8 mm. The evaluation was performed by computing the noise level in the reconstructed images of the uniform phantom and the contrast recovery of the 6-mm hot lesion in a warm background and also by visually inspecting images, especially those of the Hoffman brain phantom. For this work, we mainly studied the central slices which are less affected by the incompleteness of the 3-D data. Overall, the Butterworth window offered a better contrast-noise performance over the Hann and Hamming windows. For our high statistics data, for the Hann and Hamming apodization functions a cutoff frequency of 0.6-0.8 of the Nyquist frequency resulted in a reasonable compromise between the contrast recovery and noise level and for the Butterworth window a cutoff frequency of 0.4-0.6 of the Nyquist frequency was a reasonable choice. For the low statistics data, use of lower cutoff frequencies was more appropriate.
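The windows compared above can be written down explicitly as apodized ramp filters; conventions for the Butterworth form vary, so the expression below is one common choice, not necessarily the authors'.

```python
import numpy as np

def apodized_ramp(n: int, window: str, cutoff: float, order: int = 6) -> np.ndarray:
    """Frequency response of a windowed ramp filter; cutoff in fractions of Nyquist."""
    f = np.abs(np.fft.fftfreq(n))            # 0 .. 0.5 (Nyquist), cycles/sample
    fc = 0.5 * cutoff
    ramp = 2.0 * f
    if window == "hann":
        w = np.where(f <= fc, 0.5 * (1 + np.cos(np.pi * f / fc)), 0.0)
    elif window == "hamming":
        w = np.where(f <= fc, 0.54 + 0.46 * np.cos(np.pi * f / fc), 0.0)
    elif window == "butterworth":
        w = 1.0 / (1.0 + (f / fc) ** (2 * order))   # "power 6" low-pass; convention varies
    else:
        raise ValueError(window)
    return ramp * w

for name, cut in [("hann", 0.7), ("hamming", 0.7), ("butterworth", 0.5)]:
    h = apodized_ramp(256, name, cut)
    print(f"{name:12s} cutoff {cut:.1f}: response at Nyquist = {h[128]:.3f}")
```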
NASA Astrophysics Data System (ADS)
Ghosh, A.; LI, B.
2016-12-01
The Alaska-Aleutian subduction zone is one of the most seismically active subduction zones on the planet. It is characterized by remarkable along-strike variations in seismic behavior and more than 50 active volcanoes, and it presents a unique opportunity to serve as a natural laboratory for studying subduction zone processes, including fault dynamics. Yet details of the seismicity pattern, the spatiotemporal distribution of slow earthquakes, the nature of the interaction between slow and fast earthquakes, and their implications for tectonic behavior remain unknown. We use a hybrid seismic network approach, installing 3 mini seismic arrays and 5 stand-alone stations to simultaneously image the subduction fault and a nearby volcanic system (Makushin). The arrays and stations are strategically located on Unalaska Island, where prolific tremor activity was detected and located by a solo pilot array in summer 2012. The hybrid network was operational between summer 2015 and 2016 in continuous mode; one of the three arrays started in summer 2014 and provides additional data covering a longer time span. The pilot array on Akutan Island recorded continuous seismic data for 2 months. An automatic beam-backprojection analysis detects almost daily tremor activity, with an average of more than an hour per day. We imaged two active sources separated by a tremor gap. The western source, right under Unalaska Island, shows the most prolific activity with a hint of steady migration. In addition, we are able to identify more than 10 families of low frequency earthquakes (LFEs) in this area. They are located within the tremor source area as imaged by the beam-backprojection technique. Application of a match filter technique reveals that intervals between LFE activities are shorter during tremor activity and longer during quiet periods. We expect to present new results from freshly obtained data. The experiment A-cubed is illuminating subduction zone processes under Unalaska Island in unprecedented detail.
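The beam-backprojection detection idea can be sketched as a delay-and-sum stack of envelope traces over a grid of candidate sources. The apparent velocity, the envelope choice, and every name below are illustrative assumptions, not the study's actual processing chain.

```python
import numpy as np
from scipy.signal import hilbert

def backproject_tremor(traces, dt, station_xy, grid_xy, v_app=3.5):
    """Delay-and-sum back-projection of array envelopes onto a horizontal grid.
    traces: (n_sta, n_samp) seismograms; v_app: assumed apparent velocity (km/s)."""
    env = np.abs(hilbert(traces, axis=1))        # smooth amplitude envelopes
    power = np.zeros(len(grid_xy))
    for i, src in enumerate(grid_xy):
        delays = np.linalg.norm(station_xy - src, axis=1) / v_app
        shifts = np.round((delays - delays.min()) / dt).astype(int)
        # align each envelope to a common origin time and stack
        stack = sum(np.roll(env[k], -shifts[k]) for k in range(len(traces)))
        power[i] = stack.max()
    return power                                 # peaks mark candidate tremor sources
```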
Gebhard, Cathérine; Fuchs, Tobias A; Fiechter, Michael; Stehli, Julia; Stähli, Barbara E; Gaemperli, Oliver; Kaufmann, Philipp A
2013-10-01
The accuracy of coronary computed tomography angiography (CCTA) in obese persons is compromised by increased image noise. We investigated CCTA image quality acquired on a high-definition 64-slice CT scanner using modern adaptive statistical iterative reconstruction (ASIR). Seventy overweight and obese patients (24 males; mean age 57 years, mean body mass index 33 kg/m(2)) were studied with clinically indicated contrast-enhanced CCTA. Thirty-five patients underwent a standard definition protocol with filtered backprojection reconstruction (SD-FBP), while 35 patients matched for gender, age, body mass index and coronary artery calcifications underwent a novel high definition protocol with ASIR (HD-ASIR). Segment-by-segment image quality was assessed using a four-point scale (1 = excellent, 2 = good, 3 = moderate, 4 = non-diagnostic) and revealed better scores for HD-ASIR compared to SD-FBP (1.5 ± 0.43 vs. 1.8 ± 0.48; p < 0.05). The smallest detectable vessel diameter was also improved: 1.0 ± 0.5 mm for HD-ASIR as compared to 1.4 ± 0.4 mm for SD-FBP (p < 0.001). Average vessel attenuation was higher for HD-ASIR (388.3 ± 109.6 versus 350.6 ± 90.3 Hounsfield Units, HU; p < 0.05), while image noise, signal-to-noise ratio and contrast-to-noise ratio did not differ significantly between reconstruction protocols (p = NS). The estimated effective radiation doses were similar, 2.3 ± 0.1 and 2.5 ± 0.1 mSv (HD-ASIR vs. SD-FBP, respectively). Compared to a standard definition backprojection protocol (SD-FBP), a newer high definition scan protocol in combination with ASIR (HD-ASIR) incrementally improved image quality and visualization of distal coronary artery segments in overweight and obese individuals, without increasing image noise and radiation dose.
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof-of-concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient-specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel-by-voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof-of-concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be the best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
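A minimal sketch of the PCA motion-model machinery, assuming flattened DVFs as training rows; the weighting by a parameterized function of the breathing trace is reduced here to weights passed in directly.

```python
import numpy as np

def train_pca_motion_model(dvfs, n_modes=3):
    """PCA of a training set of displacement vector fields.
    dvfs: (n_samples, n_voxels * 3) array, one flattened DVF per training phase."""
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_modes]            # mean DVF and leading eigen-DVFs

def dvf_at_time(mean, modes, weights):
    """DVF for one time point: mean plus a weighted sum of eigen-DVFs. In the
    paper's scheme the weights come from the breathing trace; here they are
    free parameters to be optimized against the measured projections."""
    return mean + weights @ modes
```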
Low-dose X-ray CT reconstruction via dictionary learning.
Xu, Qiong; Yu, Hengyong; Mou, Xuanqin; Zhang, Lei; Hsieh, Jiang; Wang, Ge
2012-09-01
Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce radiation dose while maintaining the diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to obtain sparse representations. On the other hand, it is important to consider the statistical property of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases; however, there is no proof that this holds for all kinds of structures.
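As a hedged sketch of the dictionary side of such a method, the snippet below learns a redundant patch dictionary with scikit-learn; patch size, atom count, and sparsity weight are illustrative choices, and the statistical iterative reconstruction framework that the paper couples this to is not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_ct_dictionary(image, patch_size=8, n_atoms=256):
    """Learn a redundant dictionary from patches of a (higher-quality) CT image."""
    patches = extract_patches_2d(image, (patch_size, patch_size),
                                 max_patches=20000, random_state=0)
    patches = patches.reshape(len(patches), -1).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)   # remove per-patch DC offset
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       batch_size=64, random_state=0)
    return dico.fit(patches).components_             # (n_atoms, patch_size**2)
```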
Limited view angle iterative CT reconstruction
NASA Astrophysics Data System (ADS)
Kisner, Sherman J.; Haneda, Eri; Bouman, Charles A.; Skatter, Sondre; Kourinny, Mikhail; Bedford, Simon
2012-03-01
Computed Tomography (CT) is widely used in transportation security to screen baggage for potential threats. For example, many airports use X-ray CT to scan the checked baggage of airline passengers. The resulting reconstructions are then used for both automated and human detection of threats. Recently, there has been growing interest in the use of model-based reconstruction techniques in CT security systems. Model-based reconstruction offers a number of potential advantages over more traditional direct reconstruction such as filtered backprojection (FBP). Perhaps one of the greatest advantages is the potential to reduce reconstruction artifacts when non-traditional scan geometries are used. For example, FBP tends to produce very severe streaking artifacts when applied to limited-view data, which can adversely affect subsequent processing such as segmentation and detection. In this paper, we investigate the use of model-based reconstruction in conjunction with limited-view scanning architectures, and we illustrate the value of these methods using transportation security examples. The advantage of limited-view architectures is that they can reduce the cost and complexity of a scanning system; their disadvantage is that limited-view data can result in structured artifacts in reconstructed images. Our method of reconstruction depends on the formulation of both a forward projection model for the system and a prior model that accounts for the contents and densities of typical baggage. In order to evaluate our new method, we use realistic models of baggage with randomly inserted simple simulated objects. Using this approach, we show that model-based reconstruction can substantially reduce artifacts and improve important metrics of image quality such as the accuracy of the estimated CT numbers.
TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in increased dose as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan is sufficient to provide DECT images and electron density with quality similar to two full scans. Our future work includes more phantom studies to validate the performance of our method.
High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography
NASA Astrophysics Data System (ADS)
Yang, Wanneng; Xu, Xiaochun; Duan, Lingfeng; Luo, Qingming; Chen, Shangbin; Zeng, Shaoqun; Liu, Qian
2011-02-01
Tillering is one of the most important agronomic traits because the number of shoots per plant determines panicle number, a key component of grain yield. The conventional method of counting tillers is still manual. Under mass-measurement conditions, accuracy and efficiency gradually degrade as experienced staff become fatigued. Thus, manual measurement, including counting and recording, is not only time consuming but also lacks objectivity. To automate this process, we developed a high-throughput facility, dubbed high-throughput system for measuring automatically rice tillers (H-SMART), for measuring rice tillers based on a conventional x-ray computed tomography (CT) system and an industrial conveyor. Each pot-grown rice plant was delivered into the CT system for scanning via the conveyor equipment. A filtered back-projection algorithm was used to reconstruct the transverse section image of the rice culms. The number of tillers was then automatically extracted by image segmentation. To evaluate the accuracy of this system, three batches of rice at different growth stages (tillering, heading, or filling) were tested, yielding mean absolute errors of 0.22, 0.36, and 0.36, respectively. Subsequently, the complete machine was used under industry conditions to estimate its efficiency, which was 4320 pots per continuous 24 h workday. Thus, the H-SMART could determine the number of tillers of pot-grown rice plants, providing three advantages over the manual tillering method: absence of human disturbance, automation, and high throughput. This facility expands the application of agricultural photonics in plant phenomics.
NASA Astrophysics Data System (ADS)
Espinosa, Luis; Prieto, Flavio; Brancheriau, Loïc.
2017-03-01
Trees play a major ecological and sanitary role in modern cities. Nondestructive imaging methods allow the inner structures of trees to be analyzed without altering their condition. In this study, we are interested in evaluating the influence of wood anisotropy on tomographic image reconstruction using ultrasonic waves, with time-of-flight (TOF) estimation based on the ray-tracing approach, a technique used particularly in the field of exploration seismology to simulate wave fronts in elastic media. Mechanical parameters for six wood species and one isotropic material were defined, and their wave fronts and corresponding TOF values were obtained using the proposed ray-tracing method. When the material was anisotropic, the ray paths between the emitter and the receivers were not straight; therefore, curved rays were obtained for wood and the TOF measurements were affected. To obtain the tomographic image from the TOF measurements, the filtered back-projection algorithm was applied, a technique widely used in straight-ray tomography but also common in wood acoustic tomography. First, discs without inner defects were tested for the isotropic and wood materials (Spruce sample). The isotropic material resulted in a flat color image; for the wood material, a gradient of velocities was obtained. Afterward, centric and eccentric defects were tested, both for the isotropic and orthotropic cases. The results for wood show that, when a reconstruction algorithm intended for straight-ray tomography is used, the images present velocity variations from the border to the center that make it difficult to discriminate possible defects inside the samples, especially in the eccentric cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, J; Fan, J; Gopinatha Pillai, A
Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation as a higher-dose scan for PET imaging in the extended scan field-of-view. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low dose techniques often employ low tube voltage/current accompanied by a smooth filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low dose CT technique for PET attenuation correction based on sparse-view acquisition. That is, instead of acquiring the full set of views, only a fraction of the views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. A number of continuous views per rotation were tested: 984 (full), 246, 123, 82 and 62, corresponding to CT dose reductions of none, 4x, 8x, 12x and 16x. We also simulated sparse-view acquisition by skipping views from the fully acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the fully extended scan field-of-view possesses image quality similar to reconstruction from the fully acquired view data. The results showed a further potential for dose reduction compared to the full acquisition, without sacrificing any significant attenuation support to the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches, which can even further reduce the CT scan dose during a PET/CT exam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y
2015-06-15
Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction, with the auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge-preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs (pCT) deformably registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that significant further development will be required to make CBCT useful for adaptive radiotherapy.
Augmented reality based real-time subcutaneous vein imaging system
Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian
2016-01-01
A novel 3D reconstruction and fast imaging system for subcutaneous veins by augmented reality is presented. The study was performed to reduce the failure rate and time required in intravenous injection by providing augmented vein structures that back-project superimposed veins onto the skin surface of the hand. Images of the subcutaneous vein are captured by two industrial cameras with extra reflective near-infrared lights. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and displayed fused with the reconstructed veins. The veins and skin surface are both reconstructed in 3D space. Results show that the structures can be precisely back-projected to the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, accuracy of vein matching, feature point distance error, duration times, accuracy of skin reconstruction, and augmented display. All experiments are validated with sets of real vein data. The system produces good imaging and augmented reality results at high speed. PMID:27446690
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Niu, T; Xing, L
2015-06-15
Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising within the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the noise amplification of material decomposition using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data, comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR-NLM provides an effective way to reduce the generic magnified image noise of dual-energy material decomposition while preserving resolution. This work is supported in part by NIH grants 7R01HL111141 and 1R01-EB016777. This work is also supported by the Natural Science Foundation of China (NSFC Grant No. 81201091), Fundamental Research Funds for the Central Universities in China, and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
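The HYPR-LR core that HYPR-NLM builds on can be written as a composite-ratio filter. A minimal sketch, assuming a boxcar low-pass F of arbitrary size; HYPR-NLM would replace this boxcar with non-local-means weighting.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hypr_lr(image, composite, kernel=7):
    """HYPR-LR: I_hypr = C * F(I) / F(C), where C is the composite image
    (average of the energy images) and F is a low-pass filter (here a boxcar
    of assumed size 'kernel')."""
    eps = 1e-8                                   # guard against division by zero
    return composite * uniform_filter(image, kernel) / (
        uniform_filter(composite, kernel) + eps)
```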
Localized water reverberation phases and its impact on back-projection images
NASA Astrophysics Data System (ADS)
Yue, H.; Castillo, J.; Yu, C.; Meng, L.; Zhan, Z.
2017-12-01
Coherent radiators imaged by back-projection (BP) are commonly interpreted as part of the rupture process. Nevertheless, artifacts introduced by structure-related phases are rarely discriminated from the rupture process. In this study, we adopt the logic of empirical Green's function (EGF) analysis to discriminate between rupture and structure effects. We re-examine the waveforms and BP images of the 2012 Mw 7.2 Indian Ocean earthquake and an EGF event (Mw 6.2). The P wave codas of both events present similar shapes with a characteristic period of approximately 10 s, which are back-projected as coherent radiators near the trench. S wave BP does not image energy radiation near the trench. We interpret these coda waves as localized water reverberation phases excited near the trench. We perform 2D waveform modeling using a realistic bathymetry model, and find that the sharp near-trench bathymetry traps the acoustic water waves, forming localized reverberation phases. These waves can be imaged as coherent near-trench radiators with features similar to those in the observations. We present a set of methods to discriminate between rupture and propagation effects in BP images, which can serve as a criterion for subevent identification.
Adriaens, Antita; Polis, Ingeborgh; Waelbers, Tim; Vandermeulen, Eva; Dobbeleir, André; De Spiegeleer, Bart; Peremans, Kathelijne
2013-01-01
Functional imaging provides important insights into canine brain pathologies such as behavioral problems. Two (99m)Tc-labeled single photon emission computed tomography (SPECT) cerebral blood flow tracers, ethylcysteinate dimer (ECD) and hexamethylpropylene amine oxime (HMPAO), are commonly used in human medicine and have been used previously in dogs, but an intrasubject comparison of both tracers in dogs is lacking. Therefore, this study investigated whether regional distribution differences between the two tracers occur in dogs, as reported in humans. Eight beagles underwent two SPECT examinations, first with (99m)Tc-ECD and then with (99m)Tc-HMPAO. SPECT scanning was performed with a triple-head gamma camera equipped with ultrahigh resolution parallel hole collimators. Images were reconstructed using filtered backprojection with a Butterworth filter. Emission data were fitted to a template permitting semiquantification using predefined regions or volumes of interest (VOIs). For each VOI, perfusion indices were calculated by normalizing the regional counts per voxel to total brain counts per voxel. The obtained perfusion indices for each region for both tracers were compared with a paired Student's t-test. Significant (P < 0.05) regional differences were seen in the subcortical region and the cerebellum. Both tracers can be used to visualize regional cerebral blood flow in dogs; however, due to the observed regional differences, they are not entirely interchangeable. © 2013 Veterinary Radiology & Ultrasound.
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ voxels) using partitioning strategies in the forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular spans, and projection sizes. Reconstruction time varied linearly with the number of projections and quadratically with projection size, but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of the advanced reconstruction approaches needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
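The Split Bregman solver itself is not reproduced here, but the cost profile the abstract describes is easy to see in any iterative scheme. The sketch below uses plain SIRT (a simpler stand-in, not the paper's algorithm) with a sparse system matrix; the two matrix-vector products per iteration are exactly the forward- and back-projection operations that the paper offloads to GPU kernels.

```python
import numpy as np

def sirt(A, b, n_iter=50):
    """SIRT: x <- x + C A^T R (b - A x), with R and C the inverse row/column
    sums of A. A is an (n_rays x n_voxels) scipy.sparse system matrix and b
    the measured projection data."""
    x = np.zeros(A.shape[1])
    row_sums = np.asarray(abs(A).sum(axis=1)).ravel() + 1e-12
    col_sums = np.asarray(abs(A).sum(axis=0)).ravel() + 1e-12
    for _ in range(n_iter):
        r = (b - A @ x) / row_sums    # forward projection + residual weighting
        x += (A.T @ r) / col_sums     # back-projection of the weighted residual
    return x
```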
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data are excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, the CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Effect of low-dose CT and iterative reconstruction on trabecular bone microstructure assessment
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Baum, Thomas; Nasirudin, Radin A.; Mei, Kai; Garcia, Eduardo G.; Burgkart, Rainer; Rummeny, Ernst J.; Bauer, Jan S.; Noël, Peter B.
2016-03-01
The trabecular bone microstructure is an important factor in the development of osteoporosis, and its deterioration is a well-known effect of the disease. Previous research showed that analysis of trabecular bone microstructure enables more precise diagnosis of osteoporosis than measurement of mineral density alone. Microstructure parameters are assessed on volumetric images of the bone acquired with high-resolution magnetic resonance imaging, high-resolution peripheral quantitative computed tomography, or high-resolution computed tomography (CT), with only CT being applicable to the spine, one of the most clinically relevant fracture sites. However, due to the high radiation exposure involved in imaging the whole spine, these measurements are not applicable in current clinical routine. In this work, twelve vertebrae from three different donors were scanned at standard and low radiation dose. Trabecular bone microstructure parameters were assessed for CT images reconstructed with statistical iterative reconstruction (SIR) and analytical filtered backprojection (FBP). The resulting structure parameters were correlated to the biomechanically determined fracture load of each vertebra. Microstructure parameters assessed for low-dose data reconstructed with SIR correlated significantly with fracture loads, as did parameters assessed for standard-dose data reconstructed with FBP. Ideal results were achieved with low to zero regularization strength, yielding microstructure parameters not significantly different from those assessed for standard-dose FBP data. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be achieved with the proposed methods.
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scanning. However, the image quality obtained with conventional filtered back-projection (FBP) then usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), in the scanning protocol of reduced X-ray tube current, has been demonstrated to deliver significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal-to-noise ratio (SNR) in the reconstructed image but also better preserve its resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
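The action of the auxiliary vector m can be sketched in a few lines: in each outer iteration, every voxel is pulled toward its local median, alongside the TV and data-fidelity updates. The step size beta and window size below are assumed values, not the paper's control parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_prior_step(x, beta=0.2, size=3):
    """Pull each voxel toward its local median (the role of m in TV_MP)."""
    m = median_filter(x, size=size)      # auxiliary image of local medians
    return x + beta * (m - x)            # relaxed step toward the median image
```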
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
A hyperspectral X-ray computed tomography system for enhanced material identification
NASA Astrophysics Data System (ADS)
Wu, Xiaomei; Wang, Qian; Ma, Jinlei; Zhang, Wei; Li, Po; Fang, Zheng
2017-08-01
X-ray computed tomography (CT) can distinguish different materials according to their absorption characteristics. The hyperspectral X-ray CT (HXCT) system proposed in the present work reconstructs each voxel according to its X-ray absorption spectral characteristics. In contrast to a dual-energy or multi-energy CT system, HXCT employs cadmium telluride (CdTe) as the X-ray detector, which provides higher spectral resolution and separates spectral lines, owing to its photon-counting working principle. In this paper, a specimen containing ten different polymer materials in random arrangement was adopted for material identification by HXCT. The filtered back-projection algorithm was applied for image and spectral reconstruction. The first step was to sort the individual material components of the specimen according to their cross-sectional image intensity. The second step was to classify materials with similar intensities according to their reconstructed spectral characteristics. The results demonstrated the feasibility of the proposed material identification process and indicated that the proposed HXCT system has good prospects for a wide range of biomedical and industrial nondestructive testing applications.
Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo; Li, Ke; Budde, Adam; Hsieh, Jiang; Chen, Guang-Hong
2016-08-01
Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two-dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. A generalized NPS model was developed to account for the impact of the bowtie filter and the image object's location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and object location in the SFOV, the shape and rotational symmetries of the 2D local NPS were computed directly from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with NPSs measured from reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to the circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, whether or not the bowtie filter was present. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave and d-wave symmetries were observed in the NPS. (2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of its NPS was found to be different from that of a peripheral ROI in the centered object, even when the physical positions of the two ROIs relative to the isocenter were the same. (3) The potential clinical impact of the highly anisotropic NPS, caused by the interplay of the bowtie filter and the position of the image object, was highlighted in images of specific bar patterns oriented at different angles. The visual perception of the bar patterns was found to be strongly dependent on their orientation. The NPS of CT depends strongly on the bowtie filter and object position. Even if the location of the ROI with respect to the isocenter is fixed, there can be different symmetries in the NPS, depending on the object position and the size of the bowtie filter. For an isolated off-centered object, the NPS of its CT images cannot be represented by the NPS measured from a centered object.
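For reference, here is the standard estimator for such a 2D local NPS from an ensemble of repeated noise-only ROIs, with an assumed pixel pitch; the s-, p-, or d-type symmetry discussed above would then be read off the shape of the resulting 2D map.

```python
import numpy as np

def local_nps_2d(noise_rois, pixel_mm=0.5):
    """2D local NPS: NPS = (dx*dy / (Nx*Ny)) * <|DFT2(roi - ensemble mean)|^2>.
    noise_rois: (n_scans, Ny, Nx) stack of ROIs from repeated noise-only scans."""
    rois = np.asarray(noise_rois, dtype=float)
    rois = rois - rois.mean(axis=0)          # remove the deterministic background
    ny, nx = rois.shape[1:]
    spec = np.abs(np.fft.fft2(rois, axes=(1, 2))) ** 2
    nps = spec.mean(axis=0) * pixel_mm**2 / (nx * ny)
    return np.fft.fftshift(nps)              # zero frequency at the center
```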
Enjilela, Esmaeil; Lee, Ting-Yim; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Bagur, Rodrigo; Islam, Ali; Branch, Kelley; So, Aaron
2018-03-01
We implemented and validated a compressed sensing (CS) based algorithm for reconstructing dynamic contrast-enhanced (DCE) CT images of the heart from sparsely sampled X-ray projections. DCE CT imaging of the heart was performed on five normal and ischemic pigs after contrast injection. DCE images were reconstructed with filtered backprojection (FBP) and CS from all projections (984-view) and from 1/3 of all projections (328-view), and with CS from 1/4 of all projections (246-view). Myocardial perfusion (MP) measurements with each protocol were compared to those with the reference 984-view FBP protocol. Both the 984-view CS and 328-view CS protocols were in good agreement with the reference protocol. The Pearson correlation coefficients of 984-view CS and 328-view CS determined from linear regression analyses were 0.98 and 0.99, respectively. The corresponding mean biases of MP measurement determined from Bland-Altman analyses were 2.7 and 1.2 ml/min/100g. When only 328 projections were used for image reconstruction, CS was more accurate than FBP for MP measurement with respect to 984-view FBP. However, CS failed to generate MP maps comparable to those of 984-view FBP when only 246 projections were used for image reconstruction. DCE heart images reconstructed from one-third of a full projection set with CS were minimally affected by aliasing artifacts, leading to accurate MP measurements with the effective dose reduced to just 33% of the conventional full-view FBP method. The proposed CS sparse-view image reconstruction method could facilitate the implementation of sparse-view dynamic acquisition for ultra-low dose CT MP imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.; Fagan, Amy; Van Zante, Judith F.; Kirkegaard, Jonathan P.; Rohler, David P.; Maniyedath, Arjun; Izen, Steven H.
2013-01-01
A light extinction tomography technique has been developed to monitor ice water clouds upstream of a direct-connected engine in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center (GRC). The system consists of 60 laser diodes with sheet-generating optics and 120 detectors mounted around a 36-inch diameter ring. The sources are pulsed sequentially while the detectors acquire line-of-sight extinction data for each laser pulse. Using computed tomography algorithms, the extinction data are analyzed to produce a plot of the relative water content in the measurement plane. To target the low-spatial-frequency nature of ice water clouds, unique tomography algorithms were developed using filtered back-projection methods and direct inversion methods based on Gaussian basis functions. With a priori knowledge of the mean droplet size and the total water content at some point in the measurement plane, the tomography system can provide near real-time in-situ quantitative full-field total water content data at a measurement plane approximately 5 feet upstream of the engine inlet. Results from ice crystal clouds in the PSL are presented. In addition to the optical tomography technique, laser sheet imaging has also been applied in the PSL to provide planar ice cloud uniformity and relative water content data during facility calibration before the tomography system was available, and also to provide validation data for the tomography system. A comparison between the data from the laser sheet system and the light extinction tomography is also presented. Very good agreement of imaged intensity and water content is demonstrated for both techniques, and comparative studies show excellent agreement in the calculation of bulk total water content averaged over the center of the pipe.
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
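A hedged geometric sketch of the ground back-projection step: with the device pitch from the accelerometer and an assumed camera height and intrinsics (all names illustrative; the paper instead calibrates with two known distances and a linear magnification model), a pixel maps to a ground point, and distances follow from differences of ground points.

```python
import numpy as np

def pixel_to_ground(u, v, f_px, cx, cy, pitch, h):
    """Back-project pixel (u, v) to the ground plane z = 0. The camera sits at
    height h, tilted down by 'pitch' radians; f_px and (cx, cy) are the focal
    length and principal point in pixels (illustrative intrinsics)."""
    d_cam = np.array([(u - cx) / f_px, (v - cy) / f_px, 1.0])  # ray, camera frame
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -s,   c ],
                  [0.0, -c,  -s ]])   # camera axes expressed in the world frame
    d = R @ d_cam
    t = h / -d[2]                     # ray parameter at the ground plane
    return t * d[:2]                  # ground coordinates (same unit as h)

# Distance between two image points, in ground units:
# np.linalg.norm(pixel_to_ground(u1, v1, ...) - pixel_to_ground(u2, v2, ...))
```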
Passive synthetic aperture radar imaging of ground moving targets
NASA Astrophysics Data System (ADS)
Wacks, Steven; Yazici, Birsen
2012-05-01
In this paper we present a method for imaging ground moving targets using passive synthetic aperture radar. A passive radar imaging system uses small, mobile receivers that do not radiate any energy. For these reasons, passive imaging systems offer significant cost, manufacturing, and stealth advantages. The received signals are obtained by multiple airborne receivers collecting scattered waves due to illuminating sources of opportunity such as commercial television, radio, and cell phone towers. We describe a novel forward model and a corresponding filtered-backprojection type image reconstruction method combined with entropy optimization. Our method determines the location and velocity of multiple targets moving at different velocities. Furthermore, it can accommodate arbitrary imaging geometries. We present numerical simulations to verify the imaging method.
Kolditz, Daniel; Meyer, Michael; Kyriakou, Yiannis; Kalender, Willi A
2011-01-07
In C-arm-based flat-detector computed tomography (FDCT) it frequently happens that the patient exceeds the scan field of view (SFOV) in the transaxial direction because of the limited detector size. This results in data truncation and CT image artefacts. In this work three truncation correction approaches for extended field-of-view (EFOV) reconstructions have been implemented and evaluated. An FDCT-based method estimates the patient size and shape from the truncated projections by fitting an elliptical model to the raw data in order to apply an extrapolation. In a camera-based approach the patient is sampled with an optical tracking system and this information is used to apply an extrapolation. In a CT-based method the projections are completed by artificial projection data obtained from the CT data acquired in an earlier exam. For all methods the extended projections are filtered and backprojected with a standard Feldkamp-type algorithm. Quantitative evaluations have been performed by simulations of voxelized phantoms on the basis of the root mean square deviation and a quality factor Q (Q = 1 represents the ideal correction). Measurements with a C-arm FDCT system have been used to validate the simulations and to investigate the practical applicability using anthropomorphic phantoms which caused truncation in all projections. The proposed approaches enlarged the FOV to cover wider patient cross-sections. Thus, image quality inside and outside the SFOV has been improved. Best results have been obtained using the CT-based method, followed by the camera-based and the FDCT-based truncation correction. For simulations, quality factors up to 0.98 have been achieved. Truncation-induced cupping artefacts have been reduced, e.g., from 218% to less than 1% for the measurements. The proposed truncation correction approaches for EFOV reconstructions are an effective way to ensure accurate CT values inside the SFOV and to recover peripheral information outside the SFOV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Park, C; Kauweloa, K
2015-06-15
Purpose: As an alternative to full tomographic imaging techniques such as cone-beam computed tomography (CBCT), there is growing interest in adopting digital tomosynthesis (DTS) for diagnostic as well as therapeutic applications. The aim of this study is to propose a new DTS system using a novel orthogonal scanning technique, which can provide DTS images of superior quality compared to the conventional DTS scanning system. Methods: Unlike the conventional DTS scanning system, the proposed DTS is reconstructed from two sets of orthogonal patient scans: 1) X-ray projections acquired along a transverse trajectory, and 2) an additional set of X-ray projections acquired along the vertical direction at the mid angle of the previous transverse scan. To reconstruct the DTS, we used a modified filtered backprojection technique to account for the different scanning directions of each projection set. We evaluated the performance of our method using numerical planning CT data of a liver cancer patient and a physical pelvis phantom experiment. The results were compared with conventional DTS techniques using single transverse and vertical scanning. Results: Both the numerical simulation and the physical experiment showed that the resolution and contrast of anatomical structures were much clearer using our method. Specifically, compared with transversely scanned DTS, the edge and contrast of anatomical structures along the left-right (LR) direction were comparable; however, considerable enhancement was observed along the superior-inferior (SI) direction using our method. The opposite was observed when compared with vertically scanned DTS. Conclusion: In this study, we propose a novel DTS system using an orthogonal scanning technique. The results indicated that the image quality of our novel DTS system was superior to that of the conventional DTS system. This makes our DTS system potentially useful in various on-line clinical applications.
A novel pre-processing technique for improving image quality in digital breast tomosynthesis.
Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong
2017-02-01
Nonlinear pre-reconstruction processing of the projection data in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, is usually discouraged, since such processing would violate the physics of image formation in CT. However, one can devise a pre-processing step to enhance the detectability of lesions in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since the detection of lesions such as micro-calcifications and masses in breasts is the purpose of using DBT, a technique producing higher lesion detectability is justifiably a virtue. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value that represents the boundary between breast and background. After that, both histogram parts were shifted by an appropriate offset and the histogram-modified projection data were log-transformed. The filtered-backprojection (FBP) algorithm was used for DBT image reconstruction. To evaluate the performance of the proposed method, we computed the detectability index for images reconstructed from clinically acquired data. Typical breast border enhancement artifacts were greatly suppressed and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without invoking additional image artifacts. In this work, we report a novel pre-processing technique that improves the detectability of lesions in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique. The proposed method not only increased lesion detectability but also reduced the typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
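A minimal sketch of the described pre-processing, assuming a precomputed breast mask and treating the histogram offset as a tunable parameter (the paper applies an "appropriate amount" of shift without a fixed value being stated here).

```python
import numpy as np

def preprocess_dbt_projection(raw, breast_mask, offset=100.0):
    """Histogram modification before the log transform: flatten the background
    to the breast/background boundary value, shift, then log-transform."""
    p = raw.astype(float).copy()
    boundary = p[breast_mask].max()   # brightest breast pixel ~ boundary value
    p[~breast_mask] = boundary        # collapse the background histogram part
    p += offset                       # shift both histogram parts
    return -np.log(p / p.max())       # log transform to line-integral form
```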
Machine-learning model observer for detection and localization tasks in clinical SPECT-MPI
NASA Astrophysics Data System (ADS)
Parages, Felipe M.; O'Connor, J. Michael; Pretorius, P. Hendrik; Brankov, Jovan G.
2016-03-01
In this work we propose a machine-learning MO based on Naive-Bayes classification (NB-MO) for the diagnostic tasks of detection, localization, and assessment of perfusion defects in clinical SPECT myocardial perfusion imaging (MPI), with the goal of evaluating several image reconstruction methods used in clinical practice. NB-MO uses image features extracted from polar maps to predict the lesion detection, localization, and severity scores given by human readers in a series of 3D SPECT-MPI studies. The population used to tune (i.e., train) the NB-MO consisted of simulated SPECT-MPI cases - divided into normals or cases with lesions of variable size and location - reconstructed using the filtered backprojection (FBP) method. An ensemble of five human specialists (physicians) read a subset of simulated reconstructed images and assigned a perfusion score to each region of the left ventricle (LV). Polar maps generated from the simulated volumes, along with their corresponding human scores, were used to train five NB-MOs (one per human reader), which were subsequently applied (i.e., tested) on three sets of clinical SPECT-MPI polar maps in order to predict human detection and localization scores. The clinical "testing" population comprises healthy individuals and patients suffering from coronary artery disease (CAD) in three possible regions, namely: LAD, LCx, and RCA. Each clinical case was reconstructed using three reconstruction strategies, namely: FBP with no SC (i.e., scatter compensation), OSEM with the Triple Energy Window (TEW) SC method, and OSEM with Effective Source Scatter Estimation (ESSE) SC. Alternative free-response ROC (AFROC) analysis of perfusion scores shows that NB-MO predicts a higher human performance for scatter-compensated reconstructions, in agreement with what has been reported in the published literature. These results suggest that NB-MO has good potential to generalize well to reconstruction methods not used during training, even for reasonably dissimilar datasets (i.e., simulated vs. clinical).
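A minimal sketch of a Naive-Bayes model observer in the spirit of the abstract above, assuming scikit-learn's Gaussian Naive-Bayes, sector-mean features from polar maps, and an integer perfusion-score scale; the feature extraction and score range are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def polar_map_features(polar_map, n_sectors=17):
    """Mean intensity per angular sector of a (radius, angle) polar map."""
    sectors = np.array_split(polar_map, n_sectors, axis=1)
    return np.array([s.mean() for s in sectors])

# Training: simulated polar maps with stand-in reader scores (0-4 assumed).
train_maps = rng.random((200, 32, 64))
train_scores = rng.integers(0, 5, size=200)
X_train = np.array([polar_map_features(m) for m in train_maps])
nb_mo = GaussianNB().fit(X_train, train_scores)

# Testing: predict reader scores for (here synthetic) clinical polar maps.
test_maps = rng.random((10, 32, 64))
X_test = np.array([polar_map_features(m) for m in test_maps])
print(nb_mo.predict(X_test))
```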
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. In this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from the solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, for the same resolution, a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. Major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing; this correct modelling was the major cause of improved contrast at the same background noise. Less important factors were the choice of algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
NASA Astrophysics Data System (ADS)
Fan, W.; Shearer, P. M.
2017-12-01
Fan and Shearer [2016] analyzed the 2012 Mw 7.2 Sumatra earthquake and reported that the earthquake dynamically triggered early aftershock/aftershocks 150 km away from the mainshock and 50 s later. The early aftershock/aftershocks were detected with teleseismic P-wave back-projection, coincided with passing surface waves, and showed observable seismic waveforms in a wide frequency range (0.02-5 Hz). Recently, however, Yue et al. [2017] interpreted these coda arrivals as water reverberations from the mainshock, based mostly on EGF analysis of a nearby M6 earthquake and a water-phase synthetic test. Here, we show detailed back-projection and waveform analysis of three M6 earthquakes within 100 km of the Mw 7.2 earthquake, including the EGF event analyzed in Yue et al. [2017]. In addition, we examine the waveforms of three M5.5 reverse faulting earthquakes close to our detected early aftershock landward of the trench. Our results show that the coda energy in question is more likely caused by a separate earthquake near the trench than by a mainshock water reverberation phase, thus supporting our earlier conclusion that the detected coherent radiators are likely to be dynamically triggered early aftershock/aftershocks.
IRIS DMC products help explore the Tohoku earthquake
NASA Astrophysics Data System (ADS)
Trabant, C.; Hutko, A. R.; Bahavar, M.; Ahern, T. K.; Benson, R. B.; Casey, R.
2011-12-01
Within two hours after the great March 11, 2011 Tohoku earthquake, the IRIS DMC started publishing automated data products through its Searchable Product Depository (SPUD), which provides quick viewing of many aspects of the data and preliminary analysis of this great earthquake. These products are part of the DMC's data product development effort and are intended to serve many purposes: stepping-stones for future research projects, data visualizations, data characterization, research result comparisons, as well as outreach material. Our current and soon-to-be-released products that allow users to explore this and other global M>6.0 events include 1) Event Plots, a suite of maps, record sections, regional vespagrams, and P-coda stacks; 2) USArray Ground Motion Visualizations, which show the vertical and horizontal global seismic wavefield sweeping across USArray, including minor- and major-arc surface waves and their polarizations; 3) back-projection movies that show the time history of short-period energy from the rupture; 4) R1 source-time functions that show approximate duration and source directivity; and 5) aftershock sequence maps and statistics movies based on NEIC alerts that self-update every hour in the first few days following the mainshock. Higher-order information for the Tohoku event that can be inferred from our products includes a rupture duration of order 150 s (P-coda stacks, back-projections, R1 STFs), a rupture extent of approximately 400 km along strike primarily towards the south (back-projections, R1 STFs, aftershock animation), and a very low rupture velocity (back-projections, R1 STFs). All of our event-based products are automated and consistently produced shortly after the event so that they may serve as familiar baselines for the seismology research community. More details on these and other existing products are available at: http://www.iris.edu/dms/products/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate a novel low-dose CT (LdCT) image reconstruction strategy for lung CT imaging in radiation therapy. Methods: The proposed approach consists of four steps: (1) use the traditional filtered back-projection (FBP) method to reconstruct the LdCT image; (2) calculate the structure similarity (SSIM) index between the FBP-reconstructed LdCT image and a set of normal-dose CT (NdCT) images, and select the NdCT image with the highest SSIM as the learning source; (3) segment the NdCT source image into lung and outside tissue regions via simple thresholding, and adopt multiple linear regression to learn a high-order Markov random field (MRF) pattern for each tissue region in the NdCT source image; (4) segment the FBP-reconstructed LdCT image into lung and outside regions as well, and apply the learnt MRF prior in each tissue region for statistical iterative reconstruction of the LdCT image following the penalized weighted least squares (PWLS) framework. Quantitative evaluation of the reconstructed images was based on the signal-to-noise ratio (SNR), local binary pattern (LBP), and histogram of oriented gradients (HOG) metrics. Results: It was observed that the lung and outside tissue regions have different MRF patterns predicted from the NdCT. Visual inspection showed that our method clearly outperformed the traditional FBP method. Compared with the region-smoothing PWLS method, our method has, on average, a 13% increase in SNR, a 15% decrease in LBP difference, and a 12% decrease in HOG difference from the reference standard for all regions of interest, which indicates the superior performance of the proposed method in terms of image resolution and texture preservation. Conclusion: We proposed a novel LdCT image reconstruction method that learns similar image characteristics from a set of NdCT images; the NdCT source image does not need to be a scan of the same subject. This approach is particularly important for enhancing image quality in radiation therapy.
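Step (2) of the pipeline above, selecting the NdCT learning source by SSIM, can be sketched as follows; the image arrays are stand-ins, and scikit-image's SSIM implementation is assumed as the similarity metric.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
ldct_fbp = rng.random((128, 128))                 # FBP low-dose image
ndct_library = [rng.random((128, 128)) for _ in range(5)]

# Pick the normal-dose image most structurally similar to the LdCT FBP image.
scores = [ssim(ldct_fbp, nd, data_range=1.0) for nd in ndct_library]
source = ndct_library[int(np.argmax(scores))]     # learning source for the MRF prior
print("best NdCT index:", int(np.argmax(scores)))
```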
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected to a four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Z; Jiang, W; Stuart, B
Purpose: Since electrons are easily scattered, the virtual source position for electrons is expected to be located below the x-ray target of medical linacs. However, the effective SSD method yields an electron virtual source position above the x-ray target for some applicators and energies in Siemens linacs. In this study, we propose to use the IC Profiler (Sun Nuclear) for evaluating the electron virtual source position for the standard electron applicators at various electron energies. Methods: Profile measurements at various nominal source-to-detector distances (SDDs) of 100-115 cm were carried out for electron beam energies of 6-18 MeV. Two methods were used: one used a 0.125 cc ion chamber (PTW, Type 31010) with buildup mounted in a PTW water tank without water; the other used the IC Profiler with buildup to achieve charged particle equilibrium. The full width at half maximum (FWHM) method was used to determine the field sizes of the measured profiles. Back-projecting (along a straight line) the distance between the 50% points of the beam profiles at the various SDDs yielded the virtual source position for each applicator. Results: The profiles were obtained and the field sizes were determined by FWHM. The virtual source positions were determined through backprojection of profiles for the applicators (5, 10, 15, 20, 25). For instance, they were 96.415 cm (IC Profiler) vs 95.844 cm (scanning ion chamber) for 9 MeV electrons with the 10×10 cm applicator, and 97.160 cm vs 97.161 cm for 12 MeV electrons with the 10×10 cm applicator. The differences in the virtual source positions between the IC Profiler and the scanning ion chamber were within 1.5%. Conclusion: The IC Profiler provides a practical method for determining the electron virtual source position, and its results are consistent with those obtained from scanning ion chamber profiles with buildup.
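The back-projection step described above amounts to a linear extrapolation of field width versus distance: the 50% points spread linearly with SDD, so the distance at which the extrapolated width vanishes locates the virtual source along the beam axis. A minimal sketch with made-up numbers (not the study's measurements):

```python
import numpy as np

# Measured field widths (between the 50% profile points) at several nominal
# SDDs; the values below are illustrative placeholders.
sdd = np.array([100.0, 105.0, 110.0, 115.0])    # cm
fwhm = np.array([10.40, 10.93, 11.47, 12.00])   # cm

# Back-project with a straight line: width grows linearly with distance,
# so the zero-width intercept is the virtual source position.
slope, intercept = np.polyfit(sdd, fwhm, 1)
z_virtual = -intercept / slope                  # SDD at which width -> 0
print(f"virtual source at SDD = {z_virtual:.2f} cm")
```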
NASA Astrophysics Data System (ADS)
Jacobson, M. W.; Ketcha, M. D.; Capostagno, S.; Martin, A.; Uneri, A.; Goerres, J.; De Silva, T.; Reaungamornrat, S.; Han, R.; Manbachi, A.; Stayman, J. W.; Vogt, S.; Kleinszig, G.; Siewerdsen, J. H.
2018-01-01
Modern cone-beam CT systems, especially C-arms, are capable of diverse source-detector orbits. However, geometric calibration of these systems using conventional configurations of spherical fiducials (BBs) may be challenged for novel source-detector orbits and system geometries. In part, this is because the BB configurations are designed with careful forethought regarding the intended orbit so that BB marker projections do not overlap in projection views. Examples include helical arrangements of BBs (Rougee et al 1993 Proc. SPIE 1897 161-9) such that markers do not overlap in projections acquired from a circular orbit and circular arrangements of BBs (Cho et al 2005 Med. Phys. 32 968-83). As a more general alternative, this work proposes a calibration method based on an array of line-shaped, radio-opaque wire segments. With this method, geometric parameter estimation is accomplished by relating the 3D line equations representing the wires to the 2D line equations of their projections. The use of line fiducials simplifies many challenges with fiducial recognition and extraction in an orbit-independent manner. For example, their projections can overlap only mildly, for any gantry pose, as long as the wires are mutually non-coplanar in 3D. The method was tested in application to circular and non-circular trajectories in simulation and in real orbits executed using a mobile C-arm prototype for cone-beam CT. Results indicated high calibration accuracy, as measured by forward and backprojection/triangulation error metrics. Triangulation errors on the order of microns and backprojected ray deviations uniformly less than 0.2 mm were observed in both real and simulated orbits. Mean forward projection errors less than 0.1 mm were observed in a comprehensive sweep of different C-arm gantry angulations. Finally, successful integration of the method into a CT imaging chain was demonstrated in head phantom scans.
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: To experimentally investigate whether a computed tomography (CT) system based on CdZnTe (CZT) detectors in conjunction with a least-squares parameter estimation technique can be used to decompose four different materials. Methods: The material decomposition process was divided into a segmentation task and a quantification task. A least-squares minimization algorithm was used to decompose materials with five measurements of the energy dependent linear attenuation coefficients. A small field-of-view energy discriminating CT system was built. The CT system consisted of an x-ray tube, a rotational stage, and an array of CZT detectors. The CZT array was composed of 64 pixels, each of which is 0.8×0.8×3 mm. Images were acquired at 80 kVp in fluoroscopic mode at 50 ms per frame. The detector resolved the x-ray spectrum into energy bins of 22–32, 33–39, 40–46, 47–56, and 57–80 keV. Four phantoms were constructed from polymethylmethacrylate (PMMA), polyethylene, polyoxymethylene, hydroxyapatite, and iodine. Three phantoms were composed of three materials with embedded hydroxyapatite (50, 150, 250, and 350 mg/ml) and iodine (4, 8, 12, and 16 mg/ml) contrast elements. One phantom was composed of four materials with embedded hydroxyapatite (150 and 350 mg/ml) and iodine (8 and 16 mg/ml). Calibrations consisted of PMMA phantoms with either hydroxyapatite (100, 200, 300, 400, and 500 mg/ml) or iodine (5, 15, 25, 35, and 45 mg/ml) embedded. Filtered backprojection and a ramp filter were used to reconstruct images from each energy bin. Material segmentation and quantification were performed and compared between different phantoms. Results: All phantoms were decomposed accurately, but some voxels in the base material regions were incorrectly identified. Average quantification errors of hydroxyapatite/iodine were 9.26/7.13%, 7.73/5.58%, and 12.93/8.23% for the three-material PMMA, polyethylene, and polyoxymethylene phantoms, respectively. The average errors for the four-material phantom were 15.62% and 2.76% for hydroxyapatite and iodine, respectively. Conclusions: The calibrated least-squares minimization technique of decomposition performed well in breast imaging tasks with an energy resolving detector. This method can provide material basis images containing concentrations of the relevant materials that can potentially be valuable in the diagnostic process. PMID:21361191
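A minimal sketch of the per-voxel least-squares decomposition described above, with five energy-bin measurements and four basis materials; the attenuation table below is a made-up placeholder for the calibrated values.

```python
import numpy as np

# Rows: 5 energy bins; columns: 4 basis materials.
# mu[i, j] = linear attenuation (1/cm) of material j in bin i (placeholder values).
mu = np.array([
    [0.35, 0.21, 0.60, 2.10],
    [0.31, 0.19, 0.52, 1.60],
    [0.28, 0.18, 0.47, 1.25],
    [0.26, 0.17, 0.43, 0.95],
    [0.23, 0.16, 0.39, 0.70],
])

def decompose(voxel_mu):
    """Solve mu @ f = voxel_mu for material fractions f (least squares)."""
    f, *_ = np.linalg.lstsq(mu, voxel_mu, rcond=None)
    return f

# A voxel that is 70% material 0 and 30% material 2:
measured = mu @ np.array([0.7, 0.0, 0.3, 0.0])
print(decompose(measured))
```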
Zero cylinder coordinate system approach to image reconstruction in fan beam ICT
NASA Astrophysics Data System (ADS)
Yan, Yan-Chun; Xian, Wu; Hall, Ernest L.
1992-11-01
State-of-the-art transform algorithms produce excellent and efficient reconstructed images in most applications, especially in medical and industrial CT. Based on the zero cylinder coordinate (ZCC) system presented in this paper, a new transform algorithm for image reconstruction in fan-beam industrial CT is proposed. It greatly reduces the computation required for backprojection, needing only two INC instructions to calculate the weighting factor and the subcoordinate. A new backprojector based on the ZCC method is designed, which simplifies its pipeline (assembly-line) mechanism. Finally, simulation results on a microcomputer are presented, demonstrating that this method is effective and practical.
Back-Projection Imaging of extended, diffuse seismic sources in volcanic and hydrothermal systems
NASA Astrophysics Data System (ADS)
Kelly, C. L.; Lawrence, J. F.; Beroza, G. C.
2017-12-01
Volcanic and hydrothermal systems exhibit a wide range of seismicity that is directly linked to fluid and volatile activity in the subsurface and that can be indicative of imminent hazardous activity. Seismograms recorded near volcanic and hydrothermal systems typically contain "noisy" records, but in fact, these complex signals are generated by many overlapping low-magnitude displacements and pressure changes at depth. Unfortunately, excluding times of high-magnitude eruptive activity that typically occur infrequently relative to the length of a system's entire eruption cycle, these signals often have very low signal-to-noise ratios and are difficult to identify and study using established seismic analysis techniques (i.e. phase-picking, template matching). Arrays of short-period and broadband seismic sensors are proven tools for monitoring short- and long-term changes in volcanic and hydrothermal systems. Time-reversal techniques (i.e. back-projection) that are improved by additional seismic observations have been successfully applied to locating volcano-seismic sources recorded by dense sensor arrays. We present results from a new computationally efficient back-projection method that allows us to image the evolution of extended, diffuse sources of volcanic and hydrothermal seismicity. We correlate short time-window seismograms from receiver pairs to find coherent signals and propagate them back in time to potential source locations in a 3D subsurface model. The strength of coherent seismic signal associated with any potential source-receiver-receiver geometry is equal to the correlation of the short time-windows of seismic records at appropriate time lags as determined by the velocity structure and ray paths. We stack (sum) all short time-window correlations from all receiver pairs to determine the cumulative coherence of signals at each potential source location. Through stacking, coherent signals from extended and/or repeating sources of short-period energy radiation interfere constructively while background noise signals interfere destructively, such that the most likely source locations of the observed seismicity are illuminated. We compile results to analyze changes in the distribution and prevalence of these sources throughout a system's entire eruptive cycle.
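A minimal sketch of the correlation-stacking back-projection described above, under toy assumptions (a homogeneous velocity model, three stations in 2D, and random traces standing in for seismograms):

```python
import numpy as np

fs = 100.0                                    # sampling rate (Hz)
v = 2000.0                                    # assumed uniform velocity (m/s)
stations = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0]])  # x, y (m)
rng = np.random.default_rng(2)
traces = rng.standard_normal((3, 2000))       # toy seismograms, 20 s each

def coherence_at(src_xy, start=1000, win=200):
    """Stack pairwise window correlations at the lags implied by src_xy."""
    d = np.linalg.norm(stations - np.asarray(src_xy, dtype=float), axis=1)
    lags = np.round(d / v * fs).astype(int)   # travel times in samples
    total = 0.0
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            dl = lags[j] - lags[i]            # differential moveout
            a = traces[i, start:start + win]
            b = traces[j, start + dl:start + dl + win]
            total += float(np.dot(a, b))
    return total

# Scan a coarse grid of trial sources; the maximum marks the likely source.
grid = [(x, y) for x in range(0, 4001, 1000) for y in range(0, 4001, 1000)]
print("most coherent trial source:", max(grid, key=coherence_at))
```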
Tight-frame based iterative image reconstruction for spectral breast CT
Zhao, Bo; Gao, Hao; Ding, Huanjun; Molloi, Sabee
2013-01-01
Purpose: To investigate a tight-frame based iterative reconstruction (TFIR) technique for spectral breast computed tomography (CT) that uses fewer projections while achieving greater image quality. Methods: The experimental data were acquired with a fan-beam breast CT system based on a cadmium zinc telluride photon-counting detector. The images were reconstructed from a varying number of projections using the TFIR and filtered backprojection (FBP) techniques, and the image quality of the two techniques was evaluated. Spatial resolution was evaluated using a high-resolution phantom, and the contrast-to-noise ratio (CNR) was evaluated using a postmortem breast sample. The postmortem breast samples were decomposed into water, lipid, and protein contents based on images reconstructed with TFIR from 204 projections and with FBP from 614 projections. The volumetric fractions of water, lipid, and protein from the image-based measurements for both TFIR and FBP were compared to chemical analysis. Results: The spatial resolution and CNR were comparable for the images reconstructed by TFIR with 204 projections and FBP with 614 projections. Both reconstruction techniques provided accurate quantification of the water, lipid, and protein composition of the breast tissue when compared with data from the reference standard chemical analysis. Conclusions: Accurate breast tissue decomposition can be achieved with threefold fewer projection images by the TFIR technique without any reduction in image spatial resolution or CNR. This can result in a two-thirds reduction of the patient dose in a multislit, multislice spiral CT system, in addition to reduced scanning time. PMID:23464320
Debatin, Maurice; Hesser, Jürgen
2015-01-01
Reducing the time required for data acquisition and reconstruction in industrial CT decreases the operating time of the X-ray machine and therefore increases sales. This can be achieved by reducing the dose and pulse length of the CT system and the number of projections used for reconstruction. In this paper, a novel generalized anisotropic total variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: total variation (TV), adaptive-weighted total variation (AwTV), and filtered backprojection. The novel regularization function uses a priori information about the gradient magnitude distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable with AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a relative root mean square error that is up to 11 times lower than TV-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g., for an SNR of 223 and 40 projections). To obtain the same reconstruction quality as TV, the projection number and pulse length (and hence the acquisition time and dose) can be reduced by a factor of approximately 3.5 with AwTV and approximately 6.7 with our proposed algorithm.
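To make the contrast with plain TV concrete, the following sketch shows an adaptive-weighted TV penalty of the kind compared above, where gradient terms at strong edges are down-weighted; the Gaussian weighting function is the usual AwTV choice and stands in for (but is not identical to) the paper's GATV form.

```python
import numpy as np

def tv(img):
    """Plain anisotropic total variation: L1 norm of finite differences."""
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return np.abs(gx).sum() + np.abs(gy).sum()

def weighted_tv(img, delta=0.01):
    """Adaptive weights: large gradients (edges) are penalized less."""
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    wx = np.exp(-(gx / delta) ** 2)
    wy = np.exp(-(gy / delta) ** 2)
    return (wx * np.abs(gx)).sum() + (wy * np.abs(gy)).sum()

# A sharp square: plain TV penalizes its edges, weighted TV nearly ignores them.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
print(tv(img), weighted_tv(img))
```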
Cumulative phase delay imaging - A new contrast enhanced ultrasound modality
NASA Astrophysics Data System (ADS)
Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo
2015-10-01
Recently, a new acoustic marker for ultrasound contrast agents (UCAs) has been introduced. A cumulative phase delay (CPD) between the second harmonic and fundamental pressure wave field components is in fact observable for ultrasound propagating through UCAs. This phenomenon is absent in the case of tissue nonlinearity and depends on the insonating pressure and frequency, the UCA concentration, and the propagation path length through UCAs. In this paper, ultrasound images based on this marker are presented. The ULA-OP research platform, in combination with an LA332 linear array probe (Esaote, Firenze, Italy), was used to image a gelatin phantom containing a PVC plate (used as a reflector) and a cylindrical cavity measuring 7 mm in diameter (placed between the observation point and the PVC plate). The cavity contained a 240 µL/L SonoVue® UCA concentration. Two insonating frequencies (3 MHz and 2.5 MHz) were used to scan the gelatin phantom. A mechanical index MI = 0.07, measured in water at the cavity location with an HGL-0400 hydrophone (Onda, Sunnyvale, CA), was utilized. By processing the ultrasound signals backscattered from the plate, ultrasound images were generated in a tomographic fashion using the filtered back-projection method. As already observed in previous studies, significantly higher CPD values are measured when imaging at a frequency of 2.5 MHz, as compared to imaging at 3 MHz. In conclusion, these results confirm the applicability of the discussed CPD as a marker for contrast imaging. Comparison with standard contrast-enhanced ultrasound imaging modalities will be the focus of future work.
Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction
NASA Astrophysics Data System (ADS)
Wang, Adam S.; Webster Stayman, J.; Otake, Yoshito; Kleinszig, Gerhard; Vogt, Sebastian; Gallia, Gary L.; Khanna, A. Jay; Siewerdsen, Jeffrey H.
2014-02-01
The potential for statistical image reconstruction methods such as penalized-likelihood (PL) to improve C-arm cone-beam CT (CBCT) soft-tissue visualization for intraoperative imaging over conventional filtered backprojection (FBP) is assessed in this work by making a fair comparison in relation to soft-tissue performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (~40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution where PL provides a contrast-to-noise ratio increase up to 1.4-2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at dose above ~1.7 mGy and benefits from 50% sparsity at dose below ~1.0 mGy. Therefore, an appropriate sampling strategy along with the improved low-contrast visualization offered by statistical reconstruction demonstrates the potential for extending intraoperative C-arm CBCT to applications in soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgeries by overcoming conventional tradeoffs in noise, spatial resolution, and dose.
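A minimal sketch of the non-quadratic Huber penalty used as the edge-preserving regularizer in the PL reconstruction above: quadratic for small neighbor differences (noise smoothing) and linear for large ones (edge preservation); the transition parameter delta is an illustrative assumption.

```python
import numpy as np

def huber(t, delta=0.005):
    """Quadratic near zero, linear in the tails (edge-preserving)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= delta,
                    0.5 * t ** 2,
                    delta * np.abs(t) - 0.5 * delta ** 2)

def roughness(img, delta=0.005):
    """Huber penalty summed over horizontal and vertical neighbor pairs."""
    return (huber(np.diff(img, axis=0), delta).sum()
            + huber(np.diff(img, axis=1), delta).sum())

img = np.random.default_rng(3).random((32, 32)) * 0.02  # toy attenuation image
print(roughness(img))
```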
Empirical projection-based basis-component decomposition method
NASA Astrophysics Data System (ADS)
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
Effects of small variations of speed of sound in optoacoustic tomographic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel, E-mail: dr@tum.de
2014-07-15
Purpose: Speed of sound difference in the imaged object and surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis showcases that the errors in the time-of-flight of the signals predicted by considering the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound allows improving optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media.
A flexible, small positron emission tomography prototype for resource-limited laboratories
NASA Astrophysics Data System (ADS)
Miranda-Menchaca, A.; Martínez-Dávalos, A.; Murrieta-Rodríguez, T.; Alva-Sánchez, H.; Rodríguez-Villafuerte, M.
2015-05-01
Modern small-animal PET scanners typically consist of a large number of detectors along with complex electronics to provide tomographic images for research in the preclinical sciences that use animal models. These systems can be expensive, especially for resource-limited educational and academic institutions in developing countries. In this work we show that a small-animal PET scanner can be built with a relatively reduced budget while, at the same time, achieving relatively high performance. The prototype consists of four detector modules, each composed of pixelated LYSO crystal arrays (individual crystal elements of dimensions 1 × 1 × 10 mm³) coupled to position-sensitive photomultiplier tubes. Tomographic images are obtained by rotating the subject to complete enough projections for image reconstruction. Image quality was evaluated for different reconstruction algorithms, including filtered back-projection and iterative reconstruction with maximum-likelihood expectation maximization and maximum a posteriori methods. The system matrix was computed both with geometric considerations and by Monte Carlo simulations. Prior to image reconstruction, Fourier data rebinning was used to increase the number of lines of response used. The system was evaluated for energy resolution at 511 keV (best 18.2%), system sensitivity (0.24%), spatial resolution (best 0.87 mm), scatter fraction (4.8%) and noise equivalent count rate. The system can be scaled up to include up to 8 detector modules, increasing detection efficiency, and its price may be reduced as newer solid-state detectors become available to replace the traditional photomultiplier tubes. Prototypes like this may prove to be very valuable for educational, training, preclinical and other biological research purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets acquired with different energy spectra. To relax this data requirement, we propose an iterative DECT reconstruction algorithm that uses one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarity of each pixel with other pixels is calculated by an exponential function of pixel-value differences. We assume that these material similarities persist in the second CT scan, although the pixel values may vary. An iterative method is designed to reconstruct the second CT image from the reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We have developed an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as accurate as conventional two-full-scan DECT.
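A minimal sketch of the similarity weighting described above: each pixel's estimate is the average of other pixel values weighted by an exponential function of pixel-value differences. Restricting the average to a local neighborhood and the value of sigma are illustrative simplifications of the authors' formulation.

```python
import numpy as np

def similarity_estimate(img, sigma=20.0, radius=3):
    """Similarity-weighted average of neighbors for every pixel."""
    out = np.zeros_like(img, dtype=float)
    padded = np.pad(img.astype(float), radius, mode="reflect")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Exponential similarity in pixel value (e.g., HU) differences.
            wgt = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma ** 2))
            wgt[radius, radius] = 0.0          # exclude the pixel itself
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

img = np.random.default_rng(4).normal(100.0, 10.0, (32, 32))
est = similarity_estimate(img)
print(float(np.abs(img - est).mean()))
```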
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
Quantitative comparison of noise texture across CT scanners from different manufacturers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, Justin B.; Christianson, Olav; Samei, Ehsan
2012-10-15
Purpose: To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). Methods: The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI) and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared, and kernel pairs that minimized the RMSD and |PFD| were identified. Results: The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel, but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. Conclusions: It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners can be achieved varies and is limited by the kernels available on each scanner.
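The comparison pipeline above can be sketched as follows: estimate a normalized 2D NPS from uniform ROIs, radially average it, then compare two kernels' 1D spectra via RMSD and peak-frequency difference. The ROI data and pixel size are placeholders, and the visual response filtering is omitted for brevity.

```python
import numpy as np

def nps_2d(rois):
    """Average periodogram of mean-subtracted uniform ROIs, unit integral."""
    spectra = [np.abs(np.fft.fftshift(np.fft.fft2(r - r.mean()))) ** 2
               for r in rois]
    nps = np.mean(spectra, axis=0)
    return nps / nps.sum()

def radial_average(nps, pixel_mm=0.43, n_bins=32):
    """Collapse a 2D NPS to a 1D radial profile."""
    n = nps.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy).ravel()
    edges = np.linspace(0.0, fr.max(), n_bins)
    idx = np.digitize(fr, edges)
    vals = nps.ravel()
    return np.array([vals[idx == k].mean() if np.any(idx == k) else 0.0
                     for k in range(1, n_bins)])

rng = np.random.default_rng(5)
nps_a = radial_average(nps_2d(rng.standard_normal((8, 64, 64))))
nps_b = radial_average(nps_2d(rng.standard_normal((8, 64, 64))))

rmsd = float(np.sqrt(np.mean((nps_a - nps_b) ** 2)))
pfd = abs(int(np.argmax(nps_a)) - int(np.argmax(nps_b)))  # in bin units
print("RMSD:", rmsd, "| PFD (bins):", pfd)
```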
Hasegawa, Hiroaki; Mihara, Yoshiyuki; Ino, Kenji; Sato, Jiro
2014-11-01
The purpose of this study was to evaluate the radiation dose reduction to patients and radiologists in computed tomography (CT) guided examinations of the thoracic region using CT fluoroscopy. Image quality of the real-time filtered back-projection (RT-FBP) images and the real-time adaptive iterative dose reduction (RT-AIDR) images was evaluated with respect to the noise and artifacts considered to affect CT fluoroscopy. The image standard deviation was improved for fluoroscopy settings below 30 mA at 120 kV. With regard to the visibility and amount of artifact generated by the needle attached to the chest phantom, there was no significant difference between the RT-FBP images at 120 kV, 20 mA and the RT-AIDR images at low-dose conditions (greater than 80 kV, 30 mA and less than 120 kV, 20 mA). The results suggest that it is possible to reduce the radiation dose by approximately 34% at maximum using RT-AIDR while maintaining image quality equivalent to the RT-FBP images at 120 kV, 20 mA.
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001), while improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-01-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878
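A minimal sketch of the PICCS-style objective underlying spectral PICCS, evaluated (not minimized) here: the narrow-bin image trades off sparsity of its own gradient against sparsity of its difference from the full-spectrum prior, subject to data fidelity. The toy forward model and the weights alpha and beta are illustrative assumptions.

```python
import numpy as np

def grad_l1(x):
    """L1 norm of finite differences (anisotropic TV)."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def piccs_cost(x, x_prior, A, y, alpha=0.5, beta=0.1):
    """alpha weights prior-difference sparsity vs. the image's own sparsity."""
    tv_term = alpha * grad_l1(x - x_prior) + (1.0 - alpha) * grad_l1(x)
    fidelity = 0.5 * np.sum((A(x) - y) ** 2)
    return fidelity + beta * tv_term

# Toy forward model: column sums standing in for projections.
A = lambda x: x.sum(axis=0)
rng = np.random.default_rng(6)
x_prior = rng.random((16, 16))                         # full-spectrum FBP image
x = x_prior + 0.01 * rng.standard_normal((16, 16))     # candidate bin image
y = A(x_prior)
print(piccs_cost(x, x_prior, A, y))
```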
Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua
2014-01-01
This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image reconstructed via the standard filtered backprojection (FBP) algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications, including improved temporal resolution reconstruction, 4D respiratory phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. In order for an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work, the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
NASA Astrophysics Data System (ADS)
Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.
2016-02-01
In this paper, we propose a next-generation type of CT examination, so-called interior computed tomography (ICT), which may reduce the dose to the patient outside the target region of interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target, which yields imaging benefits such as reduced scatter, lower system cost, and reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Two ratios of the ROI to the whole phantom size (0.28 and 0.14) and four projection numbers (360, 180, 90, and 45) were tested. Using the CS framework, we successfully reconstructed ICT images of substantially high quality even from few-view projection data, while preserving sharp edges in the images.
Laser microbeam CT scanning of dosimetry gels
NASA Astrophysics Data System (ADS)
Maryanski, Marek J.; Ranade, Manisha K.
2001-06-01
A novel design of an optical tomographic scanner is described that can be used for 3D mapping of optical attenuation coefficient within translucent cylindrical objects with spatial resolution on the order of 100 microns. Our scanner design utilizes the cylindrical geometry of the imaged object to obtain the desired paths of the scanning light rays. A rotating mirror and a photodetector are placed at two opposite foci of the translucent cylinder that acts as a cylindrical lens. A He-Ne laser beam passes first through a focusing lens and then is reflected by the rotating mirror, so as to scan the interior of the cylinder with focused and parallel paraxial rays that are subsequently collected by the photodetector to produce the projection data, as the cylinder rotates in small angle increments between projections. Filtered backprojection is then used to reconstruct planar distributions of optical attenuation coefficient in the cylinder. Multiplanar scans are used to obtain a complete 3D tomographic reconstruction. Among other applications, the scanner can be used in radiation therapy dosimetry and quality assurance for mapping 3D radiation dose distributions in various types of tissue-equivalent gel phantoms that change their optical attenuation coefficients in proportion to the absorbed radiation dose.
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which involve complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
High-resolution reconstruction for terahertz imaging.
Xu, Li-Min; Fan, Wen-Hui; Liu, Jia
2014-11-20
We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets (POCS) approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken from our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. These results indicate the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
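Of the methods listed, iterative backprojection is the easiest to sketch: the HR estimate is repeatedly projected down to each LR frame, and the residual is backprojected. A minimal Python sketch under an assumed shift-blur-decimate imaging model (not the paper's exact operators):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_backprojection(lr_images, shifts, scale, n_iter=20, step=0.5):
    """Super-resolve a stack of registered low-resolution frames.

    lr_images: list of LR frames; shifts: per-frame (dy, dx) offsets in
    HR pixels; scale: integer upsampling factor.
    """
    hr = zoom(np.mean(lr_images, axis=0), scale, order=3)   # initial estimate
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_images, shifts):
            # forward model: shift, blur, decimate the current HR estimate
            sim = np.roll(hr, (-int(dy), -int(dx)), axis=(0, 1))
            sim = gaussian_filter(sim, sigma=scale / 2.0)
            sim = sim[::scale, ::scale]
            # backproject the residual into the HR grid
            err = zoom(lr - sim, scale, order=0)
            hr += step * np.roll(err, (int(dy), int(dx)), axis=(0, 1))
    return hr
```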
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Stayman, J; Otake, Y
Purpose: To address the challenges of image quality, radiation dose, and reconstruction speed in intraoperative cone-beam CT (CBCT) for neurosurgery by combining model-based image reconstruction (MBIR) with accelerated algorithmic and computational methods. Methods: Preclinical studies involved a mobile C-arm for CBCT imaging of two anthropomorphic head phantoms that included simulated imaging targets (ventricles, soft-tissue structures/bleeds) and neurosurgical procedures (deep brain stimulation (DBS) electrode insertion) for assessment of image quality. The penalized likelihood (PL) framework was used for MBIR, incorporating a statistical model with image regularization via an edge-preserving penalty. To accelerate PL reconstruction, the ordered-subset, separable quadratic surrogates (OS-SQS) algorithm was modified to incorporate Nesterov's method and implemented on a multi-GPU system. A fair comparison of image quality between PL and conventional filtered backprojection (FBP) was performed by selecting reconstruction parameters that provided matched low-contrast spatial resolution. Results: CBCT images of the head phantoms demonstrated that PL reconstruction improved image quality (∼28% higher CNR) even at half the radiation dose (3.3 mGy) compared to FBP. A combination of Nesterov's method and fast projectors yielded a PL reconstruction run-time of 251 sec (cf. 5729 sec for OS-SQS and 13 sec for FBP). Insertion of a DBS electrode resulted in severe metal artifact streaks in FBP reconstructions, whereas PL was intrinsically robust against metal artifact. The combination of noise and artifact was reduced from 32.2 HU in FBP to 9.5 HU in PL, thereby providing better assessment of device placement and potential complications. Conclusion: The methods can be applied to intraoperative CBCT for guidance and verification of neurosurgical procedures (DBS electrode insertion, biopsy, tumor resection) and detection of complications (intracranial hemorrhage). Significant improvement in image quality, dose reduction, and a reconstruction time of ∼4 min will enable practical deployment of low-dose C-arm CBCT within the operating room. AAPM Research Seed Funding (2013-2014); NIH Fellowship F32EB017571; Siemens Healthcare (XP Division)
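The acceleration described (ordered subsets with Nesterov momentum on top of SQS surrogates) can be summarized in a few lines. A hedged sketch, with `grad_subset` and the SQS curvature `denom` assumed to be supplied by the projector and statistical model (not the authors' implementation):

```python
import numpy as np

def os_sqs_nesterov(x0, subsets, grad_subset, denom, n_epochs=10):
    """Nesterov-accelerated OS-SQS update for penalized likelihood.

    grad_subset(x, s): gradient of the PL objective restricted to subset s
    denom: precomputed SQS curvature array (same shape as the image x)
    """
    x = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(n_epochs):
        for s in subsets:
            g = len(subsets) * grad_subset(z, s)        # subset-scaled gradient
            x_new = np.maximum(z - g / denom, 0.0)      # SQS step + nonnegativity
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
    return x
```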
Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng
2003-07-01
Cone beam computed tomography (CBCT) has been investigated in the past two decades due to its potential advantages over fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of the x-ray photons generated by the x-ray tube compared to fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most studies of CBCT in the past have focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of a preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table and a computer system. The prototype was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the set of circle projections was acquired by rotating the x-ray tube and the FPD together on the gantry, and the set of arc projections was obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on the circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector. Then the uniformity, spatial resolution and low contrast resolution were assessed using different phantoms, mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm in terms of reconstruction accuracy. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice. Also, the reconstruction accuracy of cone beam reconstruction is significantly improved with the circle-plus-two-arcs orbit and its related exact CB-FBP algorithm, as compared to using a single-circle cone beam orbit and Feldkamp's algorithm.
Chung, Heeteak; Li, Jonathan; Samant, Sanjiv
2011-04-08
Two-dimensional array dosimeters are commonly used to perform pretreatment quality assurance procedures, which makes them highly desirable for measuring transit fluences for in vivo dose reconstruction. The purpose of this study was to determine whether in vivo dose reconstruction via transit dosimetry using a 2D array dosimeter is possible. To test the accuracy of measuring transit dose distributions using a 2D array dosimeter, we evaluated it against measurements made using ionization chamber and radiochromic film (RCF) profiles for various air gap distances (distance from the exit side of the solid water slabs to the detector: 0 cm, 30 cm, 40 cm, 50 cm, and 60 cm) and solid water slab thicknesses (10 cm and 20 cm). The backprojection dose reconstruction algorithm was described and evaluated. The agreement between the ionization chamber and RCF profiles for the transit dose distribution measurements ranged from -0.2% to 4.0% (average 1.79%). Using the backprojection dose reconstruction algorithm, we found that, of the six conformal fields, four had a 100% gamma index passing rate (3%/3 mm gamma index criteria), and two had gamma index passing rates of 99.4% and 99.6%. Of the five IMRT fields, three had a 100% gamma index passing rate, and two had gamma index passing rates of 99.6% and 98.8%. It was found that a 2D array dosimeter could be used for backprojection dose reconstruction for in vivo dosimetry.
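For reference, the 3%/3 mm gamma criterion used above combines a dose-difference test with a distance-to-agreement test. A brute-force 2D sketch (global normalization assumed; not the authors' implementation):

```python
import numpy as np

def gamma_pass_rate(ref, meas, pix_mm, dd=0.03, dta_mm=3.0):
    """Percent of points passing a global 2D gamma test (default 3%/3 mm)."""
    search = int(np.ceil(dta_mm / pix_mm))
    dose_norm = dd * ref.max()                 # global dose criterion
    ny, nx = ref.shape
    gamma = np.full(ref.shape, np.inf)
    for j in range(ny):
        for i in range(nx):
            for dj in range(-search, search + 1):
                for di in range(-search, search + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dist2 = (dj * dj + di * di) * pix_mm**2 / dta_mm**2
                        dose2 = (meas[jj, ii] - ref[j, i])**2 / dose_norm**2
                        gamma[j, i] = min(gamma[j, i], np.sqrt(dist2 + dose2))
    return 100.0 * np.mean(gamma <= 1.0)
```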
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-11-01
To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thicknesses of 0.5, 1.0, or 1.5 mm, filled with contrast material of 275, 396, or 542 HU, to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone and Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements using a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Using the Bone kernel, the standard deviation of the measurement exceeded 30 HU under most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Impulse radar imaging system for concealed object detection
NASA Astrophysics Data System (ADS)
Podd, F. J. W.; David, M.; Iqbal, G.; Hussain, F.; Morris, D.; Osakue, E.; Yeow, Y.; Zahir, S.; Armitage, D. W.; Peyton, A. J.
2013-10-01
Electromagnetic systems for imaging concealed objects at checkpoints typically employ radiation at millimetre and terahertz frequencies. These systems have been shown to be effective and to provide sufficiently high resolution images. However, current electromagnetic systems have limitations, particularly in accurately differentiating between threat and innocuous objects based on indicative parameters such as shape, surface emissivity, or reflectivity. In addition, water has a high absorption coefficient at millimetre and terahertz frequencies, which makes it more difficult for these frequencies to image through thick damp clothing. This paper considers the potential of using ultra wideband (UWB) radar in the low gigahertz range. The application of this frequency band to security screening appears to be a relatively new field. The business case for implementing a UWB system has been made financially viable by the recent availability of low-cost integrated circuits operating at these frequencies. Although designed for the communication sector, these devices can perform the required UWB radar measurements as well. This paper reports the implementation of a 2 to 5 GHz bandwidth linear array scanner. The paper describes the design and fabrication of transmitter and receiver antenna arrays whose individual elements are a type of antipodal Vivaldi antenna. The antenna's frequency and angular response were simulated in CST Microwave Studio and compared with laboratory measurements. The data pre-processing methods of background subtraction and deconvolution are implemented to improve the image quality. The background subtraction method uses a reference dataset to remove antenna crosstalk and room reflections from the dataset. The deconvolution method uses a Wiener filter to "sharpen" the returned echoes, which improves the resolution of the reconstructed image. The filter uses an impulse response reference dataset and a signal-to-noise parameter to determine how the frequencies contained in the echo dataset are normalised. The chosen image reconstruction algorithm is based on the back-projection method. The algorithm was implemented in MATLAB and uses a pre-calculated sensitivity matrix to increase the computation speed. The results include both 2D and 3D image datasets. The 3D datasets were obtained by scanning the dual sixteen-element linear antenna array over the test object. The system has been tested on both human and mannequin test objects. The front surface of an object placed on the human/mannequin torso is clearly visible, but its presence is also revealed by a tell-tale imaging characteristic. This characteristic is caused by a reduction in the wave velocity as the electromagnetic radiation passes through the object, and manifests as an indentation in the reconstructed image that is readily identifiable. The prototype system has been shown to easily detect a 12 mm × 30 mm × 70 mm plastic object concealed under clothing.
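The Wiener deconvolution step admits a compact frequency-domain sketch (the abstract's pipeline is in MATLAB; this Python version is only an illustration, with `snr` standing in for the paper's signal-to-noise parameter):

```python
import numpy as np

def wiener_deconvolve(echo, impulse, snr=100.0):
    """Sharpen a returned echo given a reference impulse response.

    Frequencies where the impulse response is strong are divided out;
    weak frequencies are suppressed according to the 1/snr floor.
    """
    n = len(echo)
    H = np.fft.fft(impulse, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(echo) * W))
```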
Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.
Shen, Shijian; Nie, Xin; Zhang, Xinggan
2018-02-03
Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides a sliding spotlight mode for the first time. Sliding spotlight mode is a novel mode that realizes imaging with both high resolution and wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated by simulations and measured data.
Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang
2016-12-14
An efficient wide-angle inverse synthetic aperture imaging method that accounts for spherical wavefront effects and is suitable for the terahertz band is presented. First, the echo signal model under the spherical wave assumption is established, and a detailed wavefront curvature compensation method accelerated by the 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with those obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and efficiency of the presented method. This imaging method can be used directly in the field of nondestructive detection and can also provide a solution for the calculation of the far-field radar cross sections (RCSs) of targets in the terahertz regime.
Lu, Wenting; Yan, Hao; Gu, Xuejun; Tian, Zhen; Luo, Ouyang; Yang, Liu; Zhou, Linghong; Cervino, Laura; Wang, Jing; Jiang, Steve; Jia, Xun
2014-10-21
With the aim of maximally reducing imaging dose while meeting the requirements of adaptive radiation therapy (ART), we propose in this paper a new cone beam CT (CBCT) acquisition and reconstruction method that delivers images with a low noise level inside a region of interest (ROI) and a relatively high noise level outside the ROI. The acquired projection images include two groups: densely sampled projections at low exposure with a large field of view (FOV) and sparsely sampled projections at high exposure with a small FOV corresponding to the ROI. A new algorithm combining the conventional filtered back-projection algorithm and a tight-frame iterative reconstruction algorithm is also designed to reconstruct the CBCT from these projection data. We have validated our method on a simulated head-and-neck (HN) patient case, a semi-real experiment conducted on a HN cancer patient under a full-fan scan mode, and a Catphan phantom under a half-fan scan mode. Relative root-mean-square errors (RRMSEs) of less than 3% for the entire image and ~1% within the ROI compared to the ground truth have been observed. These numbers demonstrate the ability of our proposed method to reconstruct high-quality images inside the ROI. As for the part outside the ROI, although the images are relatively noisy, they can still provide sufficient information for radiation dose calculations in ART. Dose distributions calculated on our CBCT image and on a standard CBCT image are in agreement, with a mean relative difference of 0.082% inside the ROI and 0.038% outside the ROI. Compared with the standard clinical CBCT scheme, an imaging dose reduction of approximately 3-6 times inside the ROI was achieved, along with a reduction of approximately 8 times outside the ROI. Regarding computational efficiency, it takes 1-3 min to reconstruct a CBCT image depending on the number of projections used. These results indicate that the proposed method has the potential for application in ART.
Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung
2014-05-01
The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality of a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. The % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45%, and its % contrast was significantly improved compared to the conventional OSEM without PSF modeling reconstruction algorithm. The noise level, however, was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or quantitative imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; these data are used to simulate different levels of dose reduction using a previously published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional reconstruction methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating the performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
Rupture evolution of the 2006 Java tsunami earthquake and the possible role of splay faults
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; Bassett, Dan; Jiang, Junle; Shearer, Peter M.; Ji, Chen
2017-11-01
The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted ∼65 s with a rupture speed of ∼1.2 km/s, while the second stage lasted from ∼65 to 150 s with a rupture speed of ∼2.7 km/s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also colocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.
NASA Astrophysics Data System (ADS)
Yang, Qingsong; Cong, Wenxiang; Wang, Ge
2016-10-01
X-ray phase contrast imaging is an important modality due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube with a large focal spot at a high flux rate. However, a main obstacle to this paradigm shift is the fabrication of large-area gratings with a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation, which is a new type of interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing, and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in the fact that implementing the system matrix requires a differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as input and outputs the back-projected interior data. Prior information of the kind normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.
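The key construction, a system matrix whose columns are Hilbert transforms of spline bases, can be sketched on a single line through the ROI. A hedged illustration using cubic B-splines and scipy's analytic-signal Hilbert transform (`g`, the back-projected differential data, is a hypothetical input; this is not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spline_matrix(n, n_bases):
    """Columns are Hilbert transforms of cubic B-spline bases on a line."""
    centers = np.linspace(0, n - 1, n_bases)
    width = centers[1] - centers[0]
    x = np.arange(n, dtype=float)
    A = np.zeros((n, n_bases))
    for k, c in enumerate(centers):
        t = np.abs(x - c) / width
        spline = np.where(t < 1, 2/3 - t**2 + t**3 / 2,
                 np.where(t < 2, (2 - t)**3 / 6, 0.0))   # cubic B-spline
        A[:, k] = np.imag(hilbert(spline))               # Hilbert transform
    return A

# With g the back-projected differential data on the line, solve a
# regularized least-squares problem for the spline coefficients c:
#   c = argmin ||A c - g||^2 + lam ||c||^2,
# then the ROI profile is the B-spline expansion with coefficients c.
```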
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM, and their images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield units and contrast-to-noise ratios of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity, each graded on a 5-point scale) were compared among the groups. Results Group A2, scanned with the automatically chosen 80 kVp and 100 kVp tube voltages, ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with the combined use of ATVS and ATCM and image reconstruction with the SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small HCCs with reduced radiation dose. PMID:25995682
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
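The CMA-ES search over Gaussian-basis coefficients with a maxi-min objective can be sketched with the pycma package. Everything below, other than the ask/tell pattern, is an assumption for illustration: `gaussian_basis` is a hypothetical FFM parameterization and `dprime` stands in for the paper's model-based detectability predictor:

```python
import numpy as np
import cma  # pycma: pip install cma

def gaussian_basis(coeffs, n_views=360, n_det=256, grid=(6, 4)):
    """Fluence over (view, detector) as a sum of 2D Gaussian bumps."""
    gv, gd = grid
    v = np.linspace(0, 1, n_views)[:, None]
    d = np.linspace(0, 1, n_det)[None, :]
    fluence = np.zeros((n_views, n_det))
    for k, c in enumerate(coeffs):
        cv = (k % gv) / (gv - 1)
        cd = ((k // gv) % gd) / (gd - 1)
        fluence += c * np.exp(-((v - cv) ** 2 + (d - cd) ** 2) / 0.02)
    return np.clip(fluence, 0.0, None)

def maximin_cost(coeffs, dprime):
    """Negative worst-case d': minimizing this maximizes the minimum
    detectability across sample locations (the maxi-min objective)."""
    return -np.min(dprime(gaussian_basis(coeffs)))

# dprime(fluence) -> per-location d' comes from the predicted resolution
# and noise of the PL reconstruction; with it in hand, the search is:
# es = cma.CMAEvolutionStrategy(np.ones(24), 0.5)
# while not es.stop():
#     sols = es.ask()
#     es.tell(sols, [maximin_cost(c, dprime) for c in sols])
```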
NASA Astrophysics Data System (ADS)
Schafer, Sebastian; Wang, Adam; Otake, Yoshito; Stayman, J. W.; Zbijewski, Wojciech; Kleinszig, Gerhard; Xia, Xuewei; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2013-03-01
Intraoperative imaging could improve patient safety and quality assurance (QA) via the detection of subtle complications that might otherwise only be found hours after surgery. Such capability could therefore reduce morbidity and the need for additional intervention. Among the severe adverse events that could be more quickly detected by high-quality intraoperative imaging is acute intracranial hemorrhage (ICH), conventionally assessed using post-operative CT. A mobile C-arm capable of high-quality cone-beam CT (CBCT) in combination with advanced image reconstruction techniques is reported as a means of detecting ICH in the operating room. The system employs an isocentric C-arm with a flat-panel detector in dual-gain mode, correction of x-ray scatter and beam-hardening, and a penalized likelihood (PL) iterative reconstruction method. Performance in ICH detection was investigated using a quantitative phantom focusing on (non-contrast-enhanced) blood-brain contrast, an anthropomorphic head phantom, and a porcine model with injection of a fresh blood bolus. The visibility of ICH was characterized in terms of contrast-to-noise ratio (CNR) and qualitative evaluation of images by a neurosurgeon. Across a range of ICH size and contrast as well as radiation dose from the CBCT scan, the CNR was found to increase from ~2.2-3.7 for conventional filtered backprojection (FBP) to ~3.9-5.4 for PL at equivalent spatial resolution. The porcine model demonstrated superior ICH detectability for PL. The results support the role of high-quality mobile C-arm CBCT employing advanced reconstruction algorithms for detecting subtle complications in the operating room at lower radiation dose and lower cost than intraoperative CT scanners and/or fixed-room C-arms. Such capability could present a potentially valuable aid to patient safety and QA.
Development of a prototype chest digital tomosynthesis R/F system
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
Digital tomosynthesis has the advantage of low radiation dose compared to conventional computed tomography (CT) by utilizing a small number of projections (~80) acquired over a limited angular range. It can produce 3D volumetric data, although the data may contain some artifacts due to incomplete sampling. Based upon these attractive merits, we developed a prototype digital tomosynthesis R/F system especially for applications in chest imaging. The prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with a high-power R/F pulse generator, a flat-panel detector, an R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users can select between analytic and iterative methods. Reconstructed images of the Catphan700 and LUNGMAN phantoms clearly and rapidly described the internal structures of the phantoms using graphics processing unit (GPU) programming. Contrast-to-noise ratio (CNR) values of the CTP682 module were higher in images using the simultaneous algebraic reconstruction technique (SART) than in those using filtered backprojection (FBP) for all materials, by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low density polyethylene (LDPE), Delrin (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing a 3D volume were 2.92 sec and 86.29 sec on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from the system (5.68 mGy) demonstrates a significantly lower radiation dose compared to a conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Seungwan; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
Digital tomosynthesis offers the advantage of low radiation doses compared to conventional computed tomography (CT) by utilizing small numbers of projections (~80) acquired over a limited angular range. It produces 3D volumetric data, although there are artifacts due to incomplete sampling. Based upon these characteristics, we developed a prototype digital tomosynthesis R/F system for applications in chest imaging. Our prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with a high-power R/F pulse generator, a flat-panel detector, an R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users select between analytic and iterative reconstruction methods. Our reconstructed images of the Catphan700 and LUNGMAN phantoms clearly and rapidly described the internal structures of the phantoms using graphics processing unit (GPU) programming. Contrast-to-noise ratio (CNR) values of the CTP682 module of Catphan700 were higher in images using a simultaneous algebraic reconstruction technique (SART) than in those using filtered back-projection (FBP) for all materials, by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low density polyethylene (LDPE), Delrin® (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing a 3D volume were 2.92 s and 86.29 s on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from our system (5.68 mGy) was lower than that of a conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
Joint optimization of fluence field modulation and regularization in task-driven computed tomography
NASA Astrophysics Data System (ADS)
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-03-01
Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear, and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum-likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
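The FWHM measurement itself is simple once a profile through the reconstructed point source is extracted. A minimal sketch (background subtraction and sub-pixel interpolation of the half-maximum crossings assumed):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """FWHM of a 1D point-source profile, in the units of dx.

    Linearly interpolates the left and right half-maximum crossings;
    the background should be subtracted (or estimated) beforehand.
    """
    p = profile - profile.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    l, r = above[0], above[-1]
    xl = l - (p[l] - half) / (p[l] - p[l - 1]) if l > 0 else float(l)
    xr = r + (p[r] - half) / (p[r] - p[r + 1]) if r < len(p) - 1 else float(r)
    return (xr - xl) * dx
```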
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
A compressed-sensing (CS) technique has been rapidly applied in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among its many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Due to the iterative nature of the time-consuming process of solving the cost function, we took advantage of parallel computing on graphics processing units (GPU) with compute unified device architecture (CUDA) programming to accelerate our algorithm. To compare algorithmic performance against our proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that CS produced better contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) by factors of 3.91 and 1.93 compared with FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules with different diameters, also showed better visual appearance in the CS images. Our proposed GPU-accelerated CS reconstruction scheme could produce volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix using 41 projection views was 216.74 seconds with our GPU implementation of the proposed CS algorithm, which matches the clinically feasible time (~3 min). Consequently, our results demonstrated that the proposed CS method shows potential for additional dose reduction in digital tomosynthesis with reasonable image quality and fast reconstruction time.
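The TV-regularized update at the heart of such a scheme alternates a data-consistency step with TV gradient descent (ASD-POCS style). A hedged numpy sketch of the TV gradient; `sart_update` is assumed to be provided by the projector, and this is not the authors' CUDA code:

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of smoothed isotropic total variation of a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)

# ASD-POCS-style outer loop (sketch):
# for k in range(n_iter):
#     img = sart_update(img, projections, angles)  # data consistency (assumed)
#     for _ in range(n_tv_steps):
#         img -= alpha * tv_gradient(img)          # TV minimization
#     img = np.clip(img, 0.0, None)                # nonnegativity
```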
Cumulative phase delay imaging - A new contrast enhanced ultrasound modality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demi, Libertario, E-mail: l.demi@tue.nl; Sloun, Ruud J. G. van; Mischi, Massimo
Recently, a new acoustic marker for ultrasound contrast agents (UCAs) has been introduced. A cumulative phase delay (CPD) between the second harmonic and fundamental pressure wave field components is in fact observable for ultrasound propagating through UCAs. This phenomenon is absent in the case of tissue nonlinearity and is dependent on insonating pressure and frequency, UCA concentration, and propagation path length through UCAs. In this paper, ultrasound images based on this marker are presented. The ULA-OP research platform, in combination with a LA332 linear array probe (Esaote, Firenze, Italy), was used to image a gelatin phantom containing a PVC plate (used as a reflector) and a cylindrical cavity measuring 7 mm in diameter (placed between the observation point and the PVC plate). The cavity contained a 240 µL/L SonoVue® UCA concentration. Two insonating frequencies (3 MHz and 2.5 MHz) were used to scan the gelatin phantom. A mechanical index MI = 0.07, measured in water at the cavity location with a HGL-0400 hydrophone (Onda, Sunnyvale, CA), was utilized. Processing the ultrasound signals backscattered from the plate, ultrasound images were generated in a tomographic fashion using the filtered back-projection method. As already observed in previous studies, significantly higher CPD values are measured when imaging at a frequency of 2.5 MHz, as compared to imaging at 3 MHz. In conclusion, these results confirm the applicability of the discussed CPD as a marker for contrast imaging. Comparison with standard contrast-enhanced ultrasound imaging modalities will be the focus of future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chighvinadze, T; Pistorius, S; CancerCare Manitoba, Winnipeg, MB
2014-08-15
Purpose: To investigate the dependence of reconstructed image quality on the number of projections in multi-projection Compton scatter tomography (MPCST). The conventional relationship between the number of projections used for reconstruction and the reconstructed image quality in CT does not necessarily apply to MPCST, which can produce images from a single projection if the detectors have sufficiently high energy and spatial resolution. Methods: The electron density image was obtained using filtered backprojection of the scatter signal over circular arcs formed using the Compton equation. The behavior of the reconstructed image quality as a function of the number of projections was evaluated through analytical simulations and characterized by CNR and MTF. Results: Increasing the number of projections improves the contrast, with this dependence being a function of fluence. The number of projections required to approach the asymptotic maximum contrast decreases as the fluence increases. Increasing the number of projections increases the CNR but not the spatial resolution. Conclusions: For MPCST using a 500 eV energy resolution and a 2 × 2 mm² detector, adequate image quality can be obtained with a small number of projections provided the incident fluence is high enough. This is conceptually different from conventional CT, where a minimum number of projections is required to obtain adequate image quality. As the number of projections increases, even at the lowest dose value, the CNR increases even though the number of photons per projection decreases. The spatial resolution of the image is improved by increasing the sampling within a projection rather than by increasing the number of projections.
Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties
NASA Astrophysics Data System (ADS)
Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms, based on spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly a factor of two compared with weighted least squares and filtered backprojection reconstructions of the CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with the spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, based on a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization, diminishing enthusiasm for future application of the mixture penalty.
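One plausible rendering of the described objective in LaTeX, with a Gaussian-mixture form standing in for the cluster penalty (the symbols π_c, m_c, σ_c for per-cluster weights, means, and widths are assumptions, not the paper's notation):

```latex
\hat{\mu} \;=\; \arg\max_{\mu \ge 0}\;
  L(y;\mu)
  \;-\; \beta_{s}\sum_{j}\sum_{k \in \mathcal{N}_j} w_{jk}\,(\mu_j-\mu_k)^2
  \;+\; \beta_{m}\sum_{j}\log\!\sum_{c=1}^{4}\pi_c\,
        \mathcal{N}\!\left(\mu_j;\,m_c,\sigma_c^{2}\right)
```

Here L(y; μ) is the Poisson log-likelihood data fit, the middle term is the quadratic spatial penalty over neighborhoods N_j, and the last term rewards attenuation values near the four cluster modes (air + background, lung, soft tissue, bone).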
NASA Astrophysics Data System (ADS)
Wong, Wai-Hoi; Li, Hongdi; Zhang, Yuxuan; Ramirez, Rocio; An, Shaohui; Wang, Chao; Liu, Shitao; Dong, Yun; Baghaei, Hossain
2015-10-01
We developed a high-resolution Photomultiplier-Quadrant-Sharing (PQS) PET system for human imaging. This system is made up of 24 detector panels. Each panel (bank) consists of 3 × 7 detector blocks, and each block has 16 × 16 LYSO crystals of 2.35 × 2.35 × 15.2 mm³. We used a novel detector-grinding scheme that is compatible with the PQS detector-pixel-decoding requirements to make a gapless cylindrical detector ring for maximizing detection efficiency while delivering ultrahigh spatial resolution for a whole-body PET camera with a ring diameter of 87 cm and an axial field of view of 27.6 cm. This grinding scheme enables two adjacent gapless panels to share one row of PMTs to extend the PQS configuration beyond one panel and thus maximize the economic benefit (in PMT usage) of the PQS design. The entire detector ring has 129,024 crystals, all of which are clearly decoded using only 576 PMTs (38-mm diameter). Thus, each PMT on average decodes 224 crystals to achieve a high crystal-pitch resolution of 2.44 mm × 2.44 mm. The detector blocks were mass-produced with our slab-sandwich-slice technique using a set of optimized mirror-film patterns (between crystals) to maximize light output and achieve high spatial and timing resolution. This detection system with time-of-flight capability was placed in a human PET/CT gantry. The reconstructed image resolution of the system was about 2.87 mm using 2D filtered back-projection. The time-of-flight resolution was 473 ps. The preliminary images of phantoms and clinical studies presented in this work demonstrate the capability of this new PET/CT system to produce high-quality images.
Full field image reconstruction is suitable for high-pitch dual-source computed tomography.
Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas
2012-11-01
The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measure analysis of variance. In phantom and patient data, full FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full FOV image quality was not compromised at pitch values of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited and full FOV images (P = 0.0097). With this new image reconstruction technique, full FOV image reconstruction can be used up to a pitch of 2.0.
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the channelized Hotelling observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on the application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU), and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to the different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
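For context, the reference CHO computation is compact: channelize the images, form the Hotelling template from the channel statistics, and score. A hedged sketch with Laguerre-Gauss channels (channel width `a` and the normalization are assumptions, not the study's exact settings):

```python
import numpy as np

def laguerre_gauss_channels(n, n_ch, a):
    """Rotationally symmetric Laguerre-Gauss channels on an n x n grid."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = 2.0 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for j in range(n_ch):
        Lj = np.polynomial.laguerre.Laguerre.basis(j)(r2)
        u = np.exp(-r2 / 2.0) * Lj
        chans.append(u.ravel() / np.linalg.norm(u))
    return np.stack(chans, axis=1)            # (n*n, n_ch)

def cho_dprime(sig_imgs, bkg_imgs, U):
    """CHO detectability from signal-present/absent image samples."""
    vs = sig_imgs.reshape(len(sig_imgs), -1) @ U    # channel outputs
    vb = bkg_imgs.reshape(len(bkg_imgs), -1) @ U
    S = 0.5 * (np.cov(vs.T) + np.cov(vb.T))         # intra-class covariance
    w = np.linalg.solve(S, vs.mean(0) - vb.mean(0)) # Hotelling template
    ts, tb = vs @ w, vb @ w
    return (ts.mean() - tb.mean()) / np.sqrt(0.5 * (ts.var() + tb.var()))
```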
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and a smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point of care, such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
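The PWLS principle with variance-aware weights reduces to a weighted quadratic data term plus a roughness penalty. A toy 1D gradient-descent sketch (the weights here are generic inverse variances after correction; the dense `A` is for illustration only, not a CBCT-scale projector):

```python
import numpy as np

def pwls_gd(A, y, var, beta, n_iter=500, step=1e-4):
    """Minimize (Ax - y)' W (Ax - y) + beta * sum_j (x_{j+1} - x_j)^2.

    var: per-ray variance estimates *after* artifact correction, so
    W = 1/var downweights rays whose noise the corrections amplified.
    """
    W = 1.0 / var
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_fid = A.T @ (W * (A @ x - y))      # weighted data-fit gradient
        d = np.diff(x)
        grad_reg = np.zeros_like(x)
        grad_reg[:-1] -= 2.0 * d                # quadratic roughness penalty
        grad_reg[1:] += 2.0 * d
        x = np.maximum(x - step * (grad_fid + beta * grad_reg), 0.0)
    return x
```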
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often based either on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted in which the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d' with an 8-48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
The effect of truncation on very small cardiac SPECT camerasystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.
2006-08-01
Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has relative to increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in the truncated structures. These can result in a visual 3-dimensional distortion. As with conventional large FOV systems, attenuation effects have a much more significant effect on image accuracy.
Slow Earthquakes in the Alaska-Aleutian Subduction Zone Detected by Multiple Mini Seismic Arrays
NASA Astrophysics Data System (ADS)
LI, B.; Ghosh, A.; Thurber, C. H.; Lanza, F.
2017-12-01
The Alaska-Aleutian subduction zone is one of the most seismically and volcanically active plate boundaries on Earth. Compared to other subduction zones, its slow earthquakes, such as tectonic tremors (TTs) and low frequency earthquakes (LFEs), are relatively poorly studied due to limited data availability and difficult logistics. Analysis of two months of continuous data from a mini array deployed in 2012 shows abundant tremor and LFE activity under Unalaska Island that is heterogeneously distributed [Li & Ghosh, 2017]. To better study slow earthquakes and understand their physical characteristics in the study region, we deployed a hybrid array of arrays, consisting of three well-designed mini seismic arrays and five stand-alone stations, on Unalaska Island in 2014. They were operational for between one and two years. Using the beam back-projection method [Ghosh et al., 2009, 2012], we detect continuous tremor activity for over a year when all three arrays were running. The tremor sources are located south of the Unalaska and Akutan Islands, at the eastern and down-dip edge of the rupture zone of the 1957 Mw 8.6 earthquake, and they are clustered in several patches, with a gap between the two major clusters. Tremors show multiple migration patterns, with propagation in both the along-strike and dip directions and a wide range of velocities. We also identify tens of LFE families and use them as templates to search for repeating LFE events with the matched-filter method. Hundreds to thousands of LFEs are detected for each family, and their activity is spatiotemporally consistent with the tremor activity. The array techniques are revealing near-continuous tremor activity in this area with remarkable spatiotemporal detail. This helps us better constrain the physical properties of the transition zone, provides new insights into slow earthquake activity in this area, and explores its relation to local earthquakes and potential slow slip events.
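A hedged sketch of the matched-filter step described above: an LFE template is slid along continuous data and detections are declared where the normalized cross-correlation exceeds a threshold. The synthetic data, template, and the 8x-MAD threshold are all assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                                     # samples per second
data = rng.standard_normal(int(60 * fs))       # one minute of noise (stand-in)
template = rng.standard_normal(int(4 * fs))    # 4-s LFE template waveform
data[3000:3000 + template.size] += 0.5 * template  # bury one repeating event

nt = template.size
tpl = (template - template.mean()) / template.std()
cc = np.empty(data.size - nt)
for i in range(cc.size):                       # normalized cross-correlation
    w = data[i:i + nt]
    cc[i] = tpl @ (w - w.mean()) / (nt * w.std())

mad = np.median(np.abs(cc - np.median(cc)))    # robust noise level of cc trace
print("detections at samples:", np.flatnonzero(cc > 8 * mad))  # assumed 8*MAD
```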
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2015-08-01
Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
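A minimal sketch, under toy assumptions, of the penalized weighted least-squares (PWLS) formulation named above: minimize (y - Ax)' W (y - Ax) + beta ||Dx||^2 by gradient descent. The paper's artifact-correction-aware noise model is mimicked here simply by assigning each ray a variance (as if inflated by scatter and beam-hardening corrections) and weighting by its inverse; the system matrix is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((120, 64))                 # toy system matrix (rays x voxels)
x_true = rng.random(64)
var = 0.01 * (1 + 5 * rng.random(120))    # per-ray variance, e.g. inflated by
y = A @ x_true + rng.normal(0, np.sqrt(var))   # the artifact corrections

W = np.diag(1.0 / var)                    # statistical weights = 1 / variance
D = np.eye(64) - np.roll(np.eye(64), 1, axis=1)   # first-difference penalty
beta = 0.1

x = np.zeros(64)
H = A.T @ W @ A + beta * D.T @ D
step = 1.0 / np.linalg.norm(H, 2)         # step = 1 / Lipschitz constant
for _ in range(500):                      # gradient descent on PWLS objective
    x -= step * (A.T @ W @ (A @ x - y) + beta * (D.T @ D) @ x)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))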
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and blood pools of ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, for the correlation analysis, OSEM reconstruction presented relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF were recommended for cardiac 11C-acetate kinetic analysis.
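A hedged sketch of the 1-tissue-compartment fit used in the kinetic analysis above: the tissue TAC is the impulse response K1*exp(-k2*t) convolved with the arterial input function. The input function and noise below are synthetic placeholders, not the study's measured curves.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20, 120)                     # minutes
dt = t[1] - t[0]
Ca = t * np.exp(-t)                             # toy arterial input function

def one_tissue(t, K1, k2):
    irf = K1 * np.exp(-k2 * t)                  # impulse response K1*exp(-k2 t)
    return np.convolve(irf, Ca)[: t.size] * dt  # tissue TAC = IRF (*) input

rng = np.random.default_rng(2)
tac = one_tissue(t, 0.8, 0.15) + rng.normal(0, 0.002, t.size)  # noisy TAC

(K1, k2), _ = curve_fit(one_tissue, t, tac, p0=(0.5, 0.1))
print(f"K1 = {K1:.3f} /min, k2 = {k2:.3f} /min")
```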
Discrete tomography in an in vivo small animal bone study.
Van de Casteele, Elke; Perilli, Egon; Van Aarle, Wim; Reynolds, Karen J; Sijbers, Jan
2018-01-01
This study aimed at assessing the feasibility of a discrete algebraic reconstruction technique (DART) to be used in in vivo small animal bone studies. The advantage of discrete tomography is the possibility to reduce the amount of X-ray projection images, which makes scans faster and implies also a significant reduction of radiation dose, without compromising the reconstruction results. Bone studies are ideal for being performed with discrete tomography, due to the relatively small number of attenuation coefficients contained in the image [namely three: background (air), soft tissue and bone]. In this paper, a validation is made by comparing trabecular bone morphometric parameters calculated from images obtained by using DART and the commonly used standard filtered back-projection (FBP). Female rats were divided into an ovariectomized (OVX) and a sham-operated group. In vivo micro-CT scanning of the tibia was done at baseline and at 2, 4, 8 and 12 weeks after surgery. The cross-section images were reconstructed using first the full set of projection images and afterwards reducing them in number to a quarter and one-sixth (248, 62, 42 projection images, respectively). For both reconstruction methods, similar changes in morphometric parameters were observed over time: bone loss for OVX and bone growth for sham-operated rats, although for DART the actual values were systematically higher (bone volume fraction) or lower (structure model index) compared to FBP, depending on the morphometric parameter. The DART algorithm was, however, more robust when using fewer projection images, where the standard FBP reconstruction was more prone to noise, showing a significantly bigger deviation from the morphometric parameters obtained using all projection images. This study supports the use of DART as a potential alternative method to FBP in X-ray micro-CT animal studies, in particular, when the number of projections has to be drastically minimized, which directly reduces scanning time and dose.
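A much-simplified sketch of the DART idea under toy assumptions: alternate an algebraic update with segmentation onto the few known grey values (air, soft tissue, bone), then let only the boundary pixels keep evolving. A Landweber iteration stands in for the SIRT-like inner solver, and the random system matrix is a placeholder for real projection geometry.

```python
import numpy as np

def segment(x, levels=(0.0, 0.5, 1.0)):
    """Map every pixel to the nearest of the few known grey values."""
    lv = np.asarray(levels)
    return lv[np.abs(x[..., None] - lv).argmin(-1)]

def dart(A, y, shape, n_outer=8, n_inner=40):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(np.prod(shape))
    free = np.ones(x.size, dtype=bool)           # pixels still being updated
    for _ in range(n_outer):
        for _ in range(n_inner):                 # Landweber stand-in for SIRT
            x += np.where(free, step * (A.T @ (y - A @ x)), 0.0)
        s = segment(x.reshape(shape))
        edge = (s != np.roll(s, 1, 0)) | (s != np.roll(s, -1, 0)) | \
               (s != np.roll(s, 1, 1)) | (s != np.roll(s, -1, 1))
        free = edge.ravel()                      # only segment boundaries stay free
        x = np.where(free, x, s.ravel())         # fix interior pixels to labels
    return segment(x.reshape(shape))

rng = np.random.default_rng(3)
shape = (16, 16)
truth = np.zeros(shape); truth[3:13, 3:13] = 0.5; truth[6:10, 6:10] = 1.0
A = rng.random((120, truth.size))                # few projections: 120 << 256
y = A @ truth.ravel()
print("mislabeled pixels:", int((dart(A, y, shape) != truth).sum()))
```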
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg
Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goals of this small animal study were (1) the evaluation of an MRI template and the calculation of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [{sup 18}F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but the results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity. Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.
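A small helper of the kind implied by the evaluation above: registration accuracy summarized as Euclidean distances between corresponding landmark coordinates before and after spatial normalization. The landmark sets here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
template_lm = rng.random((12, 3)) * 20                 # landmark set in mm (toy)
registered_lm = template_lm + rng.normal(0, 0.4, (12, 3))  # after normalization

dist = np.linalg.norm(registered_lm - template_lm, axis=1)
print(f"mean {dist.mean():.2f} mm, max {dist.max():.2f} mm, "
      f"fraction < 1 mm: {(dist < 1).mean():.0%}")
```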
SU-D-BRF-04: Digital Tomosynthesis for Improved Daily Setup in Treatment of Liver Lesions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, H; Jones, B; Miften, M
Purpose: Daily localization of liver lesions with cone-beam CT (CBCT) is difficult due to poor image quality caused by scatter, respiratory motion, and the lack of radiographic contrast between the liver parenchyma and the lesion(s). Digital tomosynthesis (DTS) is investigated as a modality to improve liver visualization and lesion/parenchyma contrast for daily setup. Methods: An in-house tool was developed to generate DTS images using a point-by-point filtered back-projection method from on-board CBCT projection data. DTS image planes are generated in a user-defined orientation to visualize the anatomy at various depths. Reference DTS images are obtained from forward projection of the planning CT dataset at each projection angle. The CBCT DTS image set can then be registered to the reference DTS image set as a means for localization. Contour data from the planning CT's associated RT Structure file are forward projected similarly to the planning CT data. DTS images are created for each contoured structure, which can then be overlaid onto the DTS images for organ volume visualization. Results: High-resolution DTS images generated from CBCT projections show fine anatomical detail, including small blood vessels, within the patient. However, the reference DTS images generated from forward projection of the planning CT lack this level of detail due to the low resolution of the CT voxels as compared to the pixel size in the projection images; typically 1mm-by-1mm-by-3mm (lat, vrt, lng) for the planning CT vs. 0.4mm-by-0.4mm for CBCT projections. Overlaying the contours onto the DTS image allows for visualization of structures of interest. Conclusion: The ability to generate DTS images over a limited range of projection angles allows for a reduction in the amount of respiratory motion within each acquisition. DTS may provide improved visualization of structures and lesions as compared to CBCT for highly mobile tumors.
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters are not always available for light amateur cameras, owing to the cost and weight of precision GPS and IMU hardware. To automate data updating, the correspondence between object vector data and images may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part then utilized the line features in orientation modeling: the line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
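A hedged sketch of the back-projection step above: a 3D building-model vertex is mapped into image coordinates via the collinearity equations, given an exterior orientation (from GPS/IMU) and focal length. The omega-phi-kappa rotation convention and all numbers are illustrative placeholders.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """One common omega-phi-kappa rotation convention (assumed here)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(X, X0, angles, f):
    d = rotation(*angles).T @ (X - X0)         # object point in camera frame
    return -f * d[0] / d[2], -f * d[1] / d[2]  # image coordinates (x, y)

X0 = np.array([0.0, 0.0, 100.0])               # camera position from GPS (m)
corner = np.array([12.0, 8.0, 5.0])            # one building-model vertex (m)
x, y = collinearity(corner, X0, (0.01, -0.02, 0.05), f=0.05)  # f in meters
print(f"projected image point: ({x * 1000:.2f} mm, {y * 1000:.2f} mm)")
```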
NASA Astrophysics Data System (ADS)
Meng, L.; Zhang, A.; Yagi, Y.
2015-12-01
The 2015 Mw 7.8 Nepal-Gorkha earthquake, with casualties of over 9,000 people, is the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process is well imaged by teleseismic MUSIC back-projections (BP). Here, we perform independent back-projections of high-frequency recordings (0.5-2 Hz) from the Australian seismic network (AU), the North America network (NA), and the European seismic network (EU), located in complementary orientations. The results from all three arrays show a unilateral linear rupture path to the east of the hypocenter, but the propagation directions and the inferred rupture speeds differ significantly among arrays. To understand the spatial uncertainties of the BP analysis, we image four moderate-size (M5~6) aftershocks based on the timing correction derived from the alignment of the initial P wave of the mainshock. We find that the apparent source locations inferred from BP are systematically biased along the source-array orientation, which can be explained by the uncertainty of the 3D velocity structure deviating from the 1D reference model (e.g., IASP91). We introduce a slowness error term in travel time as a first-order calibration that successfully mitigates the source location discrepancies between arrays. The calibrated BP results of the three arrays are mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s along the down-dip edge of the locked Himalayan thrust zone over ~150 km, in agreement with a narrow slip distribution inferred from finite source inversions.
Li, Jonathan; Samant, Sanjiv
2011-01-01
Two‐dimensional array dosimeters are commonly used to perform pretreatment quality assurance procedures, which makes them highly desirable for measuring transit fluences for in vivo dose reconstruction. The purpose of this study was to determine whether in vivo dose reconstruction via transit dosimetry using a 2D array dosimeter is possible. To test the accuracy of measuring the transit dose distribution using a 2D array dosimeter, we evaluated it against measurements made using ionization chamber and radiochromic film (RCF) profiles for various air gap distances (distance from the exit side of the solid water slabs to the detector: 0 cm, 30 cm, 40 cm, 50 cm, and 60 cm) and solid water slab thicknesses (10 cm and 20 cm). The backprojection dose reconstruction algorithm was described and evaluated. The agreement between the ionization chamber and RCF profiles for the transit dose distribution measurements ranged from ‐0.2% to 4.0% (average 1.79%). Using the backprojection dose reconstruction algorithm, we found that, of the six conformal fields, four had a 100% gamma index passing rate (3%/3 mm gamma index criteria), and two had gamma index passing rates of 99.4% and 99.6%. Of the five IMRT fields, three had a 100% gamma index passing rate, and two had gamma index passing rates of 99.6% and 98.8%. It was found that a 2D array dosimeter could be used for backprojection dose reconstruction for in vivo dosimetry. PACS number: 87.55.N‐
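A compact 1D gamma-index sketch of the kind behind the passing rates quoted above, using the 3%/3 mm criteria with global normalization. The dose profiles and grid are toy stand-ins.

```python
import numpy as np

def gamma_1d(ref, meas, x, dd=0.03, dta=3.0):
    """Gamma value at each reference point against all measured points."""
    norm = ref.max()                         # global dose normalization
    g = np.empty(ref.size)
    for i, (xi, di) in enumerate(zip(x, ref)):
        term = ((x - xi) / dta)**2 + ((meas - di) / (dd * norm))**2
        g[i] = np.sqrt(term.min())           # minimum over all measured points
    return g

x = np.arange(0, 100, 1.0)                   # positions in mm
ref = np.exp(-((x - 50) / 20)**2)            # reference dose profile
meas = np.exp(-((x - 51) / 20)**2) * 1.01    # slightly shifted and scaled
g = gamma_1d(ref, meas, x)
print(f"gamma passing rate (gamma <= 1): {(g <= 1).mean():.1%}")
```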
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
An improved exact inversion formula for solenoidal fields in cone beam vector tomography
NASA Astrophysics Data System (ADS)
Katsevich, Alexander; Rothermel, Dimitri; Schuster, Thomas
2017-06-01
In this paper we present an improved inversion formula for the 3D cone beam transform of vector fields supported in the unit ball which is exact for solenoidal fields. It is well known that only the solenoidal part of a vector field can be determined from the longitudinal ray transform of a vector field in cone beam geometry. The inversion formula, as it was developed in Katsevich and Schuster (2013 An exact inversion formula for cone beam vector tomography Inverse Problems 29 065013), consists of two parts. The first part is of the filtered backprojection type, whereas the second part is a costly 4D integration and very inefficient. In this article we tackle this second term and obtain an improved formula, which is easy to implement and saves one order of integration. We also show that the first part contains all information about the curl of the field, whereas the second part has information about the boundary values. More precisely, the second part vanishes if the solenoidal part of the original field is tangential at the boundary. A number of numerical tests presented in the paper confirm the theoretical results and the exactness of the formula. Also, we obtain an inversion algorithm that works for general convex domains.
NASA Astrophysics Data System (ADS)
Hong, Daeki; Cho, Heemoon; Cho, Hyosung; Choi, Sungil; Je, Uikyu; Park, Yeonok; Park, Chulkyu; Lim, Hyunwoo; Park, Soyoung; Woo, Taeho
2015-11-01
In this work, we performed a feasibility study on three-dimensional (3D) image reconstruction in a truncated Archimedean-like spiral geometry with a long rectangular detector for application to highly accurate, cost-effective dental x-ray imaging. Here, an x-ray tube and a detector rotate together around the rotational axis several times while the detector moves horizontally in the detector coordinate system at a constant speed to cover the whole imaging volume during projection data acquisition. We established a table-top setup which mainly consists of an x-ray tube (60 kVp, 5 mA), a narrow CMOS-type detector (198-μm pixel resolution, 184 (W)×1176 (H) pixel dimension), and a rotational stage for sample mounting, and we performed a systematic experiment to demonstrate the viability of the proposed approach to volumetric dental imaging. For the image reconstruction, we employed a compressed-sensing (CS)-based algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate reconstruction. We successfully reconstructed 3D images of considerably high quality and investigated the image characteristics in terms of the image value profile, the contrast-to-noise ratio (CNR), and the spatial resolution.
A synchrotron radiation microtomography system for the analysis of trabecular bone samples.
Salomé, M; Peyrin, F; Cloetens, P; Odet, C; Laval-Jeantet, A M; Baruchel, J; Spanne, P
1999-10-01
X-ray computed microtomography is particularly well suited for studying trabecular bone architecture, which requires three-dimensional (3-D) images with high spatial resolution. For this purpose, we describe a three-dimensional computed microtomography (microCT) system using synchrotron radiation, developed at the ESRF. Since synchrotron radiation provides a monochromatic, high-photon-flux x-ray beam, it allows high-resolution, high signal-to-noise ratio imaging. The principle of the system is based on truly three-dimensional parallel tomographic acquisition. It uses a two-dimensional (2-D) CCD-based detector to record 2-D radiographs of the beam transmitted through the sample under different angles of view. The 3-D tomographic reconstruction, performed by an exact 3-D filtered backprojection algorithm, yields 3-D images with cubic voxels. The spatial resolution of the detector was experimentally measured. For the application to bone investigation, the voxel size was set to 6.65 μm, and the experimental spatial resolution was found to be 11 μm. The reconstructed linear attenuation coefficient was calibrated from hydroxyapatite phantoms. Image processing tools are being developed to extract structural parameters quantifying trabecular bone architecture from the 3-D microCT images. First results on human trabecular bone samples are presented.
Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H
2017-07-01
The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, the achievable dose reduction using a commercially available IR algorithm without loss of low-contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low-contrast disks (three different contrast levels and seven different diameters) was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low-contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low-contrast detectability (as compared to full-dose FBP images), whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
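A hedged sketch of channelized-Hotelling-observer detectability with Laguerre-Gauss channels, the figure of merit used above (and in several neighboring abstracts). The images are synthetic white-noise stand-ins; the channel count, Gaussian width, and signal amplitude are assumptions.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(n_ch, size, a):
    """Laguerre-Gauss channel templates on a size x size grid."""
    y, x = np.mgrid[:size, :size] - size // 2
    g = 2 * np.pi * (x**2 + y**2) / a**2
    return np.stack([(np.sqrt(2) / a * np.exp(-g / 2)
                      * Laguerre.basis(j)(g)).ravel() for j in range(n_ch)])

rng = np.random.default_rng(5)
size, n = 32, 500
yy, xx = np.mgrid[:size, :size] - size // 2
signal = 0.4 * np.exp(-(xx**2 + yy**2) / 8.0).ravel()   # low-contrast blob
absent = rng.normal(0, 1, (n, size * size))             # signal-absent images
present = rng.normal(0, 1, (n, size * size)) + signal   # signal-present images

U = lg_channels(5, size, a=14.0)
va, vp = absent @ U.T, present @ U.T                    # channelized data
S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
dmu = vp.mean(0) - va.mean(0)
dprime = np.sqrt(dmu @ np.linalg.solve(S, dmu))         # CHO detectability d'
print(f"d' = {dprime:.2f}")
```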
Jiang, Shanghai
2017-01-01
X-ray fluorescence computed tomography (XFCT) based on a sheet beam can save a huge amount of time in obtaining a whole set of projections at a synchrotron. However, synchrotron sources are clearly impractical for most biomedical research laboratories. In this paper, polychromatic x-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in GEANT4. Phantom A contains several GNP-loaded regions with the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized presentation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic x-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
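A minimal sketch of the MLEM reconstruction named above: the classic multiplicative update x <- x * A'(y / Ax) / A'1. The system matrix and data are random stand-ins for the sheet-beam XFCT forward model.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((200, 100))                 # toy forward model (rays x voxels)
x_true = rng.random(100)
y = rng.poisson(A @ x_true * 50) / 50.0    # Poisson-noisy projections

x = np.ones(100)                           # MLEM needs a positive start
sens = A.T @ np.ones(200)                  # sensitivity image A'1
for _ in range(100):
    x *= (A.T @ (y / np.clip(A @ x, 1e-12, None))) / sens
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```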
Uses of megavoltage digital tomosynthesis in radiotherapy
NASA Astrophysics Data System (ADS)
Sarkar, Vikren
With the advent of intensity modulated radiotherapy, radiation treatment plans are becoming more conformal to the tumor, with decreasing margins. It is therefore of prime importance that the patient be positioned correctly prior to treatment, and image guided treatment is necessary for intensity modulated radiotherapy plans to be implemented successfully. Current advanced imaging devices require costly hardware and software upgrades, and radiation imaging solutions, such as cone beam computed tomography, may introduce extra radiation dose to the patient in order to acquire better quality images. Thus, there is a need to extend the abilities and functions of existing imaging devices while reducing cost and radiation dose. Existing electronic portal imaging devices can be used to generate computed tomography-like tomograms from projection images acquired over a small angle using the technique of cone-beam digital tomosynthesis. Since it uses a fraction of the images required for computed tomography reconstruction, this technique correspondingly delivers only a fraction of the imaging dose to the patient. Furthermore, cone-beam digital tomosynthesis can be offered as a software-only solution as long as a portal imaging device is available. In this study, the feasibility of performing digital tomosynthesis using individually acquired megavoltage images from a charge-coupled-device-based electronic portal imaging device was investigated. Three digital tomosynthesis reconstruction algorithms, shift-and-add, filtered back-projection, and the simultaneous algebraic reconstruction technique, were compared with respect to final image quality and imaging dose. A software platform, DART, was created using a combination of the Matlab and C++ languages. The platform allows for the registration of a reference Cone Beam Digital Tomosynthesis (CBDT) image against a daily acquired set to determine how to shift the patient prior to treatment. Finally, the software was extended to investigate whether the digital tomosynthesis dataset could be used in an adaptive radiotherapy regimen through the use of the Pinnacle treatment planning software to recalculate the delivered dose. The feasibility study showed that the megavoltage CBDT visually agreed with corresponding megavoltage computed tomography images. The comparative study showed that the best compromise between image quality and imaging dose is obtained when 11 projection images, acquired over an imaging angle of 40°, are used with the filtered back-projection algorithm. DART was successfully used to register reference and daily image sets to within 1 mm in-plane and 2.5 mm out of plane. The DART platform was also effectively used to generate updated files that the Pinnacle treatment planning system used to calculate updated dose in a rigidly shifted patient. These doses were then used to calculate a cumulative dose distribution that could be used by a physician as a reference to decide when the treatment plan should be updated. In conclusion, this study showed that a software solution is possible to extend existing electronic portal imaging devices to function as cone-beam digital tomosynthesis devices and meet the daily requirements of image guided intensity modulated radiotherapy treatments. The DART platform also has the potential to be used as part of an adaptive radiotherapy solution.
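A minimal shift-and-add sketch of the digital tomosynthesis reconstruction compared above, under parallel-ray, small-angle assumptions: each projection is shifted in proportion to the reconstruction plane's depth and the tube angle, then averaged, which brings one plane into focus. The geometry and sizes are illustrative, not those of the portal imager used in the study.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_and_add(projections, angles_deg, depth_px):
    """Reconstruct the plane at `depth_px` pixels above the detector plane."""
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, a in zip(projections, angles_deg):
        dx = depth_px * np.tan(np.radians(a))    # in-plane shift for this view
        recon += nd_shift(proj, (0, dx), order=1, mode='nearest')
    return recon / len(projections)

# toy data: 11 views over +/-20 degrees, a bright blob at depth 15 px
angles = np.linspace(-20, 20, 11)
blob = np.zeros((64, 64)); blob[30:34, 30:34] = 1.0
projs = [nd_shift(blob, (0, -15 * np.tan(np.radians(a))), order=1)
         for a in angles]                        # simulated projections
plane = shift_and_add(projs, angles, depth_px=15)
print("peak found at:", np.unravel_index(plane.argmax(), plane.shape))
```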
NASA Astrophysics Data System (ADS)
Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.
2015-04-01
This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the performance of human observers in detecting regional motion defects in 4D gated MP SPECT images. The result supports the use of the observer model in the optimization and evaluation of 4D image reconstruction and compensation methods for improving the detection of motion abnormalities in 4D gated MP SPECT images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, J; Martin, T; Young, S
Purpose: CT neuro perfusion scans are among the highest-dose exams. Methods to reduce dose include decreasing the number of projections acquired per gantry rotation; however, conventional reconstruction of such scans leads to sampling artifacts. In this study we investigated a projection view-sharing reconstruction algorithm used in dynamic MRI – “K-space Weighted Image Contrast” (KWIC) – applied to simulated perfusion exams, and evaluated dose savings and impacts on perfusion metrics. Methods: A FORBILD head phantom containing simulated time-varying objects was developed, and a set of parallel-beam CT projection data was created. The simulated scans were 60 seconds long, with 1152 projections per turn and a rotation time of one second. No noise was simulated. 5 mm, 10 mm, and 50 mm objects were modeled in the brain. A baseline, “full dose” simulation used all projections, and reduced-dose cases were simulated by downsampling the number of projections per turn from 1152 to 576 (50% dose), 288 (25% dose), and 144 (12.5% dose). KWIC was further evaluated at 72 projections per rotation (6.25%). One image per second was reconstructed using filtered backprojection (FBP) and KWIC. KWIC reconstructions utilized view cores of 36, 72, 144, and 288 views and 16, 8, 4, and 2 subapertures, respectively. From the reconstructed images, time-to-peak (TTP), cerebral blood flow (CBF), and the FWHM of the perfusion curve were calculated and compared against reference values from the full-dose FBP data. Results: TTP, CBF, and FWHM were unaffected by dose reduction (to 12.5%) and reconstruction method; however, image quality was improved when using KWIC. Conclusion: This pilot study suggests that KWIC preserves image quality and perfusion metrics when under-sampling projections and that the unique contrast weighting of KWIC could provide substantial dose savings for perfusion CT scans. Evaluation of KWIC in clinical CT data will be performed in the near future. R01 EB014922, NCI Grant U01 CA181156 (Quantitative Imaging Network), and Tobacco Related Disease Research Project grant 22RT-0131.
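A small sketch of two of the perfusion metrics compared above, computed from a toy time-density curve: time-to-peak (TTP) and the FWHM of the perfusion curve. (CBF estimation additionally requires the arterial input function and a deconvolution step, omitted here; the curve below is a synthetic placeholder.)

```python
import numpy as np

t = np.arange(60.0)                              # one frame per second
curve = 40 * np.exp(-0.5 * ((t - 22) / 6.0)**2)  # toy enhancement curve (HU)

ttp = t[curve.argmax()]                          # time-to-peak
half = curve.max() / 2
above = np.flatnonzero(curve >= half)
fwhm = t[above[-1]] - t[above[0]]                # full width at half maximum
print(f"TTP = {ttp:.0f} s, FWHM = {fwhm:.0f} s")
```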
Effects of refractive index mismatch in optical CT imaging of polymer gel dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manjappa, Rakesh; Makki S, Sharath; Kanhirodan, Rajan, E-mail: rajan@physics.iisc.ernet.in
2015-02-15
Purpose: To propose an image reconstruction technique, algebraic reconstruction technique-refraction correction (ART-rc). The proposed method takes care of refractive index mismatches present in the gel dosimeter scanner at the boundary, and also corrects for interior ray refraction. Polymer gel dosimeters with high-dose regions have a higher refractive index and optical density compared to the background medium; these changes in refractive index at high dose result in interior ray bending. Methods: The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the interior multiple refraction at the inhomogeneities. Jacob's ray tracing algorithm has been modified to calculate the path lengths of the ray that traverses through the higher-dose regions. The algorithm computes the length of the ray in each pixel along its path, and this is used as the weight matrix. The algebraic reconstruction technique and pixel-based reconstruction algorithms are used for solving the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels. The experimental dosimetric results are also presented. Results: The results show that the proposed scheme ART-rc is able to reconstruct optical density inside the dosimeter better than the results obtained using filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement using ART-rc is evaluated using the gamma index. The refraction errors due to regions of different refractive indices are discussed. The effects of modeling interior refraction in the dose region are presented. Conclusions: The errors propagated due to multiple refraction effects have been modeled, and the improvements in reconstruction using the proposed model are presented. The refractive index of the dosimeter has a mismatch with the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities having different refractive indices and optical densities embedded in the dosimeter. This is achieved by tracking the path of the ray that traverses through the dosimeter. Extensive simulation studies have been carried out, and the results are found to match the experimental results.
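A compact sketch of the algebraic reconstruction technique (ART, i.e., Kaczmarz iteration) underlying ART-rc. In the paper, the refraction-corrected per-pixel path lengths would populate the rows of the weight matrix A; here A is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((150, 80))            # weight matrix: per-ray pixel path lengths
x_true = rng.random(80)              # true optical-density image (flattened)
y = A @ x_true                       # measured ray sums

x = np.zeros(80)
for sweep in range(20):
    for i in range(A.shape[0]):      # one Kaczmarz projection per ray
        a = A[i]
        x += (y[i] - a @ x) / (a @ a) * a
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```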
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hanming; Wang, Linyuan; Li, Lei
2016-06-15
Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery and has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulated and real CT-scanned datasets. This approach reduces streak metal artifacts effectively and avoids the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square error, normalized mean absolute distance, and universal quality index metrics of the images. Both the iFBP-TV and TV-FADM methods outperform their counterparts in all cases. Unlike conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and in preventing the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of the experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
Functional Validation and Comparison Framework for EIT Lung Imaging
Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy
2014-01-01
Introduction: Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure, and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. Methods: We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach, and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Results and Conclusions: Our results indicate that, while variation in the appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT. PMID:25110887
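A minimal sketch of the one-step linearized Gauss-Newton reconstruction that several of the compared variants build on: solve (J'WJ + lambda^2 R) dx = J'W dv for a conductivity update from difference-voltage data. The Jacobian, prior, and data below are random placeholders for the finite-element quantities.

```python
import numpy as np

rng = np.random.default_rng(8)
J = rng.normal(0, 1, (208, 576))     # Jacobian: measurements x elements (toy)
R = np.eye(576)                      # smoothing / Tikhonov prior
W = np.eye(208)                      # measurement noise weighting
dx_true = rng.normal(0, 1, 576)      # true conductivity change
dv = J @ dx_true + rng.normal(0, 0.05, 208)   # difference-voltage data

lam = 0.1                            # regularization hyperparameter
dx = np.linalg.solve(J.T @ W @ J + lam**2 * R, J.T @ W @ dv)
print("correlation with truth:", np.corrcoef(dx, dx_true)[0, 1])
```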
NASA Astrophysics Data System (ADS)
Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian
2018-01-01
We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Assink, Jelle D.; Smets, Pieter S. M.; Evers, Läslo G.
2018-01-01
In this study we analyze infrasound signals from three earthquakes in central Italy. The Mw 6.0 Amatrice, Mw 5.9 Visso, and Mw 6.5 Norcia earthquakes generated significant epicentral ground motions that couple to the atmosphere and produce infrasonic waves. Epicentral seismic and infrasonic signals are detected at I26DE; however, a third type of signal, which arrives after the seismic wave train and before the epicentral infrasound signal, is also detected. This peculiar signal propagates across the array at acoustic wave speeds, but the celerity associated with it is 3 times the speed of sound. Atmosphere-independent backprojections and full 3-D ray tracing using atmospheric conditions of the European Centre for Medium-Range Weather Forecasts are used to demonstrate that this apparently fast-arriving infrasound signal originates from ground motions more than 400 km away from the epicenter. The location of the secondary infrasound patch coincides with the closest bounce point to I26DE as depicted by ray tracing backprojections.
Ha, S; Matej, S; Ispiryan, M; Mueller, K
2013-02-01
We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.
NASA Astrophysics Data System (ADS)
Fan, W.; Bassett, D.; Denolle, M.; Shearer, P. M.; Ji, C.; Jiang, J.
2017-12-01
The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted 65 s with a rupture speed of 1.2 km/s, while the second stage lasted from 65 to 150 s with a rupture speed of 2.7 km/s. In addition, P-wave high-frequency radiated energy and fall-off rates indicate a rupture transition at 60 s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also collocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.
NASA Astrophysics Data System (ADS)
Yu, Qifeng; Liu, Xiaolin; Sun, Xiangyi
1998-07-01
Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for removing the noise of fringe patterns and extracting fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters can efficiently filter noise from fringe patterns and binary fringe patterns without distorting fringe features. A quadrantal-angle filter is developed to filter the FOM itself. With these new filters, the derivative-sign binary image (DSBI) method for extracting fringe skeletons is improved considerably. The improved DSBI method can extract high-density skeletons as well as common-density skeletons.
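A hedged sketch of a directional median filter of the kind proposed above: at each pixel, the median is taken along a short window oriented by the local fringe-orientation map, so smoothing follows the fringes rather than blurring across them. The window length and the synthetic fringe pattern are assumptions.

```python
import numpy as np

def directional_median(img, orient, half_len=4):
    out = np.empty_like(img)
    H, W = img.shape
    ks = np.arange(-half_len, half_len + 1)
    for i in range(H):
        for j in range(W):
            t = orient[i, j]                       # local fringe direction
            ii = np.clip((i + ks * np.sin(t)).round().astype(int), 0, H - 1)
            jj = np.clip((j + ks * np.cos(t)).round().astype(int), 0, W - 1)
            out[i, j] = np.median(img[ii, jj])     # median along the fringe
    return out

y, x = np.mgrid[:64, :64]
fringes = np.cos(0.3 * x)                          # vertical fringes
noisy = fringes + np.random.default_rng(9).normal(0, 0.5, fringes.shape)
orient = np.full(fringes.shape, np.pi / 2)         # fringe direction: along y
print("noise std before/after:",
      round((noisy - fringes).std(), 3),
      round((directional_median(noisy, orient) - fringes).std(), 3))
```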
Application of Seismic Array Processing to Tsunami Early Warning
NASA Astrophysics Data System (ADS)
An, C.; Meng, L.
2015-12-01
Tsunami wave predictions in current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before the data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment, and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture, and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival times and wave amplitudes reasonably fit the observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest, and Alaska, where dense seismic networks with real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800 instruments) and the EarthScope USArray Transportable Array (~400 instruments), are established.
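A back-of-the-envelope sketch of the slip-model construction described above: from an imaged rupture-area ellipse, an empirical moment-area scaling gives the seismic moment and average slip. The constants below (rigidity 40 GPa, 3 MPa stress drop, a generic M0 ~ A^(3/2) scaling) are illustrative assumptions, not the paper's calibrated relations.

```python
import numpy as np

a, b = 220e3 / 2, 120e3 / 2                  # ellipse semi-axes (m), from BP
A = np.pi * a * b                            # rupture area (m^2)
mu = 40e9                                    # rigidity (Pa), assumed
stress_drop = 3e6                            # assumed stress drop (Pa)
M0 = stress_drop * A**1.5                    # generic moment-area scaling
slip = M0 / (mu * A)                         # average slip D = M0 / (mu * A)
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)      # moment magnitude
print(f"A = {A / 1e6:.0f} km^2, D = {slip:.1f} m, Mw = {Mw:.1f}")
```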
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of the data in parameter estimation. With these two methods, we can estimate parameters from both hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the groundwater model MODFLOW, and we implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering method while also considering the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothesized architecture of the MODFLOW groundwater model, using virtual observation wells to simulate and observe the groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
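A minimal linear Kalman-filter update of the kind the study builds on, with a toy scalar state (the head at one observation well); the BME extension to uncertain soft data is not reproduced here, and all constants are placeholders.

```python
import numpy as np

rng = np.random.default_rng(10)
F, H = 0.98, 1.0            # state transition and observation operators
Q, R = 0.05, 0.5            # process and measurement noise variances

x_est, P = 10.0, 1.0        # initial head estimate (m) and its variance
truth = 10.0
for _ in range(50):
    truth = F * truth + rng.normal(0, np.sqrt(Q))   # true head evolves
    z = H * truth + rng.normal(0, np.sqrt(R))       # noisy well reading
    x_pred, P_pred = F * x_est, F * P * F + Q       # predict step
    K = P_pred * H / (H * P_pred * H + R)           # Kalman gain
    x_est = x_pred + K * (z - H * x_pred)           # update step
    P = (1 - K * H) * P_pred
print(f"final estimate {x_est:.2f} m vs truth {truth:.2f} m (P = {P:.3f})")
```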
Replacement of filters for respirable quartz measurement in coal mine dust by infrared spectroscopy.
Farcas, Daniel; Lee, Taekhee; Chisholm, William P; Soo, Jhy-Charm; Harper, Martin
2016-01-01
The objective of this article is to compare and characterize nylon, polypropylene (PP), and polyvinyl chloride (PVC) membrane filters that might be used to replace the vinyl/acrylic co-polymer (DM-450) filter currently used in the Mine Safety and Health Administration (MSHA) P-7 method (Quartz Analytical Method) and the National Institute for Occupational Safety and Health (NIOSH) Manual of Analytical Methods 7603 method (QUARTZ in coal mine dust, by IR re-deposition). This effort is necessary because DM-450 filters are no longer commercially available, and there is an impending shortage of them. For example, the MSHA Pittsburgh laboratory alone analyzes annually approximately 15,000 samples according to the MSHA P-7 method, which requires DM-450 filters. Membrane filters suitable for on-filter analysis should have high infrared (IR) transmittance in the spectral region 600-1000 cm(-1). Nylon (47 mm, 0.45 µm pore size), PP (47 mm, 0.45 µm pore size), and PVC (47 mm, 5 µm pore size) filters meet this specification. Limits of detection and limits of quantification were determined from Fourier transform infrared spectroscopy (FTIR) measurements of blank filters. The average measured quartz mass and coefficient of variation were determined from test filters spiked with respirable α-quartz following the MSHA P-7 and NIOSH 7603 methods. Quartz was also quantified in samples of respirable coal dust on each test filter type using the MSHA and NIOSH analysis methods. The results indicate that PP and PVC filters may replace DM-450 filters for quartz measurement in coal dust by FTIR. PVC filters of 5 µm pore size seemed to be a suitable replacement, although their ability to retain small particulates should be checked by further experiment.
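A short sketch of the limit-of-detection and limit-of-quantification estimates mentioned above, using the common 3-sigma and 10-sigma conventions on blank-filter readings divided by the calibration slope. All values are toy placeholders, not the article's measurements.

```python
import numpy as np

blank_abs = np.array([0.0021, 0.0018, 0.0025, 0.0019, 0.0023,
                      0.0020, 0.0022, 0.0017, 0.0024, 0.0021])  # blank filters
slope = 0.012            # calibration: absorbance per microgram of quartz (toy)

sigma = blank_abs.std(ddof=1)       # standard deviation of blank readings
lod = 3 * sigma / slope             # limit of detection
loq = 10 * sigma / slope            # limit of quantification
print(f"LOD = {lod:.2f} ug, LOQ = {loq:.2f} ug")
```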
Automated Determination of Magnitude and Source Length of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.
2017-12-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, because of the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time remains a challenge. Mw is an accurate measure for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach originating from Hara [2007] that estimates magnitude from P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time), depending on the epicentral distance. It may be a promising aid for disaster mitigation immediately after a damaging earthquake, especially for tsunami evacuation and emergency rescue.
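As a rough illustration of a Hara [2007]-style estimate, the sketch below regresses magnitude on peak P-wave displacement and back-projected source duration; the coefficients a, b, c are hypothetical placeholders to be calibrated against a catalog of known-Mw events, not the published values.

    import numpy as np

    def hara_style_magnitude(p_disp_max_m, duration_s, a=0.8, b=0.9, c=5.0):
        # Empirical regression: M = a*log10(Pd) + b*log10(duration) + c
        # (coefficients are placeholders; calibrate before any operational use)
        return a * np.log10(p_disp_max_m) + b * np.log10(duration_s) + c

    # Example: 0.5 mm peak tele-seismic P displacement, 120 s back-projected duration
    print(round(hara_style_magnitude(5e-4, 120.0), 1))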
Automated Determination of Magnitude and Source Extent of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, Dun
2017-04-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, because of the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time remains a challenge. Mw is an accurate measure for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach originating from Hara [2007] that estimates magnitude from P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time), depending on the epicentral distance. It may be a promising aid for disaster mitigation immediately after a damaging earthquake, especially for tsunami evacuation and emergency rescue.
Method of treating contaminated HEPA filter media in pulp process
Hu, Jian S.; Argyle, Mark D.; Demmer, Ricky L.; Mondok, Emilio P.
2003-07-29
A method for reducing contamination of HEPA filters with radioactive and/or hazardous materials is described. The method includes pre-processing of the filter for removing loose particles. Next, the filter medium is removed from the housing, and the housing is decontaminated. Finally, the filter medium is processed as pulp for removing contaminated particles by physical and/or chemical methods, including gravity, flotation, and dissolution of the particles. The decontaminated filter medium is then disposed of as non-RCRA waste; the particles are collected, stabilized, and disposed of according to well known methods of handling such materials; and the liquid medium in which the pulp was processed is recycled.
Developing Topic-Specific Search Filters for PubMed with Click-Through Data
Li, Jiao; Lu, Zhiyong
2013-01-01
Objectives: Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. Methods: We present an automated method to develop topic-specific filters on the basis of users' search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic-irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. Results: We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Conclusion: Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/ PMID:23666447
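A minimal sketch of the statistical term-selection step, assuming a simple smoothed log-odds score between the topic-relevant and background document sets; the scoring function and whitespace tokenization are illustrative simplifications of the authors' statistics.

    import math
    from collections import Counter

    def score_terms(relevant_docs, background_docs):
        # Document frequency of each term in both sets (terms counted once per doc)
        rel = Counter(t for d in relevant_docs for t in set(d.lower().split()))
        bg = Counter(t for d in background_docs for t in set(d.lower().split()))
        n_rel, n_bg = len(relevant_docs), len(background_docs)
        scores = {}
        for term, df in rel.items():
            p_rel = (df + 0.5) / (n_rel + 1.0)            # smoothed frequencies
            p_bg = (bg.get(term, 0) + 0.5) / (n_bg + 1.0)
            scores[term] = math.log(p_rel / p_bg)         # log-odds of relevance
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Top-ranked terms are then joined with Boolean OR to form a candidate filter.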
A new method for electric impedance imaging using an eddy current with a tetrapolar circuit.
Ahsan-Ul-Ambia; Toda, Shogo; Takemae, Tadashi; Kosugi, Yukio; Hongo, Minoru
2009-02-01
A new contactless technique for electrical impedance imaging that uses an eddy current together with the tetrapolar circuit method is proposed. The eddy current produced by a magnetic field is superimposed on the constant current normally used in the tetrapolar circuit method and thereby controls the current distribution in the body. By changing the current distribution, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the resistivity distribution. A least-squares error minimization method is used in the reconstruction algorithm. The principle of this method is explained theoretically. A backprojection algorithm was used to obtain 2-D images. Based on this principle, a measurement system was developed and model experiments were conducted with a saline-filled phantom. The estimated shape of each model in the reconstructed image was similar to that of the corresponding model. The results of these experiments confirm that the proposed method is applicable to the realization of electrical conductivity imaging.
Lyu, Weiwei; Cheng, Xianghong
2017-11-28
Transfer alignment is a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established that contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. To improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. A vehicle transfer alignment experiment indicates that the adaptive H∞ filtering method with delay compensation dramatically improves both the transfer alignment accuracy and the pure inertial navigation accuracy, demonstrating the superiority of the proposed filtering method.
Microwave active filters based on coupled negative resistance method
NASA Astrophysics Data System (ADS)
Chang, Chi-Yang; Itoh, Tatsuo
1990-12-01
A novel coupled negative resistance method for building a microwave active bandpass filter is introduced. Based on this method, four microstrip line end-coupled filters were built. Two are fixed-frequency one-pole and two-pole filters, and two are tunable one-pole and two-pole filters. In order to broaden the bandwidth of the end-coupled filter, a modified end-coupled structure is proposed. Using the modified structure, an active filter with a bandwidth up to 7.5 percent was built. All of the filters show significant passband performance improvement. Specifically, the passband bandwidth was broadened by a factor of 5 to 20.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 1980s. Although Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of a filter, and the computational time required is enormous. In this paper, we present a procedurally simple algebraic method requiring only two steps when applied to second-order 2-D quarter-plane filters. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Examples are given for both these types of filters as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
NASA Astrophysics Data System (ADS)
Li, Chengwei; Zhan, Liwei
2015-12-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the remaining NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated signals and friction signals. The friction signal between an airplane tire and the runway, recorded during a simulated airplane touchdown, features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
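A minimal sketch of the NIMF construction under stated assumptions: it uses the PyEMD package for the decomposition and SciPy's directed Hausdorff distance as a stand-in for the paper's modified Hausdorff distance, and it places the noise/signal boundary at the first large dissimilarity jump, which is one plausible reading of the selection rule rather than the authors' exact criterion.

    import numpy as np
    from PyEMD import EMD                      # pip package: EMD-signal
    from scipy.spatial.distance import directed_hausdorff

    def hybrid_emd_filter(x):
        # NIMF_k = signal - IMF_k; early IMFs are typically noise-dominated
        imfs = EMD()(x)
        if len(imfs) < 3:
            return x
        t = np.arange(len(x), dtype=float)
        nimfs = [x - imf for imf in imfs]
        ref = np.column_stack((t, nimfs[0]))
        d = [directed_hausdorff(ref, np.column_stack((t, n)))[0] for n in nimfs]
        k = int(np.argmax(np.diff(d))) + 1     # first large dissimilarity jump
        return imfs[k:].sum(axis=0)            # keep the signal-dominant modes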
Extracting tissue deformation using Gabor filter banks
NASA Astrophysics Data System (ADS)
Montillo, Albert; Metaxas, Dimitris; Axel, Leon
2004-04-01
This paper presents a new approach for accurate extraction of tissue deformation imaged with tagged MR. Our method, based on banks of Gabor filters, adjusts (1) the aspect and (2) the orientation of the filter's envelope and adjusts (3) the radial frequency and (4) the angle of the filter's sinusoidal grating to extract information about the deformation of tissue. The method accurately extracts tag line spacing, orientation, displacement, and effective contrast. Existing, non-adaptive methods often fail to recover useful displacement information near tissue boundaries, while our method works in the proximity of the boundaries. We also present an interpolation method to recover all tag information at a finer resolution than the filter bank parameters. Results are shown on simulated images of translating and contracting tissue.
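A minimal sketch of a (non-adaptive) Gabor filter bank response using scikit-image kernels; the frequency and orientation grid is an illustrative assumption, far coarser than a scheme that adapts the envelope aspect and grating angle to the local tag pattern.

    import numpy as np
    from scipy.ndimage import convolve
    from skimage.filters import gabor_kernel

    def gabor_bank_response(image, freqs=(0.05, 0.1, 0.2), n_theta=8):
        # Per-pixel magnitude of the strongest complex Gabor response in the bank
        img = image.astype(float)
        best = np.zeros_like(img)
        for f in freqs:
            for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
                k = gabor_kernel(frequency=f, theta=theta)
                re = convolve(img, np.real(k), mode="reflect")
                im = convolve(img, np.imag(k), mode="reflect")
                best = np.maximum(best, np.hypot(re, im))
        return best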
The Power Plant Operating Data Based on Real-time Digital Filtration Technology
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie
2018-03-01
Real-time monitoring of thermal power plant data is the basis of accurate thermal-economy analysis and accurate reconstruction of the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate operating data for the units and equipment of the thermal power plant. Unlike a traditional offline filtering algorithm, a real-time filtering algorithm cannot use future data to correct the current data, which imposes substantial constraints. The first-order lag filtering method and the weighted recursive average filtering method can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and applies them to real-time processing of simulation data and of thermal power plant operating data. The analysis reveals that the weighted recursive average filtering method achieves very good results on both the simulated data and the real-time plant data.
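A minimal sketch of the two real-time filters compared in the paper; the smoothing constant and the weight vector are illustrative choices, and both filters use only current and past samples, as real-time operation requires.

    import numpy as np

    def first_order_lag(x, alpha=0.2):
        # y[n] = alpha*x[n] + (1 - alpha)*y[n-1]
        y = np.empty(len(x))
        y[0] = x[0]
        for n in range(1, len(x)):
            y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
        return y

    def weighted_recursive_average(x, weights=(0.4, 0.3, 0.2, 0.1)):
        # Sliding weighted mean of the most recent samples, newest weighted most
        w = np.asarray(weights, dtype=float)
        y = np.empty(len(x))
        for n in range(len(x)):
            win = np.asarray(x[max(0, n - len(w) + 1): n + 1][::-1], dtype=float)
            y[n] = np.dot(w[:len(win)], win) / w[:len(win)].sum()
        return y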
NASA Astrophysics Data System (ADS)
Gu, Chengwei; Zeng, Dong; Lin, Jiahui; Li, Sui; He, Ji; Zhang, Hao; Bian, Zhaoying; Niu, Shanzhou; Zhang, Zhang; Huang, Jing; Chen, Bo; Zhao, Dazhe; Chen, Wufan; Ma, Jianhua
2018-06-01
Myocardial perfusion computed tomography (MPCT) imaging is commonly used to detect myocardial ischemia quantitatively. A limitation of MPCT is that it requires an additional radiation dose compared to unenhanced CT because of its repeated dynamic data acquisition. Meanwhile, noise and streak artifacts in low-dose cases are the main factors that degrade the accuracy of quantifying myocardial ischemia and hamper the diagnostic utility of filtered-backprojection-reconstructed MPCT images. Moreover, MPCT images are composed of a series of 2D/3D images, which can naturally be regarded as a 3rd/4th-order tensor, and they are globally correlated along time and sparse across space. To obtain higher-fidelity ischemia quantification from low-dose MPCT acquisitions, we propose a robust statistical iterative MPCT image reconstruction algorithm that incorporates tensor total generalized variation (TTGV) regularization into a penalized weighted least-squares framework. Specifically, the TTGV regularization fuses the spatial correlation of the myocardial structure and the temporal continuity of the contrast agent intake during perfusion. An efficient iterative strategy is then developed for the objective function optimization. Comprehensive evaluations have been conducted on a digital XCAT phantom and a preclinical porcine dataset regarding the accuracy of the reconstructed MPCT images, the quantitative differentiation of ischemia, and the algorithm's robustness and efficiency.
Comparison study of image quality and effective dose in dual energy chest digital tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Choi, Sunghoon; Lee, Haenghwa; Kim, Dohyeon; Choi, Seungyeon; Kim, Hee-Joung
2018-07-01
The present study aimed to introduce a recently developed digital tomosynthesis system for the chest and describe the procedure for acquiring dual energy bone decomposed tomosynthesis images. Various beam quality and reconstruction algorithms were evaluated for acquiring dual energy chest digital tomosynthesis (CDT) images and the effective dose was calculated with ion chamber and Monte Carlo simulations. The results demonstrated that dual energy CDT improved visualization of the lung field by eliminating the bony structures. In addition, qualitative and quantitative image quality of dual energy CDT using iterative reconstruction was better than that with filtered backprojection (FBP) algorithm. The contrast-to-noise ratio and figure of merit values of dual energy CDT acquired with iterative reconstruction were three times better than those acquired with FBP reconstruction. The difference in the image quality according to the acquisition conditions was not noticeable, but the effective dose was significantly affected by the acquisition condition. The high energy acquisition condition using 130 kVp recorded a relatively high effective dose. We conclude that dual energy CDT has the potential to compensate for major problems in CDT due to decomposed bony structures, which induce significant artifacts. Although there are many variables in the clinical practice, our results regarding reconstruction algorithms and acquisition conditions may be used as the basis for clinical use of dual energy CDT imaging.
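A minimal sketch of weighted log subtraction, the standard route to a bone-suppressed dual-energy projection; the weighting factor is an illustrative assumption that in practice is tuned until bony structures cancel, and the decomposition is applied per projection before tomosynthesis reconstruction.

    import numpy as np

    def bone_suppressed(I_high, I_low, w=0.5, eps=1e-6):
        # Soft-tissue image: S = ln(I_high) - w * ln(I_low)
        # w is adjusted until ribs/spine vanish; the residual contrast is soft tissue.
        return np.log(I_high + eps) - w * np.log(I_low + eps)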
Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A.; Pan, Xiaochuan
2009-01-01
Imaging plays a vital role in radiation therapy and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also an increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, has been developed and its feasibility in clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized with a lower contrast to noise ratio. Image artifacts due to transverse data truncation, which would have occurred in conventional reconstruction algorithms, are avoided and image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy. PMID:19472624
The development of rainfall forecasting using the Kalman filter
NASA Astrophysics Data System (ADS)
Zulfi, Mohammad; Hasan, Moh.; Dwidja Purnomo, Kosala
2018-04-01
Rainfall forecasting is very useful for agricultural planning, since rainfall information supports decisions about when to plant certain commodities. In this study, rainfall is forecast with ARIMA and Kalman filter methods. The Kalman filter method expresses a time series model in linear state-space form to determine future forecasts, using a recursive solution that minimizes error. The rainfall data in this research were clustered by K-means clustering, and the Kalman filter method was then used to model and forecast rainfall in each cluster. We used ARIMA(p,d,q) to construct the state space for the Kalman filter model, giving four groups of data and one model per group. In conclusion, the Kalman filter method performs better than the ARIMA model for rainfall forecasting in each group, as shown by the Kalman filter method's smaller error compared with that of the ARIMA model.
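A minimal sketch of this ARIMA-in-state-space workflow using statsmodels, whose SARIMAX implementation runs a Kalman filter internally; the order (1,1,1) and the synthetic series are illustrative assumptions, not the study's fitted models.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    rainfall = 100 + rng.normal(0, 20, 120).cumsum() * 0.1   # synthetic monthly series

    # ARIMA(p,d,q) cast in state-space form; fitting runs the Kalman filter/smoother
    model = SARIMAX(rainfall, order=(1, 1, 1))
    result = model.fit(disp=False)
    forecast = result.get_forecast(steps=12)
    print(forecast.predicted_mean)        # 12-month-ahead rainfall forecast

In the paper's setting, one such model would be fitted per K-means cluster of the rainfall data.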
Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.
Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas
2013-03-01
The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
Developing topic-specific search filters for PubMed with click-through data.
Li, J; Lu, Z
2013-01-01
Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. We present an automated method to develop topic-specific filters on the basis of users' search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation, so speckle reduction is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks, covering fan-shaped, diamond-shaped, and checkerboard-shaped filters. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity, and frequency selectivity than existing design methods. The proposed method is validated on synthetic and real ultrasound data, ensuring improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
Stevens, S; Dvorak, P; Spevacek, V; Pilarova, K; Bray-Parry, M; Gesner, J; Richmond, A
2018-01-01
To provide a 3D dosimetric evaluation of a commercial portal dosimetry system using 2D/3D detectors under ideal conditions using VMAT. A 2D ion chamber array, radiochromic film and gel dosimeter were utilised to provide a dosimetric evaluation of transit phantom and pre-treatment 'fluence' EPID back-projected dose distributions for a standard VMAT plan. In-house 2D and 3D gamma methods compared pass statistics relative to each dosimeter and TPS dose distributions. Fluence mode and transit EPID dose distributions back-projected onto phantom geometry produced 2D gamma pass rates in excess of 97% relative to other tested detectors and exported TPS dose planes when a 3%, 3 mm global gamma criterion was applied. Use of a gel dosimeter within a glass vial allowed comparison of measured 3D dose distributions versus EPID 3D dose and TPS calculated distributions. 3D gamma comparisons between modalities at 3%, 3 mm gave pass rates in excess of 92%. Use of fluence mode was indicative of transit results under ideal conditions with slightly reduced dose definition. 3D EPID back projected dose distributions were validated against detectors in both 2D and 3D. Cross validation of transit dose delivered to a patient is limited due to reasons of practicality and the tests presented are recommended as a guideline for 3D EPID dosimetry commissioning; allowing direct comparison between detector, TPS, fluence and transit modes. The results indicate achievable gamma scores for a complex VMAT plan in a homogenous phantom geometry and contributes to growing experience of 3D EPID dosimetry. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than both the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
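For reference, a minimal sketch of HYPR-LR-style denoising, the post-reconstruction technique the new kernel is compared against; the Gaussian kernel width is an illustrative assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hypr_lr(frames, sigma=3.0, eps=1e-8):
        # frames: array of shape (T, ...) holding the dynamic low-count frames
        # Each output frame = composite * smooth(frame) / smooth(composite)
        composite = frames.mean(axis=0)            # high-count composite image
        sc = gaussian_filter(composite, sigma) + eps
        out = np.empty_like(frames, dtype=float)
        for t in range(frames.shape[0]):
            out[t] = composite * gaussian_filter(frames[t], sigma) / sc
        return out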
Metal artifact correction for x-ray computed tomography using kV and selective MV imaging.
Wu, Meng; Keil, Andreas; Constantin, Dragos; Star-Lack, Josh; Zhu, Lei; Fahrig, Rebecca
2014-12-01
The overall goal of this work is to improve the computed tomography (CT) image quality for patients with metal implants or fillings by completing the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. When both of these imaging systems, which are available on current radiotherapy devices, are used, metal streak artifacts are avoided, and the soft-tissue contrast is restored, even for regions in which the kV data cannot contribute any information. Three image-reconstruction methods, including two filtered back-projection (FBP)-based analytic methods and one iterative method, for combining kV and MV projection data from the two on-board imaging systems of a radiotherapy device are presented in this work. The analytic reconstruction methods modify the MV data based on the information in the projection or image domains and then patch the data onto the kV projections for a FBP reconstruction. In the iterative reconstruction, the authors used dual-energy (DE) penalized weighted least-squares (PWLS) methods to simultaneously combine the kV/MV data and perform the reconstruction. The authors compared kV/MV reconstructions to kV-only reconstructions using a dental phantom with fillings and a hip-implant numerical phantom. Simulation results indicated that dual-energy sinogram patch FBP and the modified dual-energy PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in the kV projections. The root-mean-square errors of soft-tissue patterns obtained using combined kV/MV data are 10-15 Hounsfield units smaller than those of the kV-only images, and the structural similarity index measure also indicates a 5%-10% improvement in the image quality. The added dose from the MV scan is much less than the dose from the kV scan if a high efficiency MV detector is assumed. The authors have shown that it is possible to improve the image quality of kV CTs for patients with metal implants or fillings by completing the missing kV projection data with selectively acquired MV data that do not suffer from photon starvation. Numerical simulations demonstrated that dual-energy sinogram patch FBP and a modified kV/MV PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in kV projections. Combined kV/MV images may permit the improved delineation of structures of interest in CT images for patients with metal implants or fillings.
Filter and method of fabricating
Janney, Mark A.
2006-02-14
A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.
Investigation on filter method for smoothing spiral phase plate
NASA Astrophysics Data System (ADS)
Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian
2018-03-01
Spiral phase plates (SPPs) for generating vortex hollow beams are highly efficient in various applications. However, it is difficult to fabricate an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper demonstrates the smoothing of a continuous spiral phase plate using filter methods. Numerical simulations indicate that each filter method, including spatial-domain and frequency-domain filters, has a distinct impact on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is well suited to the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
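A minimal sketch of smoothing a simulated SPP height map with a spatial Gaussian filter; the plate parameters (topological charge, wavelength, refractive index, grid) are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Illustrative SPP height map: h = l * lambda * phi / (2*pi*(n - 1))
    l, wavelength, n_index = 1, 632.8e-9, 1.46
    y, x = np.mgrid[-256:256, -256:256]
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)      # helical phase with one 2*pi step
    height = l * wavelength * phi / (2 * np.pi * (n_index - 1))

    smoothed = gaussian_filter(height, sigma=5)    # spatial Gaussian filter
    # Smoothing softens the discontinuous phase step, trading some vortex purity
    # for a surface profile that a polishing tool can actually follow.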
Lyu, Weiwei
2017-01-01
Transfer alignment is a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established that contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. To improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. A vehicle transfer alignment experiment indicates that the adaptive H∞ filtering method with delay compensation dramatically improves both the transfer alignment accuracy and the pure inertial navigation accuracy, demonstrating the superiority of the proposed filtering method. PMID:29182592
An Improved Filtering Method for Quantum Color Image in Frequency Domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency-domain filters. The proposed quantum filters are constructed by using a quantum oracle to implement the filter function. Compared with existing methods, our method is not only suitable for color images but can also flexibly design notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantage of quantum frequency filtering lies in the exploitation of the efficient implementation of the quantum Fourier transform.
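For intuition, a minimal classical analogue of the frequency-domain filtering the quantum circuits perform: an FFT-based notch filter applied per colour channel; the notch positions and radius are illustrative assumptions.

    import numpy as np

    def notch_filter_channel(channel, notches, radius=4):
        # notches: list of (u, v) frequency offsets (centered spectrum) to zero out
        F = np.fft.fftshift(np.fft.fft2(channel))
        h, w = F.shape
        yy, xx = np.mgrid[:h, :w]
        for u, v in notches:
            for cu, cv in ((u, v), (-u, -v)):      # keep conjugate symmetry
                mask = (yy - (h // 2 + cv))**2 + (xx - (w // 2 + cu))**2 <= radius**2
                F[mask] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    # Applying this to each of the R, G, B channels gives the selective filtering
    # effect that the quantum oracle implements in the frequency domain.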
Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.
Milani, Ali A; Panahi, Issa M; Briggs, Richard
2007-01-01
A delayless subband filtering structure, a high-performance frequency-domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands, and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two stacking methods, called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in the FXLMS algorithm with a non-minimum-phase secondary path is explored. The investigation covers different adaptive algorithms (nLMS, APA, and RLS), different weight stacking methods, and different numbers of subbands.
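For orientation, a minimal fullband FXLMS sketch (the paper's delayless subband version distributes this adaptation across subbands and stacks the weights); the secondary-path estimate, filter length, and step size are illustrative assumptions, and the same FIR serves as both the true and the modeled secondary path.

    import numpy as np

    def fxlms(x, d, s_hat, L=64, mu=1e-3):
        # x: reference noise; d: disturbance at the error sensor
        # s_hat: FIR secondary-path model (len(s_hat) <= L assumed)
        w = np.zeros(L)
        xf = np.convolve(x, s_hat)[:len(x)]        # filtered-x reference
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            y[n] = w @ x[n - L + 1: n + 1][::-1]               # anti-noise sample
            ys = s_hat @ y[n - len(s_hat) + 1: n + 1][::-1]    # through the path
            e[n] = d[n] - ys                                   # residual at the mic
            w += mu * e[n] * xf[n - L + 1: n + 1][::-1]        # LMS on filtered-x
        return e                                    # should decay as w converges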
A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm
NASA Astrophysics Data System (ADS)
Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan
2008-03-01
The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, like other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computational load but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such rebinning not only removes the weighting factor but also retains the other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions to achieve ROI data acquisition; the combined motion of the detector and source generally forms a complex trajectory. We investigate image reconstruction within an ROI from data acquired in such applications.
von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem
2018-05-23
The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to which extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy integrating detector and PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. Resulting higher noise levels in sharp kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.
Complex noise suppression using a sparse representation and 3D filtering of images
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.
2017-08-01
A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise is substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise, based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of the errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given, and a filtering block diagram is developed in accordance with the novel approach. Simulations of the novel image filtering method show the advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and in visual comparison of the filtered images.
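A minimal sketch of the first two stages under stated assumptions: impulses are detected by deviation from a median-filtered image and replaced, then the additive noise is attenuated with a wavelet shrinkage step via PyWavelets standing in for the paper's 3D/sparse filtering stage; the threshold choices are illustrative.

    import numpy as np
    from scipy.ndimage import median_filter
    import pywt

    def suppress_complex_noise(img, imp_thresh=40, wavelet="db4", level=2):
        # Stage 1: replace pixels that deviate strongly from a local median
        med = median_filter(img, size=3)
        impulses = np.abs(img - med) > imp_thresh
        clean = np.where(impulses, med, img).astype(float)
        # Stage 2: soft-threshold wavelet detail coefficients (universal threshold)
        coeffs = pywt.wavedec2(clean, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(clean.size))
        den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in lvl)
                             for lvl in coeffs[1:]]
        return pywt.waverec2(den, wavelet)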
Filter replacement lifetime prediction
Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.
2017-10-25
Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
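A minimal sketch of the lifetime arithmetic the claims describe, assuming filter effectiveness is logged over time from contaminant sensors and consumed roughly linearly; the data, fitting choice, and end-of-life threshold are illustrative.

    import numpy as np

    def remaining_filter_lifetime(times_h, effectiveness, end_of_life=0.2):
        # Fit the rate of filter consumption from the effectiveness history,
        # then extrapolate to the end-of-life threshold.
        rate = -np.polyfit(times_h, effectiveness, 1)[0]     # loss per hour
        if rate <= 0:
            return np.inf                                    # no measurable consumption
        return (effectiveness[-1] - end_of_life) / rate      # hours remaining

    # Example: effectiveness sampled daily over ten days
    t = np.arange(0, 240, 24.0)
    eff = 1.0 - 0.002 * t
    print(remaining_filter_lifetime(t, eff))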
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested via the regulatory Agency and include spiked audit materials for EPA Method 29-Metals Emissions from Stationary Sources, as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders containing less than 1.3 µg/in.² of each of the metals to be measured is required. Risk Assessment testing imposes even stricter requirements for clean filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see if a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors and within lots of filters from the same vendor showed a wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels. An acid cleaning step produced improvement in contamination level, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.
Hepa filter dissolution process
Brewer, Ken N.; Murphy, James A.
1994-01-01
A process for the dissolution of spent high-efficiency particulate air (HEPA) filters and the subsequent combination of the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient method of treating spent HEPA filters for disposal.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection when the color or spectral optical properties of the imaged materials are partially known in advance. The use of matched filtering on spectral data along with shape data is therefore an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter in which ordinary spatially matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
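A minimal sketch of the spectral stage, assuming the classical matched filter w ∝ C⁻¹s built from a known target spectrum s and the background covariance C; the spatial pre-filtering layer is omitted, and the regularization constant is an illustrative choice.

    import numpy as np

    def spectral_matched_filter(cube, target):
        # cube: (H, W, B) hyperspectral image; target: (B,) known target spectrum
        H, W, B = cube.shape
        X = cube.reshape(-1, B).astype(float)
        mu = X.mean(axis=0)
        C = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(B)   # regularized covariance
        s = target - mu
        Ci_s = np.linalg.solve(C, s)
        snr_gain = np.sqrt(s @ Ci_s)               # output SNR for a unit target
        scores = (X - mu) @ Ci_s / snr_gain        # matched-filter detection map
        return scores.reshape(H, W)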
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Effects of refractive index mismatch in optical CT imaging of polymer gel dosimeters.
Manjappa, Rakesh; Makki S, Sharath; Kumar, Rajesh; Kanhirodan, Rajan
2015-02-01
We propose an image reconstruction technique, algebraic reconstruction technique with refraction correction (ART-rc). The proposed method accounts for refractive index mismatches present in a gel dosimeter scanner at the boundary, and also corrects for interior ray refraction. Polymer gel dosimeters with high-dose regions have a higher refractive index and optical density compared to the background medium; these changes in refractive index at high dose result in interior ray bending. The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the interior multiple refraction at the inhomogeneities. Jacob's ray tracing algorithm has been modified to calculate the pathlengths of the ray that traverses through the higher dose regions. The algorithm computes the length of the ray in each pixel along its path, and this is used as the weight matrix. Algebraic reconstruction technique and pixel-based reconstruction algorithms are used for solving the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels. The experimental dosimetric results are also presented. The results show that the proposed scheme ART-rc is able to reconstruct optical density inside the dosimeter better than the results obtained using filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement using ART-rc is evaluated using the gamma-index. The refraction errors due to regions of different refractive indices are discussed. The effects of modeling the interior refraction in the dose region are presented. The errors propagated due to multiple refraction effects have been modeled, and the improvements in reconstruction using the proposed model are presented. The refractive index of the dosimeter has a mismatch with the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities with different refractive indices and optical densities embedded in the dosimeter. This is achieved by tracking the path of the ray that traverses through the dosimeter. Extensive simulation studies have been carried out, and the simulation results are found to match the experimental results.
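For orientation, a minimal ART (Kaczmarz) loop of the kind used here is sketched below in Python/NumPy. The refraction correction itself enters only through the weight matrix W, whose entries are the ray pathlengths computed by the modified ray tracer along the bent paths; W is assumed given, and the relaxation value is illustrative:

    import numpy as np

    def art_reconstruct(W, p, n_iter=50, relax=0.2):
        """Basic ART (Kaczmarz) solver for W x = p.

        W: (n_rays, n_pixels) weight matrix; entry W[i, j] is the pathlength
        of ray i in pixel j (here assumed computed along refraction-corrected,
        i.e. bent, ray paths by the modified ray tracer).
        p: (n_rays,) measured projections (optical density line integrals).
        """
        x = np.zeros(W.shape[1])
        row_norms = np.einsum('ij,ij->i', W, W)
        for _ in range(n_iter):
            for i in np.where(row_norms > 0)[0]:
                # Relaxed projection of x onto the hyperplane of ray i.
                x += relax * (p[i] - W[i] @ x) / row_norms[i] * W[i]
        return x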
Feasibility of RACT for 3D dose measurement and range verification in a water phantom.
Alsanea, Fahed; Moskvin, Vadim; Stantz, Keith M
2015-02-01
The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. The pressure signals, derived from the thermoacoustic wave equation, were simulated using Monte Carlo methods for the dose deposited by a pulsed proton beam with a 1 cm lateral beam width and ranges of 16, 20, and 27 cm in water. The resulting dosimetric images were reconstructed by implementing a 3D filtered backprojection algorithm with the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependency studies on the detector bandwidth and proton beam pulse width were performed, after which different noise levels were added to the detector signals (using a 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to investigate the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated with the dose within the Bragg peak. Based on the noise-dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error < 150 μm) for a 1 cGy Bragg peak dose, where the integral dose within the Bragg peak was measured to within 2%. For existing hydrophone detector sensitivities, a Bragg peak dose of 1.6 cGy is possible. This study demonstrates that a computed tomographic scanner based on ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centigray sensitivity. Realizing this technology in the clinic has the potential to significantly impact beam commissioning, treatment verification during particle beam therapy, and image-guided techniques.
Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods
NASA Astrophysics Data System (ADS)
Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng
2011-08-01
The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained through a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated on a voxel-by-voxel basis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no statistically significant finding in any cerebral area using the SPM2 analysis. These findings might help us to understand the reliability of I-123 ADAM SPECT imaging and to further develop new strategies for the treatment of psychiatric disorders.
Wang, Jin; Zhang, Chen; Wang, Yuanyuan
2017-05-30
In photoacoustic tomography (PAT), total variation (TV) based iterative algorithms are reported to perform well in PAT image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that effectively overcomes this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weightedly sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter, which evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. The results obtained show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of the reconstructed images, with the peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In-vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than TV, with sharper image edges and clearer texture details. Both the numerical simulations and in-vitro experiments confirm that the DDTV provides a significant quality improvement of PAT reconstructed images for various directivity patterns.
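The sketch below (Python with NumPy/SciPy) illustrates the general idea of a direction-weighted TV term with a gradient-estimated orientation field; it is a simplified stand-in, not the authors' exact DDTV functional or reliability weighting:

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def directional_tv(img, alpha=0.2, sigma=2.0):
        """Simplified directional TV: gradients across edges (along the
        locally dominant gradient direction) are down-weighted by alpha,
        so sharp transitions and texture stripes are penalized less."""
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        # Smoothed structure tensor gives a per-pixel orientation estimate.
        jxx = gaussian_filter(gx * gx, sigma)
        jyy = gaussian_filter(gy * gy, sigma)
        jxy = gaussian_filter(gx * gy, sigma)
        theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant gradient angle
        # Rotate gradients into (across-edge, along-edge) coordinates.
        g_edge = np.cos(theta) * gx + np.sin(theta) * gy
        g_flat = -np.sin(theta) * gx + np.cos(theta) * gy
        return np.sum(np.sqrt(alpha * g_edge ** 2 + g_flat ** 2))

In an iterative reconstruction, a term of this form would replace the isotropic TV penalty, with theta re-estimated at each iteration as the abstract describes.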
Dual Megathrust Slip Behaviors of the 2014 Iquique Earthquake Sequence
NASA Astrophysics Data System (ADS)
Meng, L.; Huang, H.; Burgmann, R.; Ampuero, J. P.; Strader, A. E.
2014-12-01
The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A M 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of Northern Chile. This event was preceded by a 2-week-long foreshock sequence including a M 6.7 earthquake. Repeating earthquakes are found among the foreshock sequence that migrated towards the mainshock area, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations of the recurrence time of repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while other repeaters occurred both before and after the mainshock in the area complementary to the mainshock rupture. The spatial and temporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip in leading up to the mainshock and aftershock activities. Various finite fault models indicate that the coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show slow initiation with low-amplitude moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady initiation at high frequency (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the high-frequency rupture remains within an area of low gravity anomaly, suggesting possible upper-crustal structures that promote high-frequency generation. Back-projection also shows an episode of reverse rupture propagation, which suggests a delayed failure of asperities in the foreshock area. Our results highlight the complexity of the interactions between large-scale aseismic slow slip and dynamic ruptures of megathrust earthquakes.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
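A minimal sketch of the STEM idea follows (Python with NumPy/SciPy): 4D Gaussian smoothing, then Richardson-Lucy (EM) deconvolution using the same Gaussian as the point-spread function. The sigma values and iteration count are illustrative assumptions, not the published settings:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stem_filter(frames, sigma=(1.5, 1.5, 1.5, 1.0), n_em=10, eps=1e-8):
        """STEM-style denoising of a non-negative 4D PET array (x, y, z, t):
        Gaussian smoothing suppresses noise at all frequencies, then EM
        (Richardson-Lucy) deconvolution restores the frequencies most
        important to the signal."""
        blurred = gaussian_filter(frames, sigma)
        x = blurred.copy()
        for _ in range(n_em):
            # EM update; the Gaussian kernel is symmetric, so the adjoint
            # of the blur operator is the blur itself.
            ratio = blurred / (gaussian_filter(x, sigma) + eps)
            x *= gaussian_filter(ratio, sigma)
        return x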
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous point LIDAR dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic highs, along with large sizes of non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of the ground measurements at topographic highs. Furthermore, to obtain the optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
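For orientation, a minimal raster version of the progressive morphological idea is sketched below (Python with SciPy). The real PM/GAPM filters work at the point level with adaptively estimated thresholds, so the window growth and the constant-slope threshold rule here are illustrative assumptions; the constant slope is exactly what the adaptive GAPM variant replaces:

    import numpy as np
    from scipy.ndimage import grey_opening

    def pm_filter(dem, cell=1.0, slope=0.3, dh0=0.3, dh_max=2.5, n_steps=4):
        """Minimal progressive morphological filter on a gridded DEM of
        minimum elevations per cell. Returns a boolean ground mask."""
        ground = np.ones_like(dem, dtype=bool)
        surface = dem.copy()
        for k in range(1, n_steps + 1):
            win = 2 ** k + 1                            # growing window size
            opened = grey_opening(surface, size=(win, win))
            # Elevation-difference threshold grows with window size and the
            # assumed terrain slope; points rising above it are non-ground.
            dh = min(dh0 + slope * (win - 1) * cell, dh_max)
            ground &= (surface - opened) <= dh
            surface = opened
        return ground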
Method for filtering solvent and tar sand mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelterborn, J. C.; Stone, R. A.
1985-09-03
A method for filtering spent tar sands from a bitumen and organic solvent solution comprises separating the solution into two streams wherein the bulk of the coarser spent tar sand is in a first stream and has an average particle size of about 10 to about 100 mesh and the bulk of the finer spent tar sand is in a second stream; producing a filter cake by filtering the coarser spent tar sand from the first stream; and filtering the finer spent tar sand from the second stream with the filter cake. The method is particularly useful for filtering solutions of bitumen extracted from bitumen containing diatomite, spent diatomite and organic solvent.
HEPA filter dissolution process
Brewer, K.N.; Murphy, J.A.
1994-02-22
A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method, acid leaching of the spent filters, which is an inefficient way of treating spent HEPA filters for disposal. 4 figures.
An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.
Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G
2018-06-04
Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis and still a challenge when processing data acquired from thin mammalian myocardium. Therefore, we propose and investigate the use of an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from these kinds of tissue usually having low signal-to-noise ratio (SNR). We demonstrate how filtering parameters can be chosen automatically without additional user input. For systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our developed filter outperformed the other filter methods regarding local activation time detection at SNRs smaller than 3 dB which are typical noise ratios expected in these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. The spatio-temporal filter parameters were automatically adapted in contrast to the other investigated filters. Copyright © 2018 Elsevier Ltd. All rights reserved.
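A bare-bones version of the filtering step is sketched below (Python with SciPy); the mapping from the SNR estimate to the kernel widths is an assumed stand-in for the automatic parameter selection described above:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def st_gauss(stack, snr_db):
        """Spatio-temporal Gaussian filtering of an optical-mapping stack
        (y, x, t), with widths chosen from an SNR estimate: wider kernels
        for noisier recordings (heuristic mapping, assumed here)."""
        sigma_space = np.clip(4.0 - 0.5 * snr_db, 0.5, 4.0)
        sigma_time = np.clip(2.0 - 0.25 * snr_db, 0.5, 2.0)
        return gaussian_filter(stack, sigma=(sigma_space, sigma_space, sigma_time))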
Method and apparatus for filtering gas with a moving granular filter bed
Brown, Robert C.; Wistrom, Corey; Smeenk, Jerod L.
2007-12-18
A method and apparatus for filtering gas (58) with a moving granular filter bed (48) involves moving a mass of particulate filter material (48) downwardly through a filter compartment (35); tangentially introducing gas into the compartment (54) to move in a cyclonic path downwardly around the moving filter material (48); diverting the cyclonic path (58) to a vertical path (62) to cause the gas to directly interface with the particulate filter material (48); thence causing the gas to move upwardly through the filter material (48) through a screened partition (24, 32) into a static upper compartment (22) of a filter compartment for exodus (56) of the gas which has passed through the particulate filter material (48).
Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne
2017-01-01
BACKGROUND Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. OBJECTIVES This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. METHODS Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. RESULTS The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. The interviews and questionnaire responses showed that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. LIMITATIONS The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. CONCLUSIONS Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly.
Searchers found filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. FUNDING The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496). PMID:29188764
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
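The detector characterization above rests on the standard cascaded-systems relation between the three measured quantities, DQE(f) = MTF(f)^2 / (q · NNPS(f)); a minimal sketch of that relation follows (Python/NumPy; the array names and fluence parameter are ours):

    import numpy as np

    def dqe(mtf, nnps, q):
        """DQE(f) = MTF(f)^2 / (q * NNPS(f)).

        mtf, nnps: 1D arrays sampled on a common spatial-frequency axis
        (mm^-1), with NNPS normalized by the squared large-area signal.
        q: incident photon fluence (photons per mm^2) for the exposure used.
        """
        return mtf ** 2 / (q * nnps)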
Geyer, Lucas L; Glenn, G Russell; De Cecco, Carlo Nicola; Van Horn, Mark; Canstein, Christian; Silverman, Justin R; Krazinski, Aleksander W; Kemper, Jenny M; Bucher, Andreas; Ebersberger, Ullrich; Costello, Philip; Bamberg, Fabian; Schoepf, U Joseph
2015-09-01
To use suitable objective methods of analysis to assess the influence of the combination of an integrated-circuit computed tomographic (CT) detector and iterative reconstruction (IR) algorithms on the visualization of small (≤3-mm) coronary artery stents. By using a moving heart phantom, 18 data sets obtained from three coronary artery stents with small diameters were investigated. A second-generation dual-source CT system equipped with an integrated-circuit detector was used. Images were reconstructed with filtered back-projection (FBP) and IR at a section thickness of 0.75 mm (FBP75 and IR75, respectively) and IR at a section thickness of 0.50 mm (IR50). Multirow intensity profiles in Hounsfield units were modeled by using a sum-of-Gaussians fit to analyze in-plane image characteristics. Out-of-plane image characteristics were analyzed with z upslope of multicolumn intensity profiles in Hounsfield units. Statistical analysis was conducted with one-way analysis of variance and the Student t test. Independent of stent diameter and heart rate, IR75 resulted in significantly increased xy sharpness, signal-to-noise ratio, and contrast-to-noise ratio, as well as decreased blurring and noise compared with FBP75 (eg, 2.25-mm stent, 0 beats per minute; xy sharpness, 278.2 vs 252.3; signal-to-noise ratio, 46.6 vs 33.5; contrast-to-noise ratio, 26.0 vs 16.8; blurring, 1.4 vs 1.5; noise, 15.4 vs 21.2; all P < .001). In the z direction, the upslopes were substantially higher in the IR50 reconstructions (2.25-mm stent: IR50, 94.0; IR75, 53.1; and FBP75, 48.1; P < .001). The implementation of an integrated-circuit CT detector provides substantially sharper out-of-plane resolution of coronary artery stents at 0.5-mm section thickness, while the use of iterative image reconstruction mostly improves in-plane stent visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidtlein, CR; Hwang, S; Veeraraghavan, H
Purpose: This study demonstrates a methodology for tracking changes in metastatic bone disease using trajectories in material basis space in serial dual energy computed tomography (DECT) studies. Methods: This study includes patients with bone metastases from breast cancer who had clinical surveillance CT scans using a General Electric CT750HD in dual energy mode. A radiologist defined regions of interest (ROIs) for bone metastasis, normal bone, and marrow across the serial DECT scans. Our approach employs a Radon transform to forward-project the basis images, namely, water and iodine, into sinogram space. These data are then repartitioned into fat/bone and effective density/Z image pairs using assumed energy spectra for the x-ray energies. This approach both helps remove negative material densities and avoids adding spectrum-hardening artifacts. These new basis data sets were then reconstructed via filtered back-projection to create new material basis pair images. The trajectories of these pairs were then plotted in the new basis space, providing a means to both visualize and quantitatively measure changes in the material properties of the tumors. Results: ROIs containing radiologist-defined metastatic bone disease showed well-defined trajectories in both fat/bone and effective density/Z space. ROIs that contained radiologist-defined normal bone and marrow did not exhibit any discernible trajectories and were stable from scan to scan. Conclusions: The preliminary results show that changes in material composition and effective density/Z image pairs were seen primarily in metastases and not in normal tissue. This study indicates that by using routine clinical DECT it may be possible to monitor therapy response of bone metastases, because healing or worsening bone metastases change the material composition of bone. Additional studies are needed to further validate these results and to test for their correlation with outcome.
Image reconstruction from cone-beam projections with attenuation correction
NASA Astrophysics Data System (ADS)
Weng, Yi
1997-07-01
In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system. An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
MO-DE-BRA-06: 3D Image Acquisition and Reconstruction Explained with Online Animations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesner, A
Purpose: Understanding the principles of 3D imaging and image reconstruction is fundamental to the field of medical imaging. Clinicians, technologists, physicists, patients, students, and inquisitive minds all stand to benefit from greater comprehension of the supporting technologies. To help explain the basic principles of 3D imaging, we developed multi-frame animations that convey the concepts of tomographic imaging. The series of free (gif) animations are accessible online, and provide a multimedia introduction to the main concepts of image reconstruction. Methods: Text and animations were created to convey the principles of analytic tomography in CT, PET, and SPECT. Specific topics covered included: principles of sinograms/image data storage, forward projection, principles of PET acquisitions, and filtered backprojection. A total of 8 animations were created and presented for CT, PET, and digital phantom formats. In addition, a free executable is also provided to allow users to create their own tomographic animations – providing an opportunity for interaction and personalization to help foster user interest. Results: Tutorial text and animations have been posted online, freely available to view or download. The animations appear in first position in a Google search of "image reconstruction animations". The website currently receives approximately 200 hits/month from all over the world, and the usage is growing. Positive feedback has been collected from users. Conclusion: We identified a need for improved teaching tools to help visualize the (temporally variant) concepts of image reconstruction, and have shown that animations can be a useful tool for this aspect of education. Furthermore, posting animations freely on the web has shown to be a good way to maximize their impact in the community. In future endeavors, we hope to expand this animated content to cover principles of iterative reconstruction, as well as other phenomena relating to imaging.
Validation of Left Ventricular Ejection Fraction with the IQ•SPECT System in Small-Heart Patients.
Yoneyama, Hiroto; Shibutani, Takayuki; Konishi, Takahiro; Mizutani, Asuka; Hashimoto, Ryosuke; Onoguchi, Masahisa; Okuda, Koichi; Matsuo, Shinro; Nakajima, Kenichi; Kinuya, Seigo
2017-09-01
The IQ•SPECT system, which is equipped with multifocal collimators (SMART ZOOM) and uses ordered-subset conjugate gradient minimization as the reconstruction algorithm, reduces the acquisition time of myocardial perfusion imaging compared with conventional SPECT systems equipped with low-energy high-resolution collimators. We compared the IQ•SPECT system with a conventional SPECT system for estimating left ventricular ejection fraction (LVEF) in patients with a small heart (end-systolic volume < 20 mL). Methods: The study consisted of 98 consecutive patients who underwent a 1-d stress-rest myocardial perfusion imaging study with a 99mTc-labeled agent for preoperative risk assessment. Data were reconstructed using filtered backprojection for conventional SPECT and ordered-subset conjugate gradient minimization for IQ•SPECT. End-systolic volume, end-diastolic volume, and LVEF were calculated using quantitative gated SPECT (QGS) and cardioREPO software. We compared the LVEF from gated myocardial perfusion SPECT to that from echocardiographic measurements. Results: End-diastolic volume, end-systolic volume, and LVEF as obtained from conventional SPECT, IQ•SPECT, and echocardiography showed a good to excellent correlation regardless of whether they were calculated using QGS or using cardioREPO. Although LVEF calculated using QGS significantly differed between conventional SPECT and IQ•SPECT (65.4% ± 13.8% vs. 68.4% ± 15.2%) (P = 0.0002), LVEF calculated using cardioREPO did not (69.5% ± 10.6% vs. 69.5% ± 11.0%). Likewise, although LVEF calculated using QGS significantly differed between conventional SPECT and IQ•SPECT (75.0 ± 9.6 vs. 79.5 ± 8.3) (P = 0.0005), LVEF calculated using cardioREPO did not (72.3% ± 9.0% vs. 74.3% ± 8.3%). Conclusion: In small-heart patients, the difference in LVEF between IQ•SPECT and conventional SPECT was less when calculated using cardioREPO than when calculated using QGS. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Angelis, L; Landry, G; Dedes, G
Purpose: Proton CT (pCT) is a promising imaging modality for reducing range uncertainty in image-guided proton therapy. Range uncertainties partially originate from X-ray CT number conversion to stopping power ratio (SPR) and are limiting the exploitation of the full potential of proton therapy. In this study we explore the concept of spatially dependent fluence modulated proton CT (FMpCT) for achieving optimal image quality in a clinical region of interest (ROI), while significantly reducing the imaging dose to the patient. Methods: The study was based on simulated ideal pCT using pencil beam (PB) scanning. A set of 250 MeV proton PBs was used to create 360 projections of a cylindrical water phantom and a head and neck cancer patient. The tomographic images were reconstructed using filtered backprojection (FBP) as well as an iterative algorithm (ITR). Different fluence modulation levels were investigated and their impact on the image was quantified in terms of SPR accuracy as well as noise within and outside selected ROIs, as a function of imaging dose. The unmodulated image served as reference. Results: Both FBP reconstruction and ITR without total variation (TV) yielded image quality in the ROIs similar to the reference images, for modulation down to 0.1 of the full proton fluence. The average dose was reduced by 75% for the water phantom and by 40% for the patient. FMpCT does not improve the noise for ITR with TV and modulation 0.1. Conclusion: This is the first work proposing and investigating FMpCT for producing optimal image quality for treatment planning and image guidance, while simultaneously reducing imaging dose. Future work will address spatial resolution effects and the impact of FMpCT on the quality of proton treatment plans for a prototype pCT scanner capable of list mode data acquisition. Acknowledgement: DFG - Munich-Centre for Advanced Photonics (MAP)
TH-AB-209-07: High Resolution X-Ray-Induced Acoustic Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiang, L; Tang, S; Ahmad, M
Purpose: X-ray radiographic absorption imaging is an invaluable tool in medical diagnostics, biology and materials science. However, the use of conventional CT is limited by two factors: the detection sensitivity to weakly absorbing material and the radiation dose from CT scanning. The purpose of this study is to explore X-ray induced acoustic computed tomography (XACT), a new imaging modality, which combines X-ray absorption contrast and high ultrasonic resolution to address these challenges. Methods: First, theoretical models were built to analyze the XACT sensitivity to X-ray absorption and calculate the minimal radiation dose in XACT imaging. Then, an XACT system comprised of an ultrashort X-ray pulse, a low-noise ultrasound detector and a signal acquisition system was built to evaluate the X-ray induced acoustic signal generation. A piece of chicken bone and a phantom with two golden fiducial markers were exposed to a 270 kVp X-ray source with 60 ns exposure time, and the X-ray induced acoustic signal was received by a 2.25 MHz ultrasound transducer at 200 positions. XACT images were reconstructed by a filtered back-projection algorithm. Results: The theoretical analysis shows that X-ray induced acoustic signals have 100% relative sensitivity to X-ray absorption, but not to X-ray scattering. Applying this innovative technology to breast imaging, we can reduce radiation dose by a factor of 50 compared with the newly FDA-approved breast CT. The reconstructed images of the chicken bone and golden fiducial marker phantom reveal that the spatial resolution of the built XACT system is 350 µm. Conclusion: In XACT, the imaging sensitivity to X-ray absorption is improved and the imaging dose is dramatically reduced by using ultrashort pulsed X-rays. Taking advantage of the high ultrasonic resolution, we can also perform 3D imaging with a single X-ray pulse. This new modality has the potential to revolutionize x-ray imaging applications in medicine and biology.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
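Schematically, a joint registration-reconstruction objective of the kind described above can be written as follows (the notation is our sketch, not a quotation of the paper's exact functional):

    \[
    (\hat{\mu},\hat{\lambda}) \;=\; \operatorname*{arg\,max}_{\mu,\,\lambda}\;
    L(\mu;\,y)\;-\;\beta_R\,\lVert \Psi_R\,\mu \rVert_p^p
    \;-\;\beta_P\,\lVert \Psi_P\bigl(\mu - T(\lambda)\,\mu_{\mathrm{prior}}\bigr)\rVert_p^p
    \]

where L(mu; y) is the log-likelihood of the measurements y, T(lambda) is the B-spline free-form deformation with parameters lambda applied to the prior image mu_prior, Psi_R and Psi_P are sparsifying transforms, and beta_R, beta_P balance data fit, image roughness, and fidelity to the deformed prior. Alternating maximization over mu and lambda yields the interleaved registration and reconstruction updates described above.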
DEMONSTRATION BULLETIN: COLLOID POLISHING FILTER METHOD - FILTER FLOW TECHNOLOGY, INC.
The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was tested as a transportable, trailer mounted, system that uses sorption and chemical complexing phenomena to remove heavy metals and nontritium radionuclides from water. Contaminated waters can be pro...
Fout, G. Shay; Cashdollar, Jennifer L.; Varughese, Eunice A.; Parshionikar, Sandhya U.; Grimm, Ann C.
2015-01-01
EPA Method 1615 was developed with a goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passage of a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major method limitation is the tendency for the filters to clog before meeting the sample volume requirement. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem with 10% of the samples with one of the filter types compared to 6% with the other filter type. Clogging tends to increase with turbidity, but cannot be predicted based on turbidity measurements only. From a cost standpoint one of the filter options is preferable over the other, but the water quality and experience with the water system to be sampled should be taken into consideration in making filter selections. PMID:25867928
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most proper latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting faults of the gearbox. The filter parameters are estimated using the LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of the simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox by gear profile error detection in an industrial test bed. For evaluation purposes, the proposed method is compared with the previous parametric TAR/AR-based filters, in which the parametric model residual is considered as the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the new proposed fault detection method.
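As a reference point for the AR-based baselines mentioned above, the sketch below (Python with NumPy/SciPy) fits an AR model by the Yule-Walker equations and uses the one-step prediction residual as the detection signal; the model order is an illustrative assumption:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def ar_residual(signal, order=20):
        """AR-based detection filter: fit AR coefficients by Yule-Walker,
        then return the one-step prediction residual, whose impulsive
        content flags localized tooth faults."""
        x = signal - signal.mean()
        # Biased autocorrelation estimate r[0..order].
        r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
        a = solve_toeplitz(r[:order], r[1:order + 1])   # AR coefficients
        # One-step prediction: sum_k a[k] * x[n-k].
        pred = np.convolve(x, np.r_[0.0, a], mode='full')[:len(x)]
        return x - pred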
Interior reconstruction method based on rotation-translation scanning model.
Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian
2014-01-01
In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or that we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstruction region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be respectively obtained from the two scanning passes without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.
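The POCS stage can be summarized by the generic alternating-projection skeleton below (Python/NumPy). The data-consistency projection is problem-specific (for DBP-based interior reconstruction it involves inverting a finite Hilbert transform along the filtering lines), so it is left as a user-supplied callable here; the bound values are illustrative:

    import numpy as np

    def pocs_interior(x0, project_data, known_mask, known_vals,
                      lo=0.0, hi=2.0, n_iter=50):
        """Generic POCS loop: cyclically project onto convex constraint sets.

        project_data: callable enforcing consistency with the measured
        (DBP) data; problem-specific and omitted here.
        known_mask/known_vals: the small region scanned outside the object,
        where attenuation values are known.
        """
        x = x0.copy()
        for _ in range(n_iter):
            x = project_data(x)            # consistency with measurements
            x[known_mask] = known_vals     # enforce the known subregion
            np.clip(x, lo, hi, out=x)      # physical bounds on values
        return x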
SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.
Youn, Seongwook
2014-01-01
Email is one of common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded to many other systems for better performance.
Radiation Hard Bandpass Filters for Mid- to Far-IR Planetary Instruments
NASA Technical Reports Server (NTRS)
Brown, Ari D.; Aslam, Shahid; Chervenack, James A.; Huang, Wei-Chung; Merrell, Willie C.; Quijada, Manuel; Steptoe-Jackson, Rosalind; Wollack, Edward J.
2012-01-01
We present a novel method to fabricate compact metal mesh bandpass filters for use in mid- to far-infrared planetary instruments operating in the 20-600 micron wavelength spectral regime. Our target applications include thermal mapping instruments on ESA's JUICE as well as on a de-scoped JEO. These filters are novel because they are compact, customizable, free-standing copper mesh resonant bandpass filters with micromachined silicon support frames. The filters are well suited for thermal mapping missions to the outer planets and their moons because the filter material is radiation hard. Furthermore, the silicon support frame allows for effective hybridization with sensors made on silicon substrates. Using a Fourier Transform Spectrometer, we have demonstrated high transmittance within the passband as well as good out-of-band rejection [1]. In addition, we have developed a unique method of filter stacking in order to increase the bandwidth and sharpen the roll-off of the filters. This method allows one to reliably control the spacing between filters to within 2 microns. Furthermore, our method allows for reliable control over the relative position and orientation between the shared faces of the filters.
SITE TECHNOLOGY CAPSULE: FILTER FLOW TECHNOLOGY, INC. - COLLOID POLISHING FILTER METHOD
The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was demonstrated at the U.S. Department of Energy's (DOE) Rocky Flats Plant (RFP) as part of the U.S. Environmental Protection Agency's (EPA) Superfund and Innovative Technology Evaluation (SITE) program. ...
Ruhlandt, A; Töpperwien, M; Krenkel, M; Mokso, R; Salditt, T
2017-07-26
We present an approach towards four dimensional (4d) movies of materials, showing dynamic processes within the entire 3d structure. The method is based on tomographic reconstruction on dynamically curved paths using a motion model estimated by optical flow techniques, considerably reducing the typical motion artefacts of dynamic tomography. At the same time we exploit x-ray phase contrast based on free propagation to enhance the signal from micron scale structure recorded with illumination times down to a millisecond (ms). The concept is demonstrated by observing the burning process of a match stick in 4d, using high speed synchrotron phase contrast x-ray tomography recordings. The resulting movies reveal the structural changes of the wood cells during the combustion.
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Jeong, Jinsoo
2011-01-01
This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in beamforming structure. The inverse kepstrum method uses an innovations-whitened form from one acoustic path transfer function between a reference microphone sensor and a noise source so that the rear-end reference signal will then be a whitened sequence to a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter with the use of a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part from the ratio of overall combined transfer functions. The test results have shown that the adaptive FIR filter is more effective in beamforming structure than an adaptive noise cancelling (ANC) structure in terms of signal distortion in the desired signal and noise reduction in noise with nonminimum phase components. In addition, the inverse kepstrum method shows almost the same convergence level in estimate of noise statistics with the use of a smaller amount of adaptive FIR filter weights than the kepstrum method, hence it could provide better computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in beamforming structure has shown less signal distortion in the desired signal than the front-end kepstrum method and the front-end inverse kepstrum method in beamforming structure. PMID:22163987
Dense grid sibling frames with linear phase filters
NASA Astrophysics Data System (ADS)
Abdelnour, Farras
2013-09-01
We introduce new 5-band dyadic sibling frames with a dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR and have linear phase, and the resulting wavelets have vanishing moments. The proposed spectral factorization design leads to smooth limit functions with higher approximation order and computationally stable filterbanks.
Three-stage Fabry-Perot liquid crystal tunable filter with extended spectral range.
Zheng, Zhenrong; Yang, Guowei; Li, Haifeng; Liu, Xu
2011-01-31
A method to extend the spectral range of a tunable optical filter is proposed in this paper. Two identical tunable Fabry-Perot filters and an additional tunable filter with a different free spectral range are cascaded to extend the spectral range and reduce sidelobes. Over 400 nm of free spectral range and a 4 nm full width at half maximum were achieved. The design procedure and simulations are described in detail. An experimental three-stage tunable Fabry-Perot filter covering visible and infrared spectra is demonstrated. The experimental results and the theoretical analysis are presented in detail to verify the method. The results reveal that a compact Fabry-Perot filter with an extended tunable spectral range is easily attainable by this method.
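The Vernier principle behind such a cascade can be illustrated numerically. The sketch below is a simplified model with hypothetical finesse and free-spectral-range values (not the published design): it multiplies ideal Airy transmissions, so pass bands survive only where the two transmission combs coincide, extending the effective free spectral range.

```python
import numpy as np

def airy_transmission(wavelength_nm, fsr_nm, center_nm=1550.0, finesse=30.0):
    """Ideal lossless Fabry-Perot transmission (Airy function).

    The round-trip phase is written in terms of the free spectral
    range (FSR) so that transmission peaks repeat every `fsr_nm`.
    """
    phase = np.pi * (wavelength_nm - center_nm) / fsr_nm
    coeff = (2.0 * finesse / np.pi) ** 2      # coefficient of finesse
    return 1.0 / (1.0 + coeff * np.sin(phase) ** 2)

wl = np.linspace(1350.0, 1750.0, 40001)
# Two identical etalons plus one with a different FSR: only where the
# two combs coincide does the cascade transmit, so the effective FSR
# grows (Vernier effect) and sidelobes are suppressed.
t_pair = airy_transmission(wl, fsr_nm=20.0) ** 2
t_cascade = t_pair * airy_transmission(wl, fsr_nm=23.0)
peaks = wl[t_cascade > 0.5]
print(f"pass bands above 50% span {peaks.min():.1f}-{peaks.max():.1f} nm")
```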
An efficient incremental learning mechanism for tracking concept drift in spam filtering
Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi
2017-01-01
This research presents an in-depth analysis of the characteristics of spam and proposes an efficient spam filtering method with the ability to adapt to a dynamic environment. We focus on the analysis of email headers and apply decision-tree data mining to look for association rules about spam. We then propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) it checks only the header sections of emails, unlike current spam filtering methods that must fully analyze an email's content, while the filtering accuracy is expected to be enhanced; (2) to address the problem of concept drift, we propose a window-based technique that estimates the degree of concept drift for each unknown email, helping the filter recognize the occurrence of spam (a simplified sketch of this idea follows below); and (3) we propose an incremental learning mechanism that strengthens the filter's ability to adapt to a dynamic environment. PMID:28182691
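The window-based drift idea can be caricatured in a few lines of code. The sketch below is a toy stand-in, not the authors' algorithm: simple header rules play the role of the mined association rules, and a sliding window of prediction/label agreement decides when to relearn; all names and thresholds are hypothetical.

```python
from collections import deque

class WindowedSpamFilter:
    """Toy header-rule spam filter with a sliding-window drift check.

    `rules` maps a header field to a set of suspicious values; the
    window tracks recent prediction/label agreement so that a drop in
    accuracy can trigger an incremental relearning step, loosely
    following the window-based drift idea described in the abstract.
    """
    def __init__(self, rules, window=100, drift_threshold=0.7):
        self.rules = rules
        self.history = deque(maxlen=window)
        self.drift_threshold = drift_threshold

    def predict(self, headers):
        hits = sum(headers.get(f) in bad for f, bad in self.rules.items())
        return hits >= 1  # True means "spam"

    def update(self, headers, is_spam):
        self.history.append(self.predict(headers) == is_spam)
        accuracy = sum(self.history) / len(self.history)
        if accuracy < self.drift_threshold and is_spam:
            # Incremental learning step: absorb newly observed spam
            # header values into the rule sets (a crude stand-in for
            # re-mining the decision-tree association rules).
            for field, value in headers.items():
                self.rules.setdefault(field, set()).add(value)
```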
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedantham, S; Shrestha, S; Shi, L
Purpose: To optimize the cesium iodide (CsI:Tl) scintillator thickness in a complementary metal-oxide semiconductor (CMOS)-based detector for use in dedicated cone-beam breast CT. Methods: The imaging task considered was the detection of a microcalcification cluster comprising six 220µm diameter calcium carbonate spheres, arranged in the form of a regular pentagon with 2 mm spacing on its sides and a central calcification, similar to that in the ACR-recommended mammography accreditation phantom, at a mean glandular dose of 4.5 mGy. Generalized parallel-cascades based linear systems analysis was used to determine Fourier-domain image quality metrics in reconstructed object space, from which the detectability index inclusive of anatomical noise was determined for a non-prewhitening numerical observer. For 300 projections over 2π, magnification-associated focal-spot blur, Monte Carlo derived x-ray scatter, K-fluorescent emission and reabsorption within CsI:Tl, CsI:Tl quantum efficiency and optical blur, fiberoptic plate transmission efficiency and blur, CMOS quantum efficiency, pixel aperture function and additive noise, and filtered back-projection to isotropic 105µm voxel pitch with bilinear interpolation were modeled. The imaging geometry of a clinical prototype breast CT system and a 60 kV Cu/Al-filtered x-ray spectrum from a 0.3 mm focal spot incident on a 14 cm diameter semi-ellipsoidal breast were used to determine the detectability index for 300-600 µm thick (75µm increments) CsI:Tl. The CsI:Tl thickness that maximized the detectability index was considered optimal. Results: The limiting resolution (10% modulation transfer function, MTF) progressively decreased with increasing CsI:Tl thickness. The zero-frequency detective quantum efficiency, DQE(0), in projection space increased with increasing CsI:Tl thickness. The maximum detectability index was achieved with a 525µm thick CsI:Tl scintillator. Reduced MTF at mid-to-high frequencies lowered the detectability index for 600µm thick CsI:Tl below that of 525µm CsI:Tl. Conclusion: For the x-ray spectrum and imaging conditions considered, a 525µm thick CsI:Tl scintillator integrated with the CMOS detector is optimal for detecting the microcalcification cluster. Funding support: Supported in part by NIH R01 CA195512. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or the NCI. Disclosures: SV, GV and AK - Research collaboration, Koning Corp., West Henrietta, NY.
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
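For readers unfamiliar with the filter being tuned, the following minimal sketch implements a standard linear Kalman filter on a toy one-axis attitude example; the process-noise covariance Q is exactly the kind of parameter a PNCE-like procedure would estimate. The matrices and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter; returns the state estimates.

    Q is the process-noise covariance whose tuning the abstract's
    PNCE addresses: too small and the filter ignores measurements,
    too large and the estimates track measurement noise.
    """
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in zs:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy 1D attitude example: state = [angle, rate]; noisy angle
# measurements stand in for magnetometer-derived observations.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)          # the parameter a PNCE-like method would tune
R = np.array([[0.05]])
truth = np.sin(0.5 * np.arange(0, 10, dt))
zs = truth[:, None] + np.random.normal(0, 0.2, (len(truth), 1))
est = kalman_filter(zs, F, H, Q, R, np.zeros(2), np.eye(2))
```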
Application of optical broadband monitoring to quasi-rugate filters by ion-beam sputtering
NASA Astrophysics Data System (ADS)
Lappschies, Marc; Görtz, Björn; Ristau, Detlev
2006-03-01
Methods for the manufacture of rugate filters by the ion-beam-sputtering process are presented. The first approach gives an example of a digitized version of a continuous-layer notch filter. This method allows the comparison of the basic theory of interference coatings containing thin layers with practical results. For the other methods, a movable zone target is employed to fabricate graded and gradual rugate filters. The examples demonstrate the potential of broadband optical monitoring in conjunction with the ion-beam-sputtering process. First characterization results indicate that these types of filter may exhibit higher laser-induced damage-threshold values than those of classical filters.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
1999-01-01
An improved method for in situ microbial filter bioremediation that increases the operational longevity of an in situ microbial filter emplaced in an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness that has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
Adaptive marginal median filter for colour images.
Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor
2011-01-01
This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
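A minimal sketch of this kind of adaptive marginal-median filtering is given below, assuming a square window and a fixed group size; the published filter's selection rule differs in detail. For each window it finds the vector median (the pixel minimising the summed L1 distance to all others), keeps the pixels closest to it, and outputs their channel-wise median.

```python
import numpy as np

def adaptive_marginal_median(img, radius=1, group_size=5):
    """Sketch of an adaptive marginal-median colour filter.

    For each window, the vector median is located, the `group_size`
    pixels nearest to it are retained, and the output is the
    channel-wise (marginal) median of that group.
    """
    h, w, c = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].reshape(-1, c)
            dists = np.abs(win[:, None, :] - win[None, :, :]).sum(axis=(1, 2))
            vm = win[np.argmin(dists)]                   # vector median
            nearest = np.argsort(np.abs(win - vm).sum(axis=1))[:group_size]
            out[i, j] = np.median(win[nearest], axis=0)  # marginal median
    return out
```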
Comparative Study of Speckle Filtering Methods in PolSAR Radar Images
NASA Astrophysics Data System (ADS)
Boutarfa, S.; Bouchemakh, L.; Smara, Y.
2015-04-01
Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. These developed filters are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; the improved sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the sum of squared coefficients (SSC) technique for improving the wavelet coefficients; and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT, in which one filter can boost the results of the other. The originality of our work lies in the application of these methods to several types of images (amplitude, intensity and complex, from satellite or airborne radar) and in the optimization of wavelet filtering by adding a parameter to the threshold calculation. This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
Backus, Sterling J [Erie, CO; Kapteyn, Henry C [Boulder, CO
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
NASA Astrophysics Data System (ADS)
Guan, Huifeng
In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, one or more specific problems prevent each of them from being effectively or efficiently employed. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For these modalities, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, overcoming the disadvantages of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we described modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate GPU computing, demonstrating the capability of producing high-quality reconstructed volumetric images at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H
2017-02-20
Objective: To evaluate the significance of bacteria detection with the filter paper method in the diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcers conforming to the study criteria were hospitalized in Liyuan Hospital, Affiliated to Tongji Medical College of Huazhong University of Science and Technology, from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and the general condition of patients with wounds of different Texas grades was compared. Exudate and tissue of wounds were obtained, and the filter paper method and biopsy method were used to detect wound bacteria. The filter paper method was regarded as the evaluation method, and the biopsy method as the control method. The relevance, difference, and consistency of the detection results of the two methods were tested. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were calculated. A receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of the filter paper method in bacteria detection of the 18 patients to predict the detection performance of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients tested positive for bacteria by the biopsy method, the correlation between the bacteria number detected by the biopsy method and that by the filter paper method was analyzed with Pearson correlation analysis. Results: (1) There were no statistically significant differences among patients with wounds of Texas grades 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, and urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences in white blood cell count and alanine aminotransferase (with F values 4.688 and 6.833, respectively, P<0.05 or P<0.01). (2) According to the results of the biopsy method, 6 patients were tested negative for bacteria and 12 patients were tested positive, among which 10 patients had bacterial numbers above 1×10⁵/g and 2 patients below 1×10⁵/g. According to the results of the filter paper method, 8 patients were tested negative for bacteria and 10 patients were tested positive, among which 7 patients had bacterial numbers above 1×10⁵/g and 3 patients below 1×10⁵/g. Seven patients tested positive for bacteria by both the biopsy method and the filter paper method, 8 patients tested negative by both methods, and 3 patients tested positive by the biopsy method but negative by the filter paper method. No patient tested negative for bacteria by the biopsy method tested positive by the filter paper method. There was a directional association between the detection results of the two methods (P=0.004), i.e., if the result of the biopsy method was positive, the result of the filter paper method could also be positive. There was no significant difference between the detection results of the two methods (P=0.250). The consistency between the detection results of the two methods was moderate (Kappa=0.68, P=0.002). (3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve for bacteria detection by the filter paper method in the 18 patients was 0.919 (95% confidence interval 0-1.000, P=0.030). (4) Thirteen strains of bacteria were detected by the biopsy method: 5 strains of Acinetobacter baumannii, 5 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of Enterococcus avium. Eleven strains of bacteria were detected by the filter paper method: 5 strains of Acinetobacter baumannii, 3 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of Enterococcus avium. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in detecting the other 4 bacteria were all 100%. The consistency between the filter paper method and the biopsy method in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was moderate (Kappa=0.68, P<0.05). (5) There was no obvious correlation between the bacteria number of wounds detected by the filter paper method and that by the biopsy method (r=0.257, P=0.419). There was obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grades 1 and 2 (r=0.999, P=0.001), but no obvious correlation in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection result of the filter paper method is in accordance with that of the biopsy method in determining bacterial infection, and it is of great value in the diagnosis of local infection of diabetic foot wounds.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... "Hydrogen Peroxide Filter Extraction." In this method, total suspended particulate matter (TSP) is collected on glass fiber filters according to 40 CFR Appendix G to Part 50, EPA Reference Method for the Determination of Lead in Suspended Particulate Matter Collected From Ambient Air. The filter samples are...
NASA Astrophysics Data System (ADS)
Gonzalez, Pablo J.
2017-04-01
Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the pre-described or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter needs essentially two parameters: the size of the filter window and a parameter indicating the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter, based on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop automatic filtering methods suitable for automatic processing while maximizing filtered phase quality. Here, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by iterating recursively with variable (cascaded) kernel sizes and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (100-200 m) and temporal baselines of 70 and 190 days, over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyiragongo-Nyamulagira). The differential phase of these examples shows intense localized volcano deformation as well as vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high and effectively suppressing phase noise in smoothly varying regions. Finally, this method has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P., (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, No. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications, Geophysical Research Letters, vol. 25, No. 21, 4035-4038, doi:10.1029/1998GL900033.
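For reference, the classical Goldstein filter that this method builds on can be sketched compactly. The version below omits patch overlap, spectrum smoothing, and the paper's recursive variable-kernel and coherence refinements; the patch size and alpha are illustrative.

```python
import numpy as np

def goldstein_filter(ifg, alpha=0.5, patch=32):
    """Bare-bones Goldstein patch filter for a complex interferogram.

    Each patch's spectrum is weighted by its own amplitude spectrum
    raised to `alpha` (0 = no filtering, stronger as alpha grows);
    overlapping windows and spectral smoothing are omitted here.
    """
    out = np.zeros_like(ifg, dtype=complex)
    for i in range(0, ifg.shape[0] - patch + 1, patch):
        for j in range(0, ifg.shape[1] - patch + 1, patch):
            tile = ifg[i:i + patch, j:j + patch]
            spec = np.fft.fft2(tile)
            weight = np.abs(spec) ** alpha
            out[i:i + patch, j:j + patch] = np.fft.ifft2(spec * weight)
    return out

# Usage: ifg is a complex array, e.g. coherence * exp(1j * phase);
# the filtered phase is np.angle(goldstein_filter(ifg)).
```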
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2013-09-10
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2010-11-16
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne
2017-11-01
Effective study identification is essential for conducting health research, developing clinical guidance and health policy, and supporting health-care decision-making. Methodological search filters (combinations of search terms designed to capture a specific study design) can assist in achieving this. This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters, and how that information could be better provided. Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, owing to the serious illness of the principal investigator. The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. Interviews and questionnaire responses showed that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was the resource most frequently mentioned by both groups as consulted when selecting a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were mentioned most frequently. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important.
We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496).
Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong
2004-09-01
With the flood of pornographic information on the Internet, keeping people away from offensive content has become one of the most important research areas in network information security. Applications that block or filter such information are in use; their approaches can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content-based filtering will play an increasingly important role in filtering systems. Keyword matching is a content-based method widely used in harmful-text filtering. Experiments to evaluate the recall and precision of the method showed that its precision is not satisfactory, though its recall is rather high. Based on these results, a new pornographic-text filtering model based on reconfirmation is put forward. Experiments showed that the model is practical, loses less recall than single keyword matching, and has higher precision.
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, as an AUV, has become a popular application in intelligent education, civil, and military domains. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimized suboptimal interval Kalman filter. The suboptimal scheme replaces the interval matrix inverse with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimate of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard filter method.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach to determining the transition band frequencies and the optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step in estimating the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated frequencies. These pass-band and stop-band frequencies are then used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed method were superior to those designed using the existing methods for all analyzed cases.
Highlights:
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters of different orders.
• Transition band frequencies were determined with an innovative method based on the discrete Fourier transform and the short-time Fourier transform.
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order.
• A new method of determining the FIR filter order for processing pressure traces was proposed.
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces.
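As a hedged illustration of the workflow's end product, the sketch below designs an equiripple low-pass FIR filter with SciPy's Parks-McClellan routine from hypothetical band edges (standing in for the DFT/STFT-derived transition band) and applies it with zero phase; the paper's order-selection integral is not reproduced here, and all numbers are assumptions.

```python
import numpy as np
from scipy.signal import remez, filtfilt

# Hypothetical band edges (Hz) such as an DFT/STFT analysis might give:
fs = 90000.0                       # assumed pressure-trace sampling rate
f_pass, f_stop = 3000.0, 4500.0    # pass-band / stop-band edges

# Equiripple low-pass FIR via the Parks-McClellan (remez) algorithm.
numtaps = 151    # the order the paper selects by minimising an integral
                 # of signal-to-noise differences; fixed by hand here.
taps = remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

# Zero-phase filtering avoids shifting the crank-angle axis of the
# pressure trace, which matters for rate-of-heat-release analysis.
t = np.arange(0, 0.05, 1 / fs)
pressure = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)
smoothed = filtfilt(taps, [1.0], pressure)
```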
Gibbons, C D; Rodríguez, R A; Tallon, L; Sobsey, M D
2010-08-01
To evaluate the electropositive alumina nanofibre (NanoCeram) cartridge filter as a primary concentration method for recovering adenovirus, norovirus and male-specific coliphages from natural seawater. Viruses were concentrated from 40 l of natural seawater using a NanoCeram cartridge filter and eluted from the filter either by soaking the filter in eluent or by recirculating the eluent continuously through the filter using a peristaltic pump. The elution solution consisted of 3% beef extract and 0.1 mol l⁻¹ glycine. The method using a peristaltic pump was more effective in removing the viruses from the filter. High recoveries of norovirus and male-specific coliphages (>96%), but not adenovirus (<3%), were observed from seawater. High adsorption to the filter was observed for adenovirus and male-specific coliphages (>98%). The adsorption and recovery of adenovirus and male-specific coliphages were also determined for fresh finished water and source water. The NanoCeram cartridge filter was an effective primary concentration method for norovirus and male-specific coliphages from natural seawater, but not for adenovirus, in spite of the high adsorption of adenovirus to the filter. This study demonstrates that the NanoCeram cartridge filter is an effective primary method for concentrating noroviruses and male-specific coliphages from seawater, thereby simplifying collection and processing of water samples for virus recovery.
Method for reducing pressure drop through filters, and filter exhibiting reduced pressure drop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alexander; Wong, Victor
Methods for generating and applying coatings to filters with porous material in order to reduce large pressure-drop increases as material accumulates in a filter, as well as the filter exhibiting reduced and/or more uniform pressure drop. The filter can be a diesel particulate trap for removing particulate matter such as soot from the exhaust of a diesel engine. Porous material such as ash is loaded on the surface of the substrate or filter walls, such as by coating, depositing, distributing or layering the porous material along the channel walls of the filter in an amount effective for minimizing or preventing depth filtration during use of the filter. Efficient filtration at acceptable flow rates is achieved.
NASA Astrophysics Data System (ADS)
LI, B.; Ghosh, A.
2016-12-01
The 2015 Mw 7.8 Gorkha earthquake provides a good opportunity to study the tectonics and earthquake hazards of the Himalayas, one of the most seismically active plate boundaries. Details of the seismicity patterns and associated structures in the Himalayas are poorly understood, mainly due to limited instrumentation. Here, we apply a back-projection method to study the mainshock rupture and the following aftershock sequence using four large-aperture global seismic arrays. All the arrays show eastward rupture propagation of about 130 km and reveal a similar evolution of seismic energy radiation, with a strong high-frequency energy burst about 50 km north of Kathmandu. Each single array, however, is typically limited by a large azimuthal gap, low resolution, and artifacts due to unmodeled velocity structures. Therefore, we use a self-consistent empirical calibration method to combine the four arrays to image the Gorkha event. This greatly improves the resolution, tracks the rupture better, and reveals details that cannot be resolved by any individual array. In addition, we use the same arrays at teleseismic distances and apply a back-projection technique to detect and locate the aftershocks immediately following the Gorkha earthquake. We detect about 2.5 times as many aftershocks as the Advanced National Seismic System comprehensive earthquake catalog records during the 19 days following the mainshock. The aftershocks detected by the arrays show an east-west trend in general, with the majority located in the eastern part of the rupture patch and surrounding the rupture zone of the largest Mw 7.3 aftershock. The overall spatiotemporal aftershock pattern agrees well with the global catalog, with our catalog showing more details than the standard global catalog. The improved aftershock catalog enables us to better study the aftershock dynamics and stress evolution in this region. Moreover, rapid and better imaging of aftershock distributions may aid rapid response and hazard assessment after destructive large earthquakes. Existing global seismic arrays, when properly calibrated and used in combination, provide a high-resolution image of the rupture of large earthquakes and the spatiotemporal distribution of aftershocks.
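The core of teleseismic back-projection is delay-and-sum stacking. The sketch below shows the idea for a single candidate source point under simple assumptions (precomputed travel times shorter than the trace length, uniform station weighting); the array calibration terms this study emphasizes are omitted, and all names are hypothetical.

```python
import numpy as np

def back_project(traces, dt, travel_times, stack_window):
    """Delay-and-sum back-projection for one candidate source point.

    `traces` is (n_stations, n_samples); `travel_times` holds the
    predicted source-to-station times in seconds. Shifting each trace
    by its travel time and stacking gives large amplitude when the
    candidate point matches the true radiator; scanning a grid of
    candidate points produces the radiation image.
    """
    n_sta, n_samp = traces.shape
    stack = np.zeros(n_samp)
    for trace, tt in zip(traces, travel_times):
        shift = int(round(tt / dt))   # assumed smaller than n_samp
        stack[:n_samp - shift] += trace[shift:]
    stack /= n_sta
    # Short-term average power of the stack over the given window:
    power = np.convolve(stack ** 2,
                        np.ones(stack_window) / stack_window, mode="same")
    return power
```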
Discrimination of Nosiheptide Sources with Plasmonic Filters.
Wang, Delong; Ni, Haibin; Wang, Zhongqiang; Liu, Bing; Chen, Hongyuan; Gu, Zhongze; Zhao, Xiangwei
2017-04-19
Bacteria identification plays a vital role in clinical diagnosis, the food industry, and environmental monitoring, fields in which point-of-care detection methods are in great demand. In this paper, in order to discriminate the source of a nosiheptide product, a plasmonic filter was fabricated to filter, capture and identify Streptomycete spores with surface-enhanced Raman scattering (SERS). Since the plasmonic filter was derived from a self-assembled photonic crystal coated with silver, the plasmonic "hot spots" on the filter surface were distributed evenly at a fairly good density, and the SERS enhancement factor was 7.49 × 10⁷. With this filter, stain- and PCR-free detection was realized with only 5 μL of sample solution and 5 min, in a "filtration and measurement" manner. Compared with the traditional Gram stain method and a silver-plated nylon filter membrane, the plasmonic filter showed good sensitivity and efficiency in discriminating nosiheptide prepared by chemical and biological methods. It is anticipated that this simple SERS detection method with a plasmonic filter has promising potential in food safety, environmental, or clinical applications.
Switching non-local vector median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2016-04-01
This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. Taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest, not in the input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter we previously proposed for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal through proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
Glass Wool Filters for Concentrating Waterborne Viruses and Agricultural Zoonotic Pathogens
Millen, Hana T.; Gonnering, Jordan C.; Berg, Ryan K.; Spencer, Susan K.; Jokela, William E.; Pearce, John M.; Borchardt, Jackson S.; Borchardt, Mark A.
2012-01-01
The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group, for example US Environmental Protection Agency Method 1623 for Giardia and Cryptosporidium [1], which means multiple methods are required if the sampling program is targeting more than one pathogen group. Another drawback of current methods is that the equipment can be complicated and expensive, for example the VIRADEL method with the 1MDS cartridge filter for concentrating viruses [2]. In this article we describe how to construct glass wool filters for concentrating waterborne pathogens. After filter elution, the concentrate is amenable to a second concentration step, such as centrifugation, followed by pathogen detection and enumeration by cultural or molecular methods. The filters have several advantages. Construction is easy, and the filters can be built to any size to meet specific sampling requirements. The filter parts are inexpensive, making it possible to collect a large number of samples without severely impacting a project budget. Large sample volumes (hundreds to thousands of liters) can be concentrated, depending on the rate of clogging from sample turbidity. The filters are highly portable, and with minimal equipment, such as a pump and flow meter, they can be deployed in the field for sampling finished drinking water, surface water, groundwater, and agricultural runoff. Lastly, glass wool filtration is effective for concentrating a variety of pathogen types, so only one method is necessary. Here we report on filter effectiveness in concentrating waterborne human enterovirus, Salmonella enterica, Cryptosporidium parvum, and avian influenza virus. PMID:22415031
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time-response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
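A minimal sketch of spectral linear interpolation, as commonly implemented, is shown below: FFT bins around the vibration frequency and its harmonics are replaced by magnitudes interpolated linearly from the band edges, keeping the original phase, before the signal is rebuilt by inverse FFT. The frequencies and bandwidths are illustrative assumptions, not the study's settings.

```python
import numpy as np

def interpolate_harmonics(emg, fs, vib_freq, n_harmonics=5, half_width=1.0):
    """Delete vibration spikes from an sEMG spectrum by interpolation.

    Bins within `half_width` Hz of each harmonic of `vib_freq` are
    replaced by a straight line between the spectrum magnitudes at
    the band edges; the signal is then rebuilt by inverse FFT.
    """
    spec = np.fft.rfft(emg)
    freqs = np.fft.rfftfreq(len(emg), 1 / fs)
    for k in range(1, n_harmonics + 1):
        idx = np.where(np.abs(freqs - k * vib_freq) <= half_width)[0]
        if idx.size == 0 or idx[0] == 0 or idx[-1] >= len(spec) - 1:
            continue  # harmonic outside the usable band
        lo, hi = idx[0] - 1, idx[-1] + 1
        # Interpolate the magnitude across the notch, keep the phase.
        mags = np.interp(idx, [lo, hi], [np.abs(spec[lo]), np.abs(spec[hi])])
        spec[idx] = mags * np.exp(1j * np.angle(spec[idx]))
    return np.fft.irfft(spec, n=len(emg))
```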
[Review of visual display system in flight simulator].
Xie, Guang-hui; Wei, Shao-ning
2003-06-01
The visual display system is a key component and plays a very important role in flight simulators and flight training devices. The development history of visual display systems is reviewed, and the principles and characteristics of several visual display systems, including collimated display systems and back-projected collimated display systems, are described. Future directions for visual display systems are analyzed.
O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B
2010-01-01
An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sillanpaa, Jussi; Chang Jenghwa; Mageras, Gikas
2006-09-15
We report on the capabilities of a low-dose megavoltage cone-beam computed tomography (MV CBCT) system. The high-efficiency image receptor consists of a photodiode array coupled to a scintillator composed of individual CsI crystals. The CBCT system uses the 6 MV beam from a linear accelerator. A synchronization circuit allows us to limit the exposure to one beam pulse [0.028 monitor units (MU)] per projection image. 150-500 images (4.2-13.9 MU total) are collected during a one-minute scan and reconstructed using a filtered backprojection algorithm. Anthropomorphic and contrast phantoms are imaged, and the contrast-to-noise ratio of the reconstruction is studied as a function of the number of projections and the error in the projection angles. The detector dose response is linear (R² value 0.9989). A 2% electron density difference is discernible using 460 projection images and a total exposure of 13 MU (corresponding to a maximum absorbed dose of about 12 cGy in a patient). We present the first patient images acquired with this system. Tumors in the lung are clearly visible, and skeletal anatomy is observed in sufficient detail to allow reproducible registration with the planning kV CT images. The MV CBCT system is shown to be capable of obtaining good-quality three-dimensional reconstructions at relatively low dose and to be clinically usable for improving the accuracy of radiotherapy patient positioning.
NASA Astrophysics Data System (ADS)
Patch, S. K.; Kireeff Covo, M.; Jackson, A.; Qadadha, Y. M.; Campbell, K. S.; Albright, R. A.; Bloemhard, P.; Donoghue, A. P.; Siero, C. R.; Gimpel, T. L.; Small, S. M.; Ninemire, B. F.; Johnson, M. B.; Phair, L.
2016-08-01
The potential of particle therapy due to focused dose deposition in the Bragg peak has not yet been fully realized due to inaccuracies in range verification. The purpose of this work was to correlate the Bragg peak location with target structure by overlaying the location of the Bragg peak onto a standard ultrasound image. Pulsed delivery of 50 MeV protons was accomplished by a fast chopper installed between the ion source and the cyclotron inflector. The chopper limited the train of bunches so that 2 Gy were delivered in 2 μs. The ion pulse generated thermoacoustic pulses that were detected by a cardiac ultrasound array, which also produced a grayscale ultrasound image. A filtered backprojection algorithm focused the received signal to the Bragg peak location with perfect co-registration to the ultrasound images. Data were collected in a room-temperature water bath and in a gelatin phantom with a cavity designed to mimic the intestine, in which gas pockets can displace the Bragg peak. Phantom experiments performed with the cavity both empty and filled with olive oil confirmed that displacement of the Bragg peak due to anatomical change could be detected. Thermoacoustic range measurements in the water bath agreed with Monte Carlo simulation within 1.2 mm. In the phantom, thermoacoustic range estimates and first-order range estimates from CT images agreed to within 1.5 mm.
Stickel, Jennifer R; Qi, Jinyi; Cherry, Simon R
2007-01-01
With the increasing use of in vivo imaging in mouse models of disease, there are many interesting applications that demand imaging of organs and tissues with submillimeter resolution. Though there are other contributing factors, the spatial resolution in small-animal PET is still largely determined by the detector pixel dimensions. In this work, a pair of lutetium oxyorthosilicate (LSO) arrays with 0.5-mm pixels was coupled to multichannel photomultiplier tubes and evaluated for use as high-resolution PET detectors. Flood histograms demonstrated that most crystals were clearly identifiable. Energy resolution varied from 22% to 38%. The coincidence timing resolution was 1.42-ns full width at half maximum (FWHM). The intrinsic spatial resolution was 0.68-mm FWHM as measured with a 30-gauge needle filled with (18)F. The improvement in spatial resolution in a tomographic setting is demonstrated using images of a line source phantom reconstructed with filtered backprojection and compared with images obtained from 2 dedicated small-animal PET scanners. Finally, a projection image of the mouse foot is shown to demonstrate the application of these 0.5-mm LSO detectors to a biologic task. A pair of highly pixelated LSO detectors has been constructed and characterized for use as high-spatial-resolution PET detectors. It appears that small-animal PET systems capable of a FWHM spatial resolution of 600 μm or less are feasible and should be pursued.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
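To make the weighting step concrete, a minimal 2-D sketch is given below. It is only an illustration of the idea, not the paper's cone-beam implementation: skimage's parallel-beam radon/iradon stands in for the cone-beam FBP, the scatter term is a crude synthetic offset, and the set of powers is an assumption. The weights are found by ordinary least squares against the prior image.

```python
# Sketch of prior-image-based scatter correction: basis images are FBP
# reconstructions of the projection data raised to assumed powers, and their
# weights are fitted to a low-scatter prior (toy 2-D parallel-beam setting).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

prior = rescale(shepp_logan_phantom(), 0.25)           # stand-in low-scatter prior
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(prior, theta=theta)
sino_scatter = sino + 0.05 * sino.max()                # crude additive "scatter"

powers = [1.0, 0.8, 0.6]                               # assumed basis powers
basis = [iradon(sino_scatter ** p, theta=theta) for p in powers]
B = np.stack([b.ravel() for b in basis], axis=1)
w, *_ = np.linalg.lstsq(B, prior.ravel(), rcond=None)  # minimize ||B w - prior||
corrected = (B @ w).reshape(prior.shape)               # shading-reduced image
```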
NASA Astrophysics Data System (ADS)
Xia, Huihui; Kan, Ruifeng; Xu, Zhenyu; Liu, Jianguo; He, Yabai; Yang, Chenguang; Chen, Bing; Wei, Min; Yao, Lu; Zhang, Guangle
2016-10-01
In this paper, the reconstruction of axisymmetric temperature and H2O concentration distributions in a flat flame burner is realized by tunable diode laser absorption spectroscopy (TDLAS) and a filtered back-projection (FBP) algorithm. Two H2O absorption transitions (7154.354/7154.353 cm-1 and 7467.769 cm-1) are selected as the line pair for temperature measurement, and time division multiplexing is adopted to scan these two transitions simultaneously at a 1 kHz repetition rate. Because the flow field is axisymmetric, the FBP algorithm can reconstruct its parameter distributions from a single-view parallel-beam TDLAS measurement: the same data set from the given parallel beam is reused for virtual projection angles distributed between 0° and 180°. Real-time online measurements of the projection data, i.e., the integrated absorbance of both pre-selected transitions on a CH4/air flat flame burner, are realized by Voigt on-line fitting, with fitting residuals of less than 0.2%. By analyzing the projection data from the different views with the FBP algorithm, the radial distributions of temperature and concentration can be obtained instantly. The results demonstrate that the system and the proposed FBP procedure are capable of accurate reconstruction of axisymmetric temperature and H2O concentration distributions in combustion systems and facilities.
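A minimal sketch of this single-view trick follows, assuming skimage's iradon as the FBP implementation and an analytic radial Gaussian in place of measured absorbance profiles; the one measured projection is simply replicated across the virtual angles, which is valid only under the axisymmetry assumption.

```python
# Sketch: axisymmetric FBP from a single parallel-beam view (the field
# exp(-4 r^2) stands in for a temperature or H2O concentration profile).
import numpy as np
from skimage.transform import iradon

r = np.linspace(-1.0, 1.0, 129)
# analytic line integrals of the axisymmetric field exp(-4(x^2 + y^2))
proj = np.sqrt(np.pi / 4.0) * np.exp(-4.0 * r**2)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = np.tile(proj[:, None], (1, theta.size))   # replicate the single view
recon = iradon(sinogram, theta=theta, filter_name="ramp")
# recon approximates exp(-4 r^2) on a 129x129 grid, up to discretization error
```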
Micro-CT images reconstruction and 3D visualization for small animal studying
NASA Astrophysics Data System (ADS)
Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng
2005-01-01
A small-animal x-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field-of-view of 25x50 mm2, a microfocus x-ray source, and a rotational subject holder. For accurate localization of the rotation center, the coincidence between the axis of rotation and the center of the image was calibrated with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction, since the effective cone-beam angle of the micro-CT system is 5.67°. A 200x1024x1024 matrix dataset is obtained with a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor, and the sample rotation step can be modified to trade off computational efficiency against the reconstruction region. The reconstructed matrix data are processed and visualized with the Visualization Toolkit (VTK). VTK's data parallelism is exploited in surface rendering of the reconstructed data to improve computing speed; processing a 512x512x512 matrix dataset takes about 1/20 of the serial-program time when 30 CPUs are used. The voxel size is 54x54x108 μm3. Reconstruction and 3-D visualization images of a laboratory rat ear are presented.
Instantaneous brain dynamics mapped to a continuous state space.
Billings, Jacob C W; Medda, Alessio; Shakil, Sadia; Shen, Xiaohong; Kashyap, Amrit; Chen, Shiyang; Abbas, Anzar; Zhang, Xiaodi; Nezafati, Maysam; Pan, Wen-Ju; Berman, Gordon J; Keilholz, Shella D
2017-11-15
Measures of whole-brain activity, from techniques such as functional Magnetic Resonance Imaging, provide a means to observe the brain's dynamical operations. However, interpretation of whole-brain dynamics has been stymied by the inherently high-dimensional structure of brain activity. The present research addresses this challenge through a series of scale transformations in the spectral, spatial, and relational domains. Instantaneous multispectral dynamics are first developed from input data via a wavelet filter bank. Voxel-level signals are then projected onto a representative set of spatially independent components. The correlation distance over the instantaneous wavelet-ICA state vectors is a graph that may be embedded onto a lower-dimensional space to assist the interpretation of state-space dynamics. Applying this procedure to a large sample of resting-state and task-active data (acquired through the Human Connectome Project), we segment the empirical state space into a continuum of stimulus-dependent brain states. Upon observing the local neighborhood of brain-states adopted subsequent to each stimulus, we may conclude that resting brain activity includes brain states that are, at times, similar to those adopted during tasks, but that are at other times distinct from task-active brain states. As task-active brain states often populate a local neighborhood, back-projection of segments of the dynamical state space onto the brain's surface reveals the patterns of brain activity that support many experimentally-defined states. Copyright © 2017 Elsevier Inc. All rights reserved.
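The pipeline lends itself to a compact sketch. The version below is a loose stand-in on synthetic data, not the authors' code: a single band-pass filter replaces the wavelet filter bank, scikit-learn's FastICA supplies the spatial components, and a spectral embedding of correlation-distance affinities replaces the embedding used in the paper.

```python
# Sketch of the state-space pipeline on synthetic "fMRI" data: band-limit,
# project onto spatial independent components, then embed the correlation
# distances between per-time-point state vectors into two dimensions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)
X = rng.standard_normal((1200, 500))           # time points x "voxels"
b, a = butter(4, [0.01, 0.1], btype="band")    # one band of the filter bank
Xf = filtfilt(b, a, X, axis=0)

S = FastICA(n_components=10, random_state=0).fit_transform(Xf.T)  # spatial ICs
states = Xf @ S                                # instantaneous state vectors
D = 1.0 - np.corrcoef(states)                  # correlation-distance "graph"
emb = SpectralEmbedding(n_components=2, affinity="precomputed") \
        .fit_transform(np.exp(-D))             # low-dimensional state space
```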
PET performance evaluation of MADPET4: a small animal PET insert for a 7 T MRI scanner.
Omidvari, Negar; Cabello, Jorge; Topping, Geoffrey; Schneider, Florian R; Paul, Stephan; Schwaiger, Markus; Ziegler, Sibylle I
2017-11-01
MADPET4 is the first small animal PET insert with two layers of individually read out crystals in combination with silicon photomultiplier technology. It has a novel detector arrangement, in which all crystals face the center of field of view transaxially. In this work, the PET performance of MADPET4 was evaluated and compared to other preclinical PET scanners using the NEMA NU 4 measurements, followed by imaging a mouse-size hot-rod resolution phantom and two in vivo simultaneous PET/MRI scans in a 7 T MRI scanner. The insert had a peak sensitivity of 0.49%, using an energy threshold of 350 keV. A uniform transaxial resolution was obtained up to 15 mm radial offset from the axial center, using filtered back-projection with single-slice rebinning. The measured average radial and tangential resolutions (FWHM) were 1.38 mm and 1.39 mm, respectively. The 1.2 mm rods were separable in the hot-rod phantom using an iterative image reconstruction algorithm. The scatter fraction was 7.3% and peak noise equivalent count rate was 15.5 kcps at 65.1 MBq of activity. The FDG uptake in a mouse heart and brain were visible in the two in vivo simultaneous PET/MRI scans without applying image corrections. In conclusion, the insert demonstrated a good overall performance and can be used for small animal multi-modal research applications.
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit the helical source trajectory optimally. To improve the noise characteristics or dose efficiency of the single-tilted-plane-based algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the third axis. As a result, the capability of the multi-tilted-plane-based algorithm to suppress helical and cone beam artifacts is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy jointly optimizes the suppression of helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and noise characteristics or dose efficiency in comparison with non-matched view weighting. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computation.
Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael
2010-01-01
Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research through the eyes of practical medical physicists and explaining the disconnect between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues, with the hope of demonstrating that interesting mathematical problems of general importance can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. PMID:20376330
GPU-based cone beam computed tomography.
Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian
2010-06-01
The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is the filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphic Processing Units (GPUs) make it possible to have access to high-performance computing solutions at a low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on a NVIDIA GeForce GTX 280. Our implementation results in improved reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe if differences occur between CPU and GPU-based reconstructions. By using our approach, the computation time for a 256³ volume is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
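The speedup comes from the fact that each output voxel can be computed independently, so a CUDA kernel can assign one thread per voxel. The CPU sketch below shows that pixel-driven, linearly interpolated backprojection loop in 2-D parallel-beam form; this is an illustrative simplification, since the paper's system is cone-beam and the implementation is CUDA.

```python
# Pixel-driven (unfiltered) backprojection in 2-D parallel-beam geometry.
# Every output pixel is independent of the others, which is exactly what a
# GPU parallelizes; filtering of the sinogram is omitted for brevity.
import numpy as np

def backproject(sino, thetas):
    n_det = sino.shape[0]
    c = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - c            # pixel coordinates
    recon = np.zeros((n_det, n_det))
    for col, th in zip(sino.T, thetas):              # one view at a time
        t = np.clip(x * np.cos(th) + y * np.sin(th) + c, 0, n_det - 1 - 1e-6)
        i = t.astype(int)                            # left detector bin
        f = t - i                                    # interpolation fraction
        recon += (1.0 - f) * col[i] + f * col[i + 1]
    return recon * np.pi / (2 * len(thetas))
```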
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose, few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee global convergence of the algorithm. To address this problem, in this study we introduced an l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction and developed an alternating minimization algorithm for the associated objective function, which splits the CT image reconstruction problem into a sparse-coding subproblem and an image-updating subproblem. During the image update, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion study and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yields performance similar to that of the l0-norm dictionary learning penalty and outperforms the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
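A toy version of the alternating minimization is sketched below under strong simplifications: a random matrix stands in for the CT system matrix, the dictionary is fixed to the identity (so the sparse-coding subproblem reduces to closed-form soft-thresholding), and all sizes, step sizes, and the penalty weight are assumptions rather than the paper's settings.

```python
# Toy alternating minimization for l1-regularized statistical reconstruction:
# alternate a sparse-coding step (soft-thresholding) with a gradient step on
# the data-fidelity term pulled toward the sparse approximation.
import numpy as np

rng = np.random.default_rng(0)
m, n = 600, 1024                        # measurements, image pixels (toy sizes)
A = rng.random((m, n)) / n              # stand-in system matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 60, replace=False)] = 1.0
y = A @ x_true + 1e-4 * rng.standard_normal(m)

D = np.eye(n)                           # identity "dictionary" => plain l1 prior
lam = 1e-4
L = np.linalg.norm(A, 2) ** 2 + 1.0     # Lipschitz bound for a stable step
x = np.zeros(n)
for _ in range(200):
    # sparse-coding subproblem (closed form because D = I)
    alpha = np.sign(D.T @ x) * np.maximum(np.abs(D.T @ x) - lam, 0.0)
    # image-updating subproblem: data fidelity + proximity to D @ alpha
    x -= (A.T @ (A @ x - y) + (x - D @ alpha)) / L
```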
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of two performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, with the true-positive rate increasing for the same average number of false positives per image.
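The search loop itself is simple enough to sketch. Below, a smooth toy surface stands in for the real composite feedback (building the OT-MACH filter for a given alpha, beta, gamma and scoring correlation peak height plus peak-to-sidelobe ratio); the adaptive step rule grows the step on improvement and shrinks it otherwise, and all constants are assumptions.

```python
# Adaptive-step finite-difference gradient ascent over (alpha, beta, gamma).
import numpy as np

def composite_metric(params):
    """Stand-in for the real feedback loop. A smooth toy surface with a known
    optimum at (0.3, 0.5, 0.2) replaces filter generation and correlation."""
    return -np.sum((params - np.array([0.3, 0.5, 0.2])) ** 2)

params = np.array([1.0, 1.0, 1.0])      # initial alpha, beta, gamma
step, eps = 0.5, 1e-3
best = composite_metric(params)
for _ in range(100):
    grad = np.array([(composite_metric(params + eps * e) - best) / eps
                     for e in np.eye(3)])      # finite-difference gradient
    trial = params + step * grad               # ascend the composite metric
    score = composite_metric(trial)
    if score > best:
        params, best, step = trial, score, step * 1.2   # grow on success
    else:
        step *= 0.5                                     # shrink on failure
```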
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, some noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. Performance of Kalman filter is determined by a measurement and system noise covariance which are usually called the R and Q variables in the Kalman filter algorithm. Choosing a correct R and Q variable is one of the most important design factors for better performance of the Kalman filter. For this reason, we proposed an improved Kalman filter to advance an ability of noise reduction of the Kalman filter. The measurement noise covariance was only considered because the system architecture is simple and can be adjusted by the neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments. PMID:22346641
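For reference, a minimal 1-D Kalman filter is sketched below to show where R enters the update; the neural-network adjustment of R described in the paper is not reproduced, and the random-walk state model and all constants are assumptions.

```python
# Minimal 1-D Kalman filter: R (measurement noise covariance) sets how much
# the filter trusts each new sensor reading relative to its prediction.
import numpy as np

def kalman_1d(z, q=1e-4, r=0.25):
    xhat = np.zeros_like(z)
    x, p = z[0], 1.0              # state estimate and its variance
    for k, zk in enumerate(z):
        p = p + q                 # predict: random-walk process noise Q
        K = p / (p + r)           # gain: small R -> trust the sensor more
        x = x + K * (zk - x)      # update with the innovation
        p = (1.0 - K) * p
        xhat[k] = x
    return xhat

t = np.linspace(0.0, 10.0, 500)
true = 25.0 + 2.0 * np.sin(0.5 * t)               # slowly varying temperature
meas = true + np.random.normal(0.0, 0.5, t.size)  # noisy sensor readings
smooth = kalman_1d(meas, r=0.5**2)                # R set to the noise variance
```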
Advanced Filter Technology For Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Castillon, Erick
2015-01-01
The Scrubber System focuses on using HEPA filters and carbon filtration to purify the exhaust of a Nuclear Thermal Propulsion engine of its aerosols and radioactive particles; however, new technology may lend itself to alternate filtration options, which may reduce cost while providing the same, if not greater, filtering capability as its predecessors. Extensive research on various types of filtration methods was conducted, with only four showing real promise: ionization, cyclonic separation, classic filtration, and host molecules. With the four methods defined, more research was needed to find the devices suitable for each method. Each filtration option was matched with a device: cyclonic separators for the method of the same name, electrostatic separators for ionization, HEGA filters for classic filtration, and carcerands for the host molecule method. Through many hours of research, the best alternative for aerosol filtration was determined to be the electrostatic precipitator, because of its high durability against flow rate and its ability to remove up to 99.99% of contaminants as small as 0.001 micron. Carcerands, the only alternative for filtering radioactive particles, were found to be commercially unavailable because they remain a work in progress at research institutions. Nevertheless, the research concluded that HEPA filters are the best option for filtering aerosols and carbon filtration the best for filtering radioactive particles.
Pinson, Paul A.
1998-01-01
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M.L.; Taylor, R.T.
1999-03-30
An improved method is disclosed for in situ microbial filter bioremediation with increased operational longevity of a microbial filter emplaced in an aquifer. A method is presented for generating a microbial filter of sufficient catalytic density and thickness that has an increased replenishment interval, improved bacterial attachment and detachment characteristics, and endogenous stability under in situ conditions. A system is also disclosed for in situ field water remediation. 31 figs.
40 CFR 60.386 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... The sample volume for each run shall be at least 1.70 dscm (60 dscf). The sampling probe and filter... probe and filter temperature slightly above the effluent temperature (up to a maximum filter temperature of 121 °C (250 °F)) in order to prevent water condensation on the filter. (2) Method 9 and the...
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests analyse the ability of each method to capture the filtered chemical flame structure and the front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
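As an illustration of the first of the three methods, a minimal Van Cittert iteration on a 1-D filtered profile is sketched below, assuming a Gaussian stands in for the LES filter; the relaxation factor and iteration count are assumptions.

```python
# Van Cittert iterative deconvolution: phi_{k+1} = phi_k + beta*(g_bar - G*phi_k),
# the building block of the approximate deconvolution method discussed above.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert(g_bar, sigma, beta=1.0, n_iter=5):
    phi = g_bar.copy()                    # initial guess: the filtered field
    for _ in range(n_iter):
        phi = phi + beta * (g_bar - gaussian_filter1d(phi, sigma))
    return phi

x = np.linspace(-1.0, 1.0, 400)
c = 0.5 * (1.0 + np.tanh(40.0 * x))       # sharp flame-like progress variable
c_bar = gaussian_filter1d(c, sigma=12)    # "LES-filtered" profile
c_rec = van_cittert(c_bar, sigma=12)      # partially deblurred profile
```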
Frequency tracking and variable bandwidth for line noise filtering without a reference.
Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei
2011-01-01
This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.
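A stripped-down version of the core idea is sketched below: two quadrature components of a digitally generated sinusoid are adapted by LMS so that their weighted sum tracks the line noise, and the cancellation error is the cleaned signal. The frequency tracking and automatic learning-rate control described in the paper are omitted, and all constants are assumptions.

```python
# Adaptive line-noise canceller with an internally generated reference:
# LMS adapts the weights of a sin/cos pair at the nominal mains frequency.
import numpy as np

fs, f0, mu = 1000.0, 60.0, 0.01
n = np.arange(5000)
signal = 0.2 * np.random.randn(n.size)                    # stand-in neural signal
noisy = signal + np.sin(2 * np.pi * f0 * n / fs + 0.7)    # add mains interference

w = np.zeros(2)
clean = np.zeros_like(noisy)
for k in range(n.size):
    ref = np.array([np.sin(2 * np.pi * f0 * k / fs),
                    np.cos(2 * np.pi * f0 * k / fs)])     # quadrature reference
    y = w @ ref                    # current estimate of the line noise
    e = noisy[k] - y               # error = cleaned sample
    w += 2 * mu * e * ref          # LMS weight update
    clean[k] = e
```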
Liu, Wanli; Bian, Zhengfu; Liu, Zhenguo; Zhang, Qiuzhao
2015-01-01
Differential interferometric synthetic aperture radar has been shown to be effective for monitoring subsidence in coal mining areas. Phase unwrapping can have a dramatic influence on the monitoring result. In this paper, a filtering-based phase unwrapping algorithm in combination with path-following is introduced to unwrap differential interferograms with high noise in mining areas. It can perform simultaneous noise filtering and phase unwrapping, so that the pre-filtering steps can be omitted, usually retaining more detail and improving the detectable deformation. In the method, the nonlinear measurement model of phase unwrapping is processed using a simplified Cubature Kalman filter, an effective and efficient tool used in many nonlinear fields. Three case studies are designed to evaluate the performance of the method. In Case 1, two tests are designed to evaluate the performance of the method under different factors, including the number of multi-looks and the path-guiding index. The results demonstrate that the unwrapped result is sensitive to the number of multi-looks and that the Fisher Distance is the most suitable path-guiding index for our study. Two further case studies are then designed to evaluate the feasibility of the proposed phase unwrapping method based on Cubature Kalman filtering. The results indicate that, compared with the popular Minimum Cost Flow method, Cubature Kalman filtering-based phase unwrapping can achieve promising results without pre-filtering and is an appropriate method for coal mining areas with high noise. PMID:26153776
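As a scalar illustration of filtering-based unwrapping, the sketch below runs a random-walk Kalman filter whose innovation is computed on the wrapped phase difference, so smoothing and unwrapping happen jointly; this 1-D toy only hints at the paper's 2-D, path-following Cubature Kalman formulation, and all constants are assumptions.

```python
# Joint filtering and unwrapping in 1-D: track the absolute phase with a
# Kalman filter, but wrap the innovation so 2*pi jumps are absorbed.
import numpy as np

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

true = np.cumsum(np.full(600, 0.05)) + 2 * np.sin(np.linspace(0, 6, 600))
z = wrap(true + 0.3 * np.random.randn(600))      # noisy wrapped phase

q, r = 0.01, 0.3**2                              # process / measurement noise
x, p = z[0], 1.0
unwrapped = np.zeros_like(z)
for k, zk in enumerate(z):
    p += q                                       # predict (random walk)
    K = p / (p + r)
    x += K * wrap(zk - x)                        # wrapped innovation
    p *= (1.0 - K)
    unwrapped[k] = x
```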
NASA Astrophysics Data System (ADS)
Bindiya T., S.; Elias, Elizabeth
2015-01-01
In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition widths are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the filter order becomes very high, leading to increased complexity. The frequency response masking (FRM) method is known to yield linear-phase sharp transition width filters with low complexity. The proposed design method, which is based on FRM, is found to give better results than earlier reported designs in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite-precision coefficients in the signed-power-of-two space. This may degrade performance and calls for a suitable optimisation technique. In this paper, the gravitational search algorithm is proposed for the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase, and multiplier-less, and which have sharp transition widths.
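The FRM construction itself can be sketched in a few lines: a low-order linear-phase model filter is upsampled by M (which sharpens its transition band), paired with its delay-complement, and each branch is masked by a low-order filter. All orders, band edges, and the upsampling factor below are illustrative assumptions, not the paper's design values.

```python
# Frequency-response masking sketch: H(z) = Ha(z^M)Hma(z) + Hc(z^M)Hmc(z),
# where Hc is the delay-complement of the upsampled model filter Ha.
import numpy as np
from scipy.signal import firwin

M = 4
ha = firwin(41, 0.5)                                  # linear-phase model filter
ha_up = np.zeros(M * (len(ha) - 1) + 1)
ha_up[::M] = ha                                       # Ha(z^M): zero-stuffed

delay = np.zeros_like(ha_up)
delay[(len(ha_up) - 1) // 2] = 1.0
hc_up = delay - ha_up                                 # complementary branch

hma = firwin(29, 0.18)                                # masking filter, branch A
hmc = firwin(29, 0.12)                                # masking filter, branch C
h = np.convolve(ha_up, hma) + np.convolve(hc_up, hmc) # sharp overall filter
```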
Major, Kevin J; Poutous, Menelaos K; Ewing, Kenneth J; Dunnill, Kevin F; Sanghera, Jasbinder S; Aggarwal, Ishwar D
2015-09-01
Optical filter-based chemical sensing techniques provide a new avenue for developing low-cost infrared sensors. These methods utilize multiple infrared optical filters to selectively measure different response functions for various chemicals, dependent on each chemical's infrared absorption. Rather than identifying distinct spectral features, which can then be used to determine the identity of a target chemical, optical filter-based approaches rely on measuring differences in the ensemble response between a given filter set and specific chemicals of interest. The results of such methods therefore depend strongly on the original choice of optical filters, which dictates the selectivity, sensitivity, and stability of any filter-based sensing method. Recently, a method has been developed that utilizes unique detection-vector operations defined by optical multifilter responses to discriminate between volatile chemical vapors. This method, comparative-discrimination spectral detection (CDSD), employs broadband optical filters to selectively discriminate between chemicals with highly overlapping infrared absorption spectra. CDSD has been shown to correctly distinguish between similar chemicals in the carbon-hydrogen stretch region of the infrared absorption spectrum from 2800-3100 cm(-1). A key challenge to this approach is determining which optical filter sets achieve the greatest discrimination between target chemicals. Previous studies used empirical approaches to select the optical filter set; however, this is insufficient to determine the optimum selectivity between strongly overlapping chemical spectra. Here we present a numerical approach to systematically study the effects of filter positioning and bandwidth on a number of three-chemical systems. We describe how both the filter properties and the chemicals in each set affect the CDSD results and the subsequent discrimination. These results demonstrate the importance of choosing the proper filter set and chemicals for comparative discrimination, in order to identify the target chemical of interest in the presence of closely matched chemical interferents. These findings are an integral step in the development of experimental prototype sensors that will utilize CDSD.
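A toy version of such a numerical study is sketched below: each (synthetic) absorption spectrum is reduced to a vector of broadband-filter responses, and a candidate filter set is scored by the angle between those vectors; sweeping the filter centers and widths to maximize that angle mimics the systematic optimization described above. The spectra and filter parameters are assumptions, not CDSD's actual data.

```python
# Score one candidate broadband-filter set by the separation angle between
# the detection vectors of two strongly overlapping synthetic absorbers.
import numpy as np

nu = np.linspace(2800.0, 3100.0, 1000)        # wavenumber grid (cm^-1)
dnu = nu[1] - nu[0]

def gauss(center, width):
    return np.exp(-0.5 * ((nu - center) / width) ** 2)

spectra = {"chem_A": gauss(2890, 25) + 0.4 * gauss(2960, 20),
           "chem_B": gauss(2905, 25) + 0.6 * gauss(2960, 20)}
filters = [gauss(c, 40) for c in (2860, 2920, 2980)]   # candidate filter set

vecs = {name: np.array([np.sum(f * s) * dnu for f in filters])
        for name, s in spectra.items()}                # filter-response vectors
a, b = (v / np.linalg.norm(v) for v in vecs.values())
angle = np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
print(f"separation angle between detection vectors: {angle:.2f} deg")
```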