NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to those produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity-field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative features: the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion-based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
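The RL update at the heart of such a scheme can be sketched for a generic Poisson measurement model (a minimal sketch: a toy system matrix stands in for the real limb-viewing geometry, and the pseudo-diffusion regularization between iterations is omitted):

```python
import numpy as np

def richardson_lucy(A, y, n_iter=500):
    """Richardson-Lucy / MLEM iteration for y ~ Poisson(A x).

    A : (m, n) nonnegative system matrix (placeholder for the real
        viewing geometry), y : (m,) measured counts.
    The multiplicative update preserves non-negativity automatically.
    """
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), 1e-12)   # A^T 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens
    return x
```

With noiseless, consistent data the iteration recovers the underlying emission exactly; with real Poisson counts it converges toward the maximum-likelihood estimate, which is why interleaved regularization (as in the abstract) is needed in practice.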
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the High Resolution Research Tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions).
Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
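A schematic reading of such a nested scheme, with toy matrices standing in for the scanner geometry `A` and the resolution blur `H` (this is a sketch of the idea, not the evaluated implementation):

```python
import numpy as np

def nested_em(A, H, y, n_outer=200, n_inner=3):
    """Nested resolution-model EM (schematic sketch).

    Forward model: y ~ Poisson(A H x), with H an image-space blur.
    Each outer pass makes one tomographic EM update for the blurred
    image nu, then n_inner cheap image-space EM (deconvolution)
    updates solving nu = H x.
    """
    n = A.shape[1]
    x = np.ones(n)
    nu = H @ x
    sens_A = np.maximum(A.sum(axis=0), 1e-12)
    sens_H = np.maximum(H.sum(axis=0), 1e-12)
    for _ in range(n_outer):
        proj = A @ nu                         # tomographic EM step
        ratio = np.where(proj > 0, y / proj, 0.0)
        nu *= (A.T @ ratio) / sens_A
        for _ in range(n_inner):              # image-space EM steps
            b = H @ x
            r = np.where(b > 0, nu / b, 0.0)
            x *= (H.T @ r) / sens_H
        nu = H @ x
    return x
```

The split matters because the inner updates involve only image-space convolutions, which are far cheaper than the (back)projections in the outer step.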
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography yield reconstructions with highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography - the Total Variation Iterative Constraint method (TVIC) - which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while at the same time preserving the smooth internal structures of the object. Comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.
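The object-domain constraint step can be illustrated with a SIRT iteration followed by a binary support mask (a minimal sketch; in TVIC the mask would come from thresholding a strongly TV-regularized reconstruction, which is not reproduced here):

```python
import numpy as np

def sirt_masked(A, y, mask, n_iter=200):
    """SIRT with a binary object-domain constraint applied each iteration.

    mask : (n,) array of 0/1 values; in a TVIC-style scheme this would
    be derived from a TV-regularized, sharp-edged reconstruction.
    """
    x = np.zeros(A.shape[1])
    row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    for _ in range(n_iter):
        x += col * (A.T @ (row * (y - A @ x)))     # SIRT update
        x *= mask                                  # object-domain constraint
        np.maximum(x, 0.0, out=x)                  # non-negativity
    return x
```

The mask only restricts the support of the solution, so smooth internal variations survive, which is the point the abstract makes about non-piecewise-constant samples.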
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, thereby making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron-light-illuminated data.
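The incremental idea can be illustrated with ordered-subsets EM, which applies an EM-style update using one block of rays at a time, so one pass over the data yields several updates (illustration only; the paper couples incrementality with fast O(N^2 log N) operators, which this small dense sketch does not):

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=300):
    """Ordered-subsets EM: each sub-iteration uses one block of rays,
    so a full pass makes n_subsets updates instead of one."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = As @ x
            ratio = np.where(proj > 0, ys / proj, 0.0)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

Subsets must be chosen so each block still "sees" every pixel; with consistent data the iterates approach the solution roughly n_subsets times faster per pass than plain EM.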
NASA Astrophysics Data System (ADS)
Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.
2013-11-01
Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
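The POCA step itself is plain geometry: treat the incoming and outgoing muon tracks as two 3D lines and take the midpoint of their closest-approach segment (a generic sketch; track fitting and detector details are out of scope):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between lines p1 + t*d1 and p2 + s*d2.

    Returns the midpoint of the closest-approach segment, or None for
    (near-)parallel tracks, for which no unique POCA exists.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:           # parallel tracks: undeflected muon
        return None
    t = (b * e - c * d) / denom      # parameter on line 1
    s = (a * e - b * d) / denom      # parameter on line 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```

Accumulating these points over many muons gives the 3D point cloud on which the clustering and likelihood methods in the abstract operate.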
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.
2017-06-01
Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber-optic elements. The technique is based on the acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. The captured data are then processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of the refractive index distribution. In our recent works we addressed the issue of spatially variant accuracy of HT reconstructions, which results from the insufficient model of diffraction applied in the widely used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we now focus on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prevents acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the off-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: the configuration with 1) object rotation, 2) scanning of the illumination direction, and 3) the hybrid HT solution combining both previous approaches.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
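The MART family compared above shares one multiplicative, ray-by-ray update rule; a minimal unoptimized version for a generic nonnegative system is:

```python
import numpy as np

def mart(A, y, n_iter=500, relax=1.0):
    """Multiplicative ART: for each ray i, scale each voxel j by
    (y_i / (A x)_i) ** (relax * a_ij). Requires A, y, x >= 0."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and y[i] > 0:
                x *= (y[i] / proj) ** (relax * A[i])
    return x
```

BIMART and SMART differ mainly in how rays are grouped and how the correction exponent is applied, not in this core multiplicative form.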
Optical tomographic memories: algorithms for the efficient information readout
NASA Astrophysics Data System (ADS)
Pantelic, Dejan V.
1990-07-01
Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the positions of the bits of information is used.

1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES
Tomographic principles can be used to store and reconstruct information artificially stored in a bulk of photosensitive media. The information is stored by changing some characteristic of the memory material (e.g. refractive index). Radiation from two independent light sources (e.g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem: here, a great deal of a priori information is available regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary).

2. ALGORITHMS FOR THE READOUT OF TOMOGRAPHIC MEMORIES
A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for information readout, together with simulation results, are presented. Special attention is given to noise considerations.
Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard; Neal, Douglas
2011-11-01
A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and tracks the individual particles in space and time. For flows with higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as basis functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, this new method is validated using experimental data on a turbulent jet.
Regridding reconstruction algorithm for real-time tomographic imaging
Marone, F.; Stampanoni, M.
2012-01-01
Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing is, however, difficult when the data rates are close to 10 GB s⁻¹. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio for gridrec reconstructions on the number of angular views makes the presented algorithm even superior to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but also provides up to a 20-fold performance increase, making real-time monitoring of the sub-second acquisition process a reality. PMID:23093766
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data, which determine the probability distribution of charged particle scattering using a statistical multiple-scattering model and determine a substantially maximum-likelihood estimate of object volume scattering density using a maximum-likelihood/expectation-maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic-ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) have proven to be highly accurate, they are computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
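The frequency-domain shortcut can be sketched with a Wiener-style deconvolution via FFTs (2D here for brevity, with a made-up PSF; the paper's 3D focal stacks and measured PSFs are assumptions outside this sketch):

```python
import numpy as np

def fft_deconvolve(blurred, psf, eps=1e-6):
    """Frequency-domain (Wiener-style) deconvolution via FFTs.

    eps regularizes frequencies where the PSF response is small,
    trading a little bias for stability against noise.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

The whole operation costs a few FFTs, which is the efficiency argument made against voxel-space methods like MART.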
Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael
2015-01-01
Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626
Tomographic phase microscopy: principles and applications in bioimaging [Invited
Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.
2017-01-01
Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746
Image reconstruction of muon tomographic data using a density-based clustering method
NASA Astrophysics Data System (ADS)
Perry, Kimberly B.
Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
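As a compact illustration of the density-based idea (the thesis uses OPTICS, which additionally orders points by reachability distance rather than fixing a single radius), a simplified DBSCAN-style grouping of scattering points might look like:

```python
import numpy as np

def density_cluster(points, eps, min_pts):
    """Simplified DBSCAN-style density clustering (a compact stand-in
    for OPTICS). Returns an integer label per point; -1 marks noise.
    Neighbor counts include the point itself.
    """
    n = len(points)
    labels = np.full(n, -1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    core = np.array([len(nb) >= min_pts for nb in neighbors])
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cid
        stack = [i]
        while stack:                     # grow the cluster from core points
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cid
                    if core[k]:
                        stack.append(k)
        cid += 1
    return labels
```

Applied to POCA scattering points, dense clusters flag candidate high-Z regions while sparse background scatter is labeled as noise.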
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge-voxel treatment can introduce significant error, and that the real irregular edge-voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge-voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices.
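The ART variant mentioned, with its non-negativity constraint, is a Kaczmarz sweep with a clamp after each ray update (generic sketch; the TGS system matrix with irregular edge-voxel weights is precisely the part omitted here):

```python
import numpy as np

def art_nonneg(A, y, n_iter=500, relax=1.0):
    """ART (Kaczmarz) with a non-negativity clamp after each ray update."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ai = A[i]
            nrm = ai @ ai
            if nrm > 0:
                x += relax * (y[i] - ai @ x) / nrm * ai  # project onto ray i
                np.maximum(x, 0.0, out=x)                # enforce x >= 0
    return x
```

Edge-voxel treatment enters only through the entries of A (the chord length of each ray through each voxel), which is why the two treatments in the abstract can be compared with the same solvers.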
3D tomographic imaging with the γ-eye planar scintigraphic gamma camera
NASA Astrophysics Data System (ADS)
Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.
2017-11-01
γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
Sitek, Arkadiusz
2016-12-21
The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).
Synthetic Incoherence via Scanned Gaussian Beams
Levine, Zachary H.
2006-01-01
Tomography, in most formulations, requires an incoherent signal. For a conventional transmission electron microscope, the coherence of the beam often results in diffraction effects that limit the ability to perform a 3D reconstruction from a tilt series with conventional tomographic reconstruction algorithms. In this paper, an analytic solution is given to a scanned Gaussian beam, which reduces the beam coherence to be effectively incoherent for medium-size (of order 100 voxels thick) tomographic applications. The scanned Gaussian beam leads to more incoherence than hollow-cone illumination. PMID:27274945
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raymund, T.D.
Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low-orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also includes ways to incorporate a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.
Park, D Y; Fessler, J A; Yost, M G; Levine, S P
2000-03-01
Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam-path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path-integral concentrations) in the experimental setting. A new CT algorithm, called penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path-integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path-integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, constitute a system applicable to both environmental and industrial settings.
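The PWLS objective is easy to state and, for a small dense problem, to minimize in closed form (a sketch with a generic first-difference roughness penalty; the paper's actual weighting scheme and solver are not reproduced):

```python
import numpy as np

def pwls(A, y, weights, beta):
    """Penalized weighted least squares:
        minimize (y - A x)^T W (y - A x) + beta * ||D x||^2
    with W = diag(weights) and D a first-difference roughness penalty.
    For small dense problems the minimizer has a closed form.
    """
    n = A.shape[1]
    W = np.diag(weights)
    D = (np.eye(n) - np.eye(n, k=1))[:-1]   # first-difference operator
    H = A.T @ W @ A + beta * (D.T @ D)
    return np.linalg.solve(H, A.T @ W @ y)
```

The penalty term is what removes the "highly underdetermined condition": even with fewer measurements than pixels, the regularized normal matrix H stays invertible and the solution is smooth.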
Interferometric tomography of fuel cells for monitoring membrane water content.
Waller, Laura; Kim, Jungik; Shao-Horn, Yang; Barbastathis, George
2009-08-17
We have developed a system that uses two 1D interferometric phase projections for reconstruction of 2D water content changes over time in situ in a proton exchange membrane (PEM) fuel cell system. By modifying the filtered backprojection tomographic algorithm, we are able to incorporate a priori information about the object distribution into a fast reconstruction algorithm which is suitable for real-time monitoring.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The approach is an interval-based extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
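NIBEM extends the classic ML-EM iteration by replacing single-valued projections with intervals. The baseline ML-EM update it builds on can be sketched as follows; the small system matrix here is a hypothetical example, not a PET geometry.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Classic ML-EM for emission tomography:
    x <- x / (A' 1) * A' (y / (A x)).
    NIBEM replaces the scalar projections A x with intervals; this is
    the single-valued baseline that it extends."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A'1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / estimated projections
        x = x / sens * (A.T @ ratio)
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true          # noise-free projection data
x_hat = mlem(A, y)
```

The multiplicative form keeps the activity estimate nonnegative at every iteration, one of the properties that makes EM-type updates attractive in emission tomography.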
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...
2017-09-05
Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
Tomographic diffractive microscopy with a wavefront sensor.
Ruan, Y; Bon, P; Mudry, E; Maire, G; Chaumet, P C; Giovannini, H; Belkebir, K; Talneau, A; Wattellier, B; Monneret, S; Sentenac, A
2012-05-15
Tomographic diffractive microscopy is a recent imaging technique that reconstructs quantitatively the three-dimensional permittivity map of a sample with a resolution better than that of conventional wide-field microscopy. Its main drawbacks lie in the complexity of the setup and in the slowness of the image recording as both the amplitude and the phase of the field scattered by the sample need to be measured for hundreds of successive illumination angles. In this Letter, we show that, using a wavefront sensor, tomographic diffractive microscopy can be implemented easily on a conventional microscope. Moreover, the number of illuminations can be dramatically decreased if a constrained reconstruction algorithm is used to recover the sample map of permittivity.
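For reference, the filtered backprojection algorithm that such constrained methods modify consists of ramp-filtering each projection and smearing it back across the image grid. The sketch below is a minimal parallel-beam version with nearest-neighbour detector lookup, not the Letter's constrained reconstruction; the impulse sinogram is a contrived test object.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the |omega| ramp filter of filtered backprojection to each
    projection (one row per angle)."""
    ramp = np.abs(np.fft.fftfreq(sinogram.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered, angles, size):
    """Smear every filtered projection back across the image grid
    (nearest-neighbour detector binning)."""
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - c
    img = np.zeros((size, size))
    for proj, th in zip(filtered, angles):
        t = xs * np.cos(th) + ys * np.sin(th)            # detector coordinate
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        img += proj[idx]
    return img * np.pi / len(angles)

# A point-like object at the centre projects to the same delta at every angle.
size = 33
angles = np.linspace(0, np.pi, 64, endpoint=False)
sino = np.zeros((64, size))
sino[:, size // 2] = 1.0
recon = backproject(ramp_filter(sino), angles, size)
```

The ramp filter is what undoes the 1/r blurring of plain backprojection; without it the reconstruction of the impulse would be a broad smear instead of a sharp peak.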
NASA Technical Reports Server (NTRS)
Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.
1984-01-01
The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.
Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware
NASA Astrophysics Data System (ADS)
Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc
2007-02-01
Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed, we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE, which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
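The ordered-subsets idea behind OSC can be illustrated independently of the CBE hardware: each sub-iteration updates the image from only one block of projections, cycling through the blocks, which typically accelerates convergence roughly in proportion to the subset count. The sketch applies the idea to a simple additive (SIRT-like) update rather than the convex transmission update used in the paper, and the tiny system matrix is a made-up stand-in.

```python
import numpy as np

def os_sirt(A, y, n_subsets=2, n_sweeps=200, relax=0.1):
    """Ordered-subsets acceleration of an additive iterative update:
    each sub-iteration corrects the image using only one block of
    projection rows, cycling through all blocks per sweep."""
    x = np.zeros(A.shape[1])
    blocks = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_sweeps):
        for idx in blocks:
            As, ys = A[idx], y[idx]
            x = x + relax * As.T @ (ys - As @ x)   # block gradient step
    return x

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_hat = os_sirt(A, y)
```

Because consistent data make the true image a fixed point of every block update, the iteration converges to it here; with noisy data the subsets would disagree and a relaxation schedule would be needed.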
NASA Astrophysics Data System (ADS)
Cha, Don J.; Cha, Soyoung S.
1995-09-01
A computational tomographic technique, termed the variable grid method (VGM), has been developed for improving interferometric reconstruction of flow fields under ill-posed data conditions of restricted scanning and incomplete projection. The technique is based on natural pixel decomposition, that is, division of a field into variable grid elements. The performances of two algorithms, the original and revised versions, are compared to investigate the effects of the data redundancy criteria and seed-element forming schemes. Tests of the VGMs are conducted through computer simulation of experiments and reconstruction of fields with a limited view angle of 90°. The temperature fields at two horizontal sections of a thermal plume of two interacting isothermal cubes, produced by a finite numerical code, are analyzed as test fields. The computer simulation demonstrates the superiority of the revised VGM to either the conventional fixed grid method or the original VGM. Both the maximum and average reconstruction errors are reduced appreciably. The reconstruction shows substantial improvement in the regions with dense scanning by probing rays. These regions are usually of interest in engineering applications.
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-07-01
X-ray micro- and nanotomography has evolved into a quantitative analysis tool, rather than a mere qualitative visualization technique, for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, however, the noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but rather correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise-constant signals.
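The core of the framework, iterating a weak nonlocal-means filter several times, can be sketched in 1D. The patch size, search window, and filtering strength below are arbitrary demonstration values, and the paper's noise-level estimation step is omitted.

```python
import numpy as np

def nlm_1d(signal, patch=3, search=10, h=0.5):
    """One weak nonlocal-means pass: each sample becomes a weighted
    average of nearby samples whose surrounding patches look similar."""
    n = len(signal)
    pad = np.pad(signal, patch, mode='edge')
    out = np.empty(n)
    for i in range(n):
        pi = pad[i:i + 2 * patch + 1]                    # patch around sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w = np.array([np.exp(-np.sum((pi - pad[j:j + 2 * patch + 1]) ** 2) / h ** 2)
                      for j in range(lo, hi)])
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out

def iterative_nlm(signal, n_passes=3, **kwargs):
    """Split denoising into several weak subtasks, applied in sequence."""
    for _ in range(n_passes):
        signal = nlm_1d(signal, **kwargs)
    return signal

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 40)       # untextured, piecewise-constant
noisy = clean + 0.15 * rng.standard_normal(clean.size)
denoised = iterative_nlm(noisy)
```

Patches that straddle an edge are dissimilar to flat patches and receive near-zero weight, which is why nonlocal means smooths the flat regions while leaving the step mostly intact.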
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
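The replicated-reconstruction-object idea can be mimicked at thread level: give each worker a private copy of the reconstruction array so no locking is needed during accumulation, then reduce the copies by summation. This simple backprojection sketch is our illustration of the pattern, not Trace's actual implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_block(A_block, y_block, n_vox):
    """Accumulate one block of rays into a private replica of the volume."""
    local = np.zeros(n_vox)
    for a, yi in zip(A_block, y_block):
        local += a * yi
    return local

def parallel_backprojection(A, y, n_workers=4):
    """Replicated-reconstruction-object sketch: workers write to private
    replicas in parallel, which are then reduced by summation."""
    blocks = np.array_split(np.arange(A.shape[0]), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        replicas = pool.map(
            lambda idx: backproject_block(A[idx], y[idx], A.shape[1]), blocks)
    return sum(replicas)

rng = np.random.default_rng(1)
A = rng.random((64, 16))        # toy system matrix: 64 rays, 16 voxels
y = rng.random(64)
vol = parallel_backprojection(A, y)
```

Trading memory (one replica per worker) for lock-free writes is the same design choice the paper makes at both thread and process level.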
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2D and 3D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
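A minimal example of an algebraic method with a semi-convergence-aware stopping rule is cyclic Kaczmarz (ART) halted by Morozov's discrepancy principle. The matrix below is a toy stand-in for the camera projection geometry, and the noise-free data make the stopping test trivial; with noisy data the same rule would stop before the iterates drift toward the noise.

```python
import numpy as np

def kaczmarz(A, y, noise_level, max_sweeps=500):
    """Cyclic Kaczmarz (row-action ART).  With noisy data the iterates first
    approach the true object and then diverge toward the noisy solution
    ("semi-convergence"), so we stop once the residual norm falls to the
    estimated noise level (Morozov's discrepancy principle)."""
    x = np.zeros(A.shape[1])
    for sweep in range(1, max_sweeps + 1):
        for a, yi in zip(A, y):
            x = x + (yi - a @ x) / (a @ a) * a   # project onto hyperplane a.x = yi
        if np.linalg.norm(y - A @ x) <= noise_level:
            break
    return x, sweep

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])
x_true = np.array([1.0, -1.0, 2.0])
y = A @ x_true                        # consistent data: residual can reach ~0
x_hat, sweeps = kaczmarz(A, y, noise_level=1e-9)
```

Each Kaczmarz step is an orthogonal projection onto one ray's hyperplane, which is what makes the method cheap per ray and sensitive to the order and number of sweeps.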
Iterative reconstruction of volumetric particle distribution
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard
2013-02-01
For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes, computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory; these may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy as Tomo-PIV. Finally, the method is validated with experimental data.
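The MART update used in Tomo-PIV multiplies each voxel crossed by a ray by the ratio of measured to projected ray sums, raised to a relaxed power. A minimal sketch follows; the ray-weight matrix is a made-up toy, not a real camera calibration.

```python
import numpy as np

def mart(A, y, n_sweeps=200, relax=0.5):
    """Multiplicative ART: voxels crossed by a ray are corrected by the
    measured/projected ratio for that ray, raised to a relaxed power.
    Requires positive data; voxels with zero ray weight are untouched."""
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for a, yi in zip(A, y):
            p = a @ x                          # projected ray sum
            x = x * (yi / p) ** (relax * a)    # element-wise multiplicative update
    return x

A = np.array([[1.0, 1.0, 0.0],   # toy ray-weight matrix: 3 rays, 3 voxels
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_hat = mart(A, y)
```

Because updates are multiplicative, voxel intensities can never turn negative, which matches the physical constraint that reconstructed light intensity is nonnegative.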
DART, a platform for the creation and registration of cone beam digital tomosynthesis datasets.
Sarkar, Vikren; Shi, Chengyu; Papanikolaou, Niko
2011-04-01
Digital tomosynthesis is an imaging modality that allows for tomographic reconstructions using only a fraction of the images needed for CT reconstruction. Since it offers the advantages of tomographic images with a smaller imaging dose delivered to the patient, the technique offers much promise for use in patient positioning prior to radiation delivery. This paper describes a software environment developed to help in the creation of digital tomosynthesis image sets from digital portal images using three different reconstruction algorithms. The software then allows for use of the tomograms for patient positioning or for dose recalculation if shifts are not applied, possibly as part of an adaptive radiotherapy regimen.
Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics
NASA Astrophysics Data System (ADS)
Yu, Tao; Cai, Weiwei; Liu, Yingzheng
2018-04-01
Optical tomography has attracted a surge of research effort recently due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict new reconstructions. The extreme learning machine is used here as an example for demonstration purposes, due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
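An extreme learning machine is simple to sketch: the hidden layer is random and fixed, so only the output weights are trained, by a single linear least-squares solve, which is what makes training fast. The 1D regression below is a demonstrative stand-in for the projections-to-reconstruction mapping.

```python
import numpy as np

class ELM:
    """Extreme learning machine: random, fixed hidden layer; only the
    output weights are fitted, with one least-squares solve."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # never trained
        self.b = rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        # Moore-Penrose pseudoinverse gives the least-squares output weights.
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy surrogate: learn a smooth 1D map (stand-in for projections -> image).
X = np.linspace(-2, 2, 200).reshape(-1, 1)
Y = np.sin(2 * X)
model = ELM(n_in=1, n_hidden=50).fit(X, Y)
```

Since training reduces to one pseudoinverse, retraining the surrogate as new reconstructions arrive is cheap compared with backpropagation-based networks.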
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back-projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
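The MAP formulation can be illustrated with a drastically simplified forward model: a linear projection matrix, Gaussian noise, and a quadratic smoothness prior, minimized by plain gradient descent. The paper's TEM forward model and prior are far richer; this only shows the structure of the cost being minimized.

```python
import numpy as np

def map_reconstruct(A, y, beta=0.01, n_iter=500, step=0.05):
    """Simplified MAP estimate: minimize ||y - Ax||^2 + beta*||Dx||^2,
    i.e. Gaussian likelihood plus quadratic smoothness prior, by
    gradient descent on the (convex) MAP cost."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)           # finite-difference prior operator
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * beta * D.T @ (D @ x)
        x -= step * grad
    return x

A = np.array([[1.0, 1.0, 0.0],   # hypothetical 3-ray linear forward model
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_hat = map_reconstruct(A, y)
```

With a small prior weight the estimate stays close to the data-consistent solution; raising `beta` trades data fidelity for smoothness, which is exactly the regularization that suppresses the limited-data artifacts of filtered back-projection.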
NASA Astrophysics Data System (ADS)
Moeck, Jonas P.; Bourgouin, Jean-François; Durox, Daniel; Schuller, Thierry; Candel, Sébastien
2013-04-01
Swirl flows with vortex breakdown are widely used in industrial combustion systems for flame stabilization. This type of flow is known to sustain a hydrodynamic instability with a rotating helical structure, one common manifestation of it being the precessing vortex core. The role of this unsteady flow mode in combustion is not well understood, and its interaction with combustion instabilities and flame stabilization remains unclear. It is therefore important to assess the structure of the perturbation in the flame that is induced by this helical mode. Based on principles of tomographic reconstruction, a method is presented to determine the 3-D distribution of the heat release rate perturbation associated with the helical mode. Since this flow instability is rotating, a phase-resolved sequence of projection images of light emitted from the flame is identical to the Radon transform of the light intensity distribution in the combustor volume and thus can be used for tomographic reconstruction. This is achieved with one stationary camera only, a vast reduction in experimental and hardware requirements compared to a multi-camera setup or camera repositioning, which is typically required for tomographic reconstruction. Different approaches to extract the coherent part of the oscillation from the images are discussed. Two novel tomographic reconstruction algorithms specifically tailored to the structure of the heat release rate perturbations related to the helical mode are derived. The reconstruction techniques are first applied to an artificial field to illustrate the accuracy. High-speed imaging data acquired in a turbulent swirl-stabilized combustor setup with strong helical mode oscillations are then used to reconstruct the 3-D structure of the associated perturbation in the flame.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles, and volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language, and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
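The additive-combination idea can be sketched on a raster grid. TomoPhantom itself evaluates such models analytically rather than on pixels, and the object parameters below are arbitrary demonstration values.

```python
import numpy as np

def gaussian_blob(size, cx, cy, sx, sy, amp):
    """2D Gaussian object on a unit-square raster grid."""
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1)
    return amp * np.exp(-((xs - cx) ** 2 / (2 * sx ** 2)
                          + (ys - cy) ** 2 / (2 * sy ** 2)))

def rectangle(size, x0, x1, y0, y1, amp):
    """Axis-aligned rectangle object."""
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1)
    return amp * ((xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1))

# A phantom is an additive combination of modular geometrical objects.
phantom = (gaussian_blob(64, 0.3, 0.3, 0.10, 0.15, 1.0)
           + rectangle(64, 0.55, 0.85, 0.50, 0.90, 0.5))
```

Because the model is a sum of analytic objects, its Radon transform is also a sum of closed-form terms, which is what lets a package like TomoPhantom supply projections without ever rasterizing, avoiding the "inverse crime".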
NASA Astrophysics Data System (ADS)
Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.
2013-09-01
In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploits the sparsity of the image gradient to achieve substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using several figures of merit, including the universal quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.
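The TV-minimization principle can be shown in 1D: penalizing the sum of absolute differences favours piecewise-constant (gradient-sparse) solutions. The sketch below denoises a signal by gradient descent on a smoothed TV objective; it is a stand-in for the full tomographic TV reconstruction, and the penalty weight and smoothing constant are arbitrary.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, n_iter=500, step=0.05, eps=1e-2):
    """Gradient descent on a smoothed total-variation objective,
    0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    whose minimizers favour piecewise-constant signals."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        t = d / np.sqrt(d ** 2 + eps)     # derivative of the smoothed |.| terms
        g = x - y                          # data-fidelity gradient
        g[:-1] -= lam * t                  # TV gradient, left neighbours
        g[1:] += lam * t                   # TV gradient, right neighbours
        x = x - step * g
    return x

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0], 30)          # one sharp edge
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
```

Unlike a quadratic penalty, the TV term stops growing quickly across a large jump, so the edge survives while small noisy fluctuations are flattened, the same sparsity mechanism that enables few-view tomosynthesis reconstruction.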
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
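The expectation-maximization structure of such reconstructions can be shown with a generic emission-tomography MLEM update on a toy linear system. This is only the standard Poisson-counting MLEM; the muon case in the paper replaces that statistical model with a multiple-Coulomb-scattering model, which this sketch does not attempt.

```python
import numpy as np

def mlem(A, y, iters=100):
    # Classic MLEM multiplicative update for y ~ Poisson(A @ x):
    #   x <- x * (A^T (y / (A x))) / (A^T 1)
    x = np.ones(A.shape[1])              # strictly positive start
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 3-ray / 3-voxel system with noiseless data.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y, iters=2000)
```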
NASA Astrophysics Data System (ADS)
Guan, Huifeng; Anastasio, Mark A.
2017-03-01
It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
Broadband Tomography System: Direct Time-Space Reconstruction Algorithm
NASA Astrophysics Data System (ADS)
Biagi, E.; Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina
1989-10-01
In this paper a new ultrasound tomographic imaging algorithm is presented. A complete laboratory system was built to test the algorithm under experimental conditions. The proposed system is based on a physical model consisting of a two-dimensional distribution of single scattering elements. Multiple scattering is neglected, so the Born approximation is assumed. This tomographic technique requires only two orthogonal scanning sections. For each rotational position of the object, data are collected by means of the complete data set method in transmission mode. After numeric envelope detection, the received signals are back-projected into the space domain through a scalar function. The reconstruction of each scattering element is accomplished by correlating the ultrasound time of flight and attenuation with the locus of possible positions of the scattering element. This locus is an ellipse with its foci located at the transmitter and receiver positions. In the image matrix, the ellipses' contributions sum coherently at the position of the scattering element. Computer simulations of cylindrical objects have demonstrated the performance of the reconstruction algorithm. Preliminary experimental results show the laboratory system features. On the basis of these results, an experimental procedure to test the confidence and repeatability of ultrasonic measurements on the human carotid vessel is proposed.
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as the Hanning and Butterworth filters commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
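A Wiener-like sinogram filter of this general kind can be sketched as follows. The spectral model used here (a power-law signal spectrum and a flat noise level) is a stand-in assumption, not the spectrum the paper derives from its source-image prior.

```python
import numpy as np

def wiener_filter_sinogram(sino, noise_power, beta=2.0):
    # Attenuate each detector-bin frequency by S / (S + N), where S is an
    # assumed signal energy spectrum and N a flat noise power (illustrative).
    n = sino.shape[1]
    f = np.fft.rfftfreq(n)
    S = 1.0 / (1e-3 + f**beta)           # assumed power-law signal spectrum
    H = S / (S + noise_power)            # Wiener-like gain, in (0, 1]
    return np.fft.irfft(np.fft.rfft(sino, axis=1) * H, n=n, axis=1)

rng = np.random.default_rng(1)
# Smooth synthetic sinogram (same Gaussian profile at 60 angles) plus noise.
clean = np.tile(np.exp(-np.linspace(-3, 3, 128)**2), (60, 1))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
filtered = wiener_filter_sinogram(noisy, noise_power=5.0)
```

In the paper's pipeline the filtered sinogram would then be passed to FBP with a plain ramp filter; the sketch stops at the sinogram-domain step.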
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...
2016-01-01
New technological advancements in synchrotron light sources enable data acquisition at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms.
Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), and the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
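The three-stage model lends itself to a back-of-the-envelope sketch: total workflow time is transfer time plus queue wait plus compute time. Every rate and constant below is a made-up placeholder; the paper fits its model terms from measurements on real systems.

```python
# Illustrative three-stage workflow time model (all parameters hypothetical).

def transfer_time(data_gb, bandwidth_gbps):
    # Seconds to move data_gb gigabytes over a bandwidth_gbps link.
    return data_gb * 8 / bandwidth_gbps

def compute_time(n_sinograms, n_iters, secs_per_sino_iter, n_nodes):
    # Iterative reconstruction cost, assumed to scale linearly and to
    # parallelize perfectly across nodes (an idealization).
    return n_sinograms * n_iters * secs_per_sino_iter / n_nodes

def workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                  n_sinograms, n_iters, secs_per_sino_iter, n_nodes):
    return (transfer_time(data_gb, bandwidth_gbps)
            + queue_wait_s
            + compute_time(n_sinograms, n_iters, secs_per_sino_iter, n_nodes))

# Example: 50 GB dataset, 10 Gb/s link, 5 min queue wait, 2048 sinograms,
# 100 iterations, 0.2 s per sinogram-iteration, 32 nodes.
t = workflow_time(50, 10, 300, 2048, 100, 0.2, 32)
```

Comparing such estimates across candidate sites is what allows the resource selection the abstract reports.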
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
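The replicated-reconstruction-object pattern can be sketched as follows: each thread accumulates updates into a private copy of the reconstruction grid, with no locking on the hot path, and the copies are reduced into the shared grid afterwards. The per-row dummy update stands in for a real back-projection kernel; this is the pattern only, not Trace's implementation.

```python
import threading
import numpy as np

N_THREADS, GRID = 4, (64, 64)

def worker(replica, rows):
    # Each thread writes only to its private replica, so no locks are needed.
    for r in rows:
        replica[r, :] += 1.0   # stand-in for a back-projection update

replicas = [np.zeros(GRID) for _ in range(N_THREADS)]
row_chunks = np.array_split(np.arange(GRID[0]), N_THREADS)
threads = [threading.Thread(target=worker, args=(rep, rows))
           for rep, rows in zip(replicas, row_chunks)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# Reduction step: sum the replicas into the final reconstruction object.
recon = np.sum(replicas, axis=0)
```

The trade-off is memory (one replica per thread) for contention-free updates, which is why the paper's optimizations focus on how and when the replicas are reduced.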
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
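The parameter-retrieval step can be illustrated in miniature. The sketch below is a hedged simplification of the alignment idea only: detector offsets are estimated by circular cross-correlation of each measured projection against a reference profile (standing in for the re-projection of the current reconstruction), without the accompanying algebraic reconstruction loop the paper couples it to.

```python
import numpy as np

def estimate_shift(measured, reference):
    # Integer shift maximizing the circular cross-correlation, computed
    # via FFT; shifts larger than n/2 are wrapped to negative lags.
    n = len(measured)
    corr = np.fft.irfft(np.fft.rfft(measured)
                        * np.conj(np.fft.rfft(reference)), n=n)
    s = int(np.argmax(corr))
    return s if s <= n // 2 else s - n

n = 128
x = np.linspace(-1, 1, n)
reference = np.exp(-(x / 0.2)**2)        # clean projection profile
true_shifts = [5, -3, 9]                 # unknown detector offsets
measured = [np.roll(reference, s) for s in true_shifts]

est = [estimate_shift(m, reference) for m in measured]
aligned = [np.roll(m, -s) for m, s in zip(measured, est)]
```

In the full algorithm this estimation alternates with reconstruction updates, so the "reference" improves as the shifts do.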
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and virtual heights, that provide useful information about ionospheric structurecan be extracted from ionograms . Ionograms also indicate the amount of variability or disturbances in the ionosphere. With special inversion algorithms and tomographical methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies that arise from signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate parameters of any existing analytical function which defines electron density with respect to height using ionogram measurement data. The process of reconstructing electron density with respect to height is known as the ionogram scaling or true height analysis. IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF wave in the ionosphere. The algorithm models the wave propagation using ray representation under geometrical optics approximation. In the algorithm , the structural ionospheric characteristics arerepresented as realistically as possible including anisotropicity, inhomogenity and time dependence in 3-D voxel structure. 
The algorithm is also used for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of electron density reconstruction algorithm of IONOLAB group and standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation in near vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated with the IONOLAB-RAY results both for quiet anddisturbed ionospheric states in Central Europe using ionosonde stations such as Pruhonice and Juliusruh . It is observed that IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better compared to standard algorithms especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in determination of reflection height (true height) of signals and critical parameters of ionosphere. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
NASA Astrophysics Data System (ADS)
Hart, V. P.; Taylor, M. J.; Doyle, T. E.; Zhao, Y.; Pautet, P.-D.; Carruth, B. L.; Rusch, D. W.; Russell, J. M.
2018-01-01
This research presents the first application of tomographic techniques for investigating gravity wave structures in polar mesospheric clouds (PMCs) imaged by the Cloud Imaging and Particle Size instrument on the NASA AIM satellite. Albedo data comprising consecutive PMC scenes were used to tomographically reconstruct a 3-D layer using the Partially Constrained Algebraic Reconstruction Technique algorithm and a previously developed "fanning" technique. For this pilot study, a large region (760 × 148 km) of the PMC layer (altitude 83 km) was sampled with a 2 km horizontal resolution, and an intensity weighted centroid technique was developed to create novel 2-D surface maps, characterizing the individual gravity waves as well as their altitude variability. Spectral analysis of seven selected wave events observed during the Northern Hemisphere 2007 PMC season exhibited dominant horizontal wavelengths of 60-90 km, consistent with previous studies. These tomographic analyses have enabled a broad range of new investigations. For example, a clear spatial anticorrelation was observed between the PMC albedo and wave-induced altitude changes, with higher-albedo structures aligning well with wave troughs, while low-intensity regions aligned with wave crests. This result appears to be consistent with current theories of PMC development in the mesopause region. This new tomographic imaging technique also provides valuable wave amplitude information enabling further mesospheric gravity wave investigations, including quantitative analysis of their hemispheric and interannual characteristics and variations.
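The intensity-weighted centroid mapping described above reduces, per horizontal location, to a first moment of albedo over altitude: z_c = Σ(z·I)/Σ(I) along each vertical column of the reconstructed layer. A minimal sketch on a synthetic wavy layer (all grid sizes and amplitudes invented):

```python
import numpy as np

nz, nx = 40, 100
z = np.linspace(80.0, 88.0, nz)        # altitude grid, km
x = np.linspace(0.0, 200.0, nx)        # horizontal distance, km

# Synthetic layer: Gaussian in altitude whose centre undulates like a wave.
zc_true = 83.0 + 0.5 * np.sin(2 * np.pi * x / 60.0)
albedo = np.exp(-((z[:, None] - zc_true[None, :]) / 1.0)**2)

# Intensity-weighted centroid altitude for each horizontal column.
zc_est = np.sum(z[:, None] * albedo, axis=0) / np.sum(albedo, axis=0)
```

Applied over a 2-D horizontal grid, this produces exactly the kind of altitude-surface map used to read off wave-induced height variations.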
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while metals are nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization, showing that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone-beam x-ray CT and other inverse problems.
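A toy version of a Tikhonov-regularized SIRT iteration can be written in a few lines. This is a generic sketch on a small dense matrix, not the PSIRT implementation: R and C hold inverse row and column sums of the system matrix, and the regularization weight is an arbitrary choice.

```python
import numpy as np

def sirt_tikhonov(A, y, iters=800, lam=1.0, alpha=1e-3):
    # x <- x + lam * ( C A^T R (y - A x) - alpha * x )
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += lam * (C * (A.T @ (R * (y - A @ x))) - alpha * x)
    return x

# Toy overdetermined system with noiseless data.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_hat = sirt_tikhonov(A, y)
```

The alpha term biases the fixed point slightly away from the least-squares solution in exchange for damping noise amplification, which is the usual Tikhonov trade-off.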
Tomographic capabilities of the new GEM based SXR diagnostic of WEST
NASA Astrophysics Data System (ADS)
Jardin, A.; Mazon, D.; O'Mullane, M.; Mlynar, J.; Loffelmann, V.; Imrisek, M.; Chernyshova, M.; Czarski, T.; Kasprowicz, G.; Wojenski, A.; Bourdelle, C.; Malard, P.
2016-07-01
The tokamak WEST (Tungsten Environment in Steady-State Tokamak) will start operating by the end of 2016 as a test bed for the ITER divertor components in long pulse operation. In this context, radiative cooling by heavy impurities like tungsten (W) in the soft X-ray (SXR) range [0.1 keV; 20 keV] is a critical issue for plasma core performance. Reliable tools are therefore required to monitor the local impurity density and avoid W accumulation. The WEST SXR diagnostic will be equipped with two new GEM (Gas Electron Multiplier) based poloidal cameras allowing 2D tomographic reconstructions to be performed in tunable energy bands. In this paper, the tomographic capabilities of the Minimum Fisher Information (MFI) algorithm developed for Tore Supra and upgraded for WEST are investigated, in particular through a set of emissivity phantoms and the standard WEST scenario, including reconstruction errors, the influence of noise, and computational time.
Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling
NASA Astrophysics Data System (ADS)
Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen
2010-04-01
OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomography expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build of a prototype I-OP-FTIR instrument and detail its system characterization and testing. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.
Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling
NASA Astrophysics Data System (ADS)
Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen
2008-04-01
OPTRA and University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.
RF tomography of metallic objects in free space: preliminary results
NASA Astrophysics Data System (ADS)
Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher
2015-05-01
RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and the scattering points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not available through the matched filter and truncated SVD algorithms.
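The truncated-SVD baseline compared above can be sketched for a generic linear scattering model y = A x + noise: keep only the k largest singular values when inverting, discarding directions dominated by noise. The forward matrix here is a random stand-in, not the actual propagation model.

```python
import numpy as np

def truncated_svd_solve(A, y, k):
    # Pseudo-inverse solve keeping only the k largest singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))      # stand-in forward model
x_true = rng.standard_normal(20)
y = A @ x_true                          # noiseless data for the sketch

x_full = truncated_svd_solve(A, y, k=20)   # full rank: exact recovery
```

With noisy data one would choose k < rank(A), trading a reconstruction bias for suppression of the noise-amplifying small singular values.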
Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction
Agulleiro, Jose-Ignacio; Fernández, José Jesús
2012-01-01
Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
Three-dimensional propagation in near-field tomographic X-ray phase retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhlandt, Aike, E-mail: aruhlan@gwdg.de; Salditt, Tim
This paper presents an extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. The approach is based on a novel three-dimensional propagator and is derived for the case of optically weak objects. It can be easily implemented in current phase retrieval architectures, is computationally efficient, and reduces the need for restrictive prior assumptions, resulting in superior reconstruction quality.
Tomographic inversion of satellite photometry
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Hays, P. B.; Abreu, V. J.
1984-01-01
An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.
CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models
NASA Astrophysics Data System (ADS)
Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli
2011-02-01
Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). We demonstrate volumetric tomographic reconstructions from the tissue phantom; the approach has great potential for determining and monitoring the effect of anti-stroke therapies.
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
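The abstract does not specify the service's self-developed iterative algorithm, so as a generic illustration of the iterative family it contrasts with FBP, a minimal maximum-likelihood expectation-maximization (MLEM) update for emission data might look as follows; the system matrix `A` and measured counts `y` are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Generic MLEM for emission tomography (not the paper's algorithm).

    A : (n_meas, n_vox) system matrix, y : measured counts.
    Each iteration multiplies the estimate by the backprojected
    measured/predicted ratio, normalized by the voxel sensitivity.
    """
    x = np.ones(A.shape[1])              # uniform, strictly positive start
    sens = A.sum(axis=0)                 # per-voxel sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

With consistent, noise-free data the iterates converge to the true activity; with real counts, iteration number trades resolution against noise, which is one reason iterative methods cost more time than FBP.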
NASA Astrophysics Data System (ADS)
Garay, Michael J.; Davis, Anthony B.; Diner, David J.
2016-12-01
We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
Intensity-enhanced MART for tomographic PIV
NASA Astrophysics Data System (ADS)
Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun
2016-05-01
A novel technique to shrink elongated particles and suppress ghost particles in the particle reconstruction of tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity-suppressing factor to improve the quality of particle reconstruction and consequently the precision of the velocimetry. A numerical assessment of vortex ring motion, with and without image noise, is performed to evaluate the new algorithm in terms of reconstruction quality, particle elongation, and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial-filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate ghost from actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the applicability of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
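IntE-MART builds on the standard multiplicative algebraic reconstruction technique (MART). As background, a minimal sketch of plain MART (not the intensity-enhanced variant, and not the authors' code) is given below; `A` holds hypothetical ray-voxel weights and `y` the recorded pixel intensities:

```python
import numpy as np

def mart(A, y, n_iter=10, mu=1.0):
    """Plain multiplicative ART sketch.

    A : (n_rays, n_vox) nonnegative weight matrix, y : pixel intensities.
    Voxels stay nonnegative because every update is multiplicative, and a
    zero pixel zeroes every voxel on its ray -- one reason MART-family
    methods can suppress ghost intensity.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):            # one ray (pixel) at a time
            ai = A[i]
            proj = ai @ x                      # current projection of ray i
            if y[i] == 0:
                x[ai > 0] = 0.0                # zero pixel zeroes its ray
            elif proj > 0:
                x *= (y[i] / proj) ** (mu * ai)  # multiplicative correction
    return x
```

The relaxation exponent `mu` plays the same damping role as in the MART literature; IntE-MART adds its inverse diffusion and intensity-suppression steps on top of updates of this form.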
Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums
NASA Astrophysics Data System (ADS)
Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei
2016-12-01
Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low and intermediate level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by both experiments and simulations for 208-liter standard waste drums containing three types of nuclides. Cases of single or multiple point sources and uniform or nonuniform materials are employed for comparison. The results show that STGS exhibits a large improvement in detection performance; the reconstruction error and statistical bias are reduced by one quarter to one third or less for most cases compared with SGS.
Zhou, C.; Liu, L.; Lane, J.W.
2001-01-01
A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find an effective, optimal smoothness criterion for applying Tikhonov regularization in nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.
BPF-type region-of-interest reconstruction for parallel translational computed tomography.
Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin
2017-01-01
The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture is named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT data. However, ROI images reconstructed from truncated projections suffer severe truncation artefacts. To overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multiple linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.
Volume Segmentation and Ghost Particles
NASA Astrophysics Data System (ADS)
Ziskin, Isaac; Adrian, Ronald
2011-11-01
Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images of a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time, and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.
Three-dimensional Image Reconstruction in J-PET Using Filtered Back-projection Method
NASA Astrophysics Data System (ADS)
Shopa, R. Y.; Klimaszewski, K.; Kowalski, P.; Krzemień, W.; Raczyński, L.; Wiślicki, W.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
We present a method and preliminary results of the image reconstruction in the Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic Emission), interactions of the 511 keV photons with a cylindrical detector were generated. Pairs of such photons, flying back-to-back, originate from e+e- annihilations inside a 1-mm spherical source. Spatial and temporal coordinates of hits were smeared using experimental resolutions of the detector. We incorporated the algorithm of the 3D Filtered Back Projection, implemented in the STIR and TomoPy software packages, which differ in approximation methods. Consistent results for the Point Spread Functions of ~5/7 mm and ~9/20 mm were obtained using STIR, for transverse and longitudinal directions, respectively, with no time-of-flight information included.
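As an illustration of the filtered back-projection idea used above, here is a toy parallel-beam sketch; it is not the STIR or TomoPy implementation, and it uses nearest-neighbour interpolation with no detector offset or time-of-flight handling:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply a ramp (Ram-Lak) filter along the detector axis in Fourier space."""
    freqs = np.fft.fftfreq(sinogram.shape[1])
    spec = np.fft.fft(sinogram, axis=1) * np.abs(freqs)
    return np.real(np.fft.ifft(spec, axis=1))

def fbp(sinogram, angles_deg, size):
    """Filtered back-projection of a parallel-beam sinogram (n_angles, n_det).

    Each view's filtered profile is smeared back across the image along
    its projection direction; summing over views recovers the object.
    """
    filtered = ramp_filter(sinogram)
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    x, y = xx - c, yy - c
    for sino_row, ang in zip(filtered, np.deg2rad(angles_deg)):
        # nearest detector bin hit by each pixel for this view
        t = np.round(x * np.cos(ang) + y * np.sin(ang) + c).astype(int)
        valid = (t >= 0) & (t < sinogram.shape[1])
        recon[valid] += sino_row[t[valid]]
    return recon * np.pi / len(angles_deg)
```

Backprojecting the sinogram of a centered point source, for example, produces a reconstruction peaked at the image center.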
The Ettention software package.
Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp
2016-02-01
We present a novel software package for the problem of "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily, which makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.
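The Kaczmarz iteration that Ettention's block-iterative method builds on can be sketched in a few lines; this is a textbook version for a hypothetical system matrix `A`, not Ettention's optimized GPU implementation:

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=50, relax=1.0):
    """Kaczmarz's method (ART): cyclically project the estimate onto
    each measurement hyperplane a_i . x = y_i.

    A : (n_meas, n_vox) system matrix, y : measurements,
    relax : relaxation factor in (0, 2) for convergence.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, yi in zip(A, y):
            nrm = ai @ ai
            if nrm > 0:
                # move x toward the hyperplane by the scaled residual
                x += relax * (yi - ai @ x) / nrm * ai
    return x
```

Block-iterative variants, as in Ettention, apply the same correction to groups of rows at once, which maps well onto GPUs.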
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh that approximates the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. According to experiments, the method proves to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has recently been a lot of work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
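For reference, the isotropic TV functional at the heart of the distinction above can be written in a few lines; `eps` is the smoothing parameter that gradient-based methods typically add, whereas this paper's interior-point approach works with the exact functional (`eps = 0`). The boundary handling here is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def total_variation(img, eps=0.0):
    """Isotropic total variation of a 2-D image.

    TV(u) = sum over pixels of sqrt(dx^2 + dy^2 + eps^2), with forward
    differences and replicated (Neumann-like) boundaries. eps > 0 gives
    the smoothed, differentiable approximation.
    """
    dx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:, :])  # vertical differences
    return np.sum(np.sqrt(dx**2 + dy**2 + eps**2))
```

A constant image has zero TV, while each unit-height edge pixel contributes one unit, which is why TV priors favor piecewise-constant reconstructions.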
Tomographic diagnostic of the hydrogen beam from a negative ion source
NASA Astrophysics Data System (ADS)
Agostini, M.; Brombin, M.; Serianni, G.; Pasqualotto, R.
2011-10-01
In this paper the tomographic diagnostic developed to characterize the 2D density distribution of a particle beam from a negative ion source is described. In particular, the reliability of this diagnostic has been tested by considering the geometry of the source for the production of ions of deuterium extracted from an rf plasma (SPIDER). SPIDER is a low energy prototype negative ion source for the International Thermonuclear Experimental Reactor (ITER) neutral beam injector, aimed at demonstrating the capability to create and extract a current of D- (H-) ions up to 50 A (60 A), accelerated to 100 kV. The ions are extracted over a wide surface (1.52 × 0.56 m²) with a uniform plasma density which is prescribed to remain within 10% of the mean value. The main target of the tomographic diagnostic is the measurement of the beam uniformity, with sufficient spatial resolution, and of its evolution throughout the pulse duration. To reach this target, a tomographic algorithm based on the simultaneous algebraic reconstruction technique (SART) is developed and the geometry of the lines of sight is optimized so as to cover the whole area of the beam. Phantoms that reproduce different experimental beam configurations are simulated and reconstructed, and the role of noise in the signals is studied. The simulated phantoms are correctly reconstructed and their two-dimensional spatial nonuniformity is correctly estimated, up to a noise level of 10% with respect to the signal.
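A minimal sketch of the simultaneous algebraic reconstruction technique (SART) on which the diagnostic's algorithm is based is shown below; this is a generic textbook version with a hypothetical line-of-sight weight matrix `A`, not the SPIDER diagnostic code:

```python
import numpy as np

def sart(A, y, n_iter=100, relax=1.0):
    """SART: all rays contribute to each update simultaneously.

    A : (n_rays, n_vox) nonnegative line-of-sight weight matrix,
    y : integrated signals. Residuals are normalized by the row sums
    (per-ray weight totals) and column sums (per-voxel weight totals).
    """
    x = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1)
    col_sum = A.sum(axis=0)
    for _ in range(n_iter):
        resid = np.where(row_sum > 0, (y - A @ x) / row_sum, 0.0)
        x += relax * (A.T @ resid) / np.maximum(col_sum, 1e-12)
    return x
```

Updating with all rays at once makes SART less sensitive to noise in individual signals than ray-by-ray ART, which matters when the line-of-sight coverage is sparse, as in a beam diagnostic.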
NASA Astrophysics Data System (ADS)
Leonard, Kevin Raymond
This dissertation concentrates on the development of two new tomographic techniques that enable wide-area inspection of pipe-like structures. By envisioning a pipe as a plate wrapped around upon itself, previous Lamb Wave Tomography (LWT) techniques are adapted to cylindrical structures. Helical Ultrasound Tomography (HUT) uses Lamb-like guided wave modes transmitted and received by two circumferential arrays in a single crosshole geometry. Meridional Ultrasound Tomography (MUT) creates the same crosshole geometry with a linear array of transducers along the axis of the cylinder. However, even though these new scanning geometries are similar to those of plates, additional complexities arise because the structures are cylindrical. First, because it is a single crosshole geometry, the wave vector coverage is poorer than in the full LWT system. Second, since waves can travel in both directions around the circumference of the pipe, modes can constructively and destructively interfere with each other. These complexities necessitate improved signal processing algorithms to produce accurate and unambiguous tomographic reconstructions. Consequently, this work also describes a new algorithm for improving the extraction of multi-mode arrivals from guided wave signals. Previous work has relied solely on the first arriving mode for the time-of-flight measurements. In order to improve the LWT, HUT, and MUT system reconstructions, improved signal processing methods are needed to extract information about the arrival times of the later arriving modes. Because each mode has different through-thickness displacement values, the modes are sensitive to different types of flaws, and the information gained from multi-mode analysis improves understanding of the structural integrity of the inspected material. Both tomographic frequency compounding and mode sorting algorithms are introduced. It is also shown that each of these methods improves the reconstructed images both qualitatively and quantitatively.
Direct integration of the inverse Radon equation for X-ray computed tomography.
Libin, E E; Chakhlov, S V; Trinca, D
2016-11-22
A new mathematical approach using the inverse Radon equation for the restoration of images in linear two-dimensional X-ray tomography is formulated. In this approach, the Fourier transform is not used, which makes it possible to create practical computational algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.
Varma, Hari M.; Valdes, Claudia P.; Kristoffersen, Anna K.; Culver, Joseph P.; Durduran, Turgut
2014-01-01
A novel tomographic method based on laser speckle contrast, speckle contrast optical tomography (SCOT), is introduced that allows us to reconstruct the three-dimensional distribution of blood flow in deep tissue. This method is analogous to diffuse optical tomography (DOT) but for deep tissue blood flow. We develop a reconstruction algorithm based on the first Born approximation to generate three-dimensional distributions of flow using experimental data obtained from tissue-simulating phantoms. PMID:24761306
NASA Astrophysics Data System (ADS)
Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.
2003-01-01
A novel tomographic gantry was designed, built, and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near the chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13 × 13 cm² with arrays of 2 × 2 × 6 mm³ quantized NaI(Tl) scintillators coupled to position-sensitive PMTs. The camera was mounted on a radially oriented support with a 6 cm variable radius-of-rotation. This unit was further mounted on a goniometric cradle providing polar motion, and in turn mounted on an azimuthal rotation stage capable of indefinite rotation about the central vertical rotation axis (RA). Initial measurements with isotopic Tc-99m (140 keV) to evaluate the system included acquisitions at various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ~950 ml hemi-ellipsoidal (uncompressed) breast phantom, attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19, were compared. Various photopeak energy windows with 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window, from the list mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrast increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction.
The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs, with or without scatter correction, that were >9 times better than those of the planar scans. There was up to a factor of ~2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with a more direct line-of-sight of myocardial and liver activity at increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
PELT, DANIEL
2017-04-21
Small Python package to compute tomographic reconstructions using a reconstruction method published in: Pelt, D.M., & De Andrade, V. (2017). Improved tomographic reconstruction of large-scale real-world data by filter optimization. Advanced Structural and Chemical Imaging 2: 17; and Pelt, D. M., & Batenburg, K. J. (2015). Accurately approximating algebraic tomographic reconstruction by filtered backprojection. In Proceedings of The 13th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (pp. 158-161).
Image-guided filtering for improving photoacoustic tomographic image reconstruction.
Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K
2018-06-01
Several algorithms exist to solve the photoacoustic image reconstruction problem depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as smoothness or sharpness, in the output image. Combining these features using a guided filtering approach, which requires an input image and a guiding image, was attempted in this work. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image to improve these results. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve (by as much as 11.23 dB) the signal-to-noise ratio of the reconstructed images, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
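The guided filtering step can be sketched with the classic box-filter formulation of the guided image filter; this is a generic sketch (window radius `r` and regularizer `eps` are illustrative choices), not the authors' exact post-processing pipeline:

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window via 2-D cumulative sums (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    n = 2 * r + 1
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / n**2

def guided_filter(guide, src, r=2, eps=1e-3):
    """Guided image filter: the output is locally a linear function of the
    guide, so edges of the guide are transferred into the filtered src."""
    mean_g, mean_s = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mean_g * mean_s    # local covariance
    var = box(guide * guide, r) - mean_g**2        # local guide variance
    a = cov / (var + eps)                          # per-window linear gain
    b = mean_s - a * mean_g                        # per-window offset
    return box(a, r) * guide + box(b, r)
```

In the paper's setting the linear-backprojection result would play the role of `guide` and a Tikhonov or TV reconstruction the role of `src`, letting the sharp and smooth features combine.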
A Comparison of 3D3C Velocity Measurement Techniques
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2013-11-01
The velocity measurement fidelity of several 3D3C PIV measurement techniques, including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV, is compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one another and to the true velocity field using several metrics.
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time-varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquires data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal-to-noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters, such as the view sampling strategy, while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal-to-noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is the much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application to dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing image quality, e.g. by reducing the exposure time or the number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithms were performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal-to-noise ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Wang, W; Tang, X
2014-06-15
Purpose: With its major benefit in dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial cone-beam (CB) scans, we previously proposed integrating the DBPF algorithm with 3-D weighting. In this work, we further propose incorporating Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, respectively. In addition, Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data, with an angular range as small as 270°, are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial scan scenarios, though the 3-D weighting scheme may have to be dropped because sufficient projection data are not available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.
Fast tomographic methods for the tokamak ISTTOK
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.
2008-04-01
The achievement of long-duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. Plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion because of the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion is faster, the neural network reconstruction has fewer artifacts and is more accurate.
Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling
NASA Astrophysics Data System (ADS)
Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen
2009-05-01
OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.
Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.
2006-01-01
A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as the head and spine, in extremity imaging, and in CT angiography. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans each corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of the 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm, providing superior noise characteristics or dose efficiency at low helical pitches, may find extensive application in medical CT imaging. PMID:23165031
NASA Astrophysics Data System (ADS)
Rumyantseva, O. D.; Shurup, A. S.
2017-01-01
The paper considers the derivation of the wave equation and Helmholtz equation for solving the tomographic problem of reconstructing combined scalar-vector inhomogeneities that describe perturbations of the sound velocity and absorption, the vector field of flows, and perturbations of the density of the medium. Restrictive conditions under which the obtained equations are meaningful are analyzed. Results of numerical simulation of the two-dimensional functional-analytical Novikov-Agaltsov algorithm for reconstructing the flow velocity using the obtained Helmholtz equation are presented.
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
Tensor-based Dictionary Learning for Dynamic Tomographic Reconstruction
Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong
2015-01-01
In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples, including a sheep lung perfusion study and dynamic mouse cardiac imaging, demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. PMID:25779991
NASA Astrophysics Data System (ADS)
Lu, J.; Wakai, K.; Takahashi, S.; Shimizu, S.
2000-06-01
An algorithm that takes into account the refraction of sound wave paths in acoustic computed tomography (CT) is developed. Incorporating refraction into ordinary CT algorithms, which are based on the Fourier transform, is very difficult. In this paper, the least-squares method, which is capable of accounting for the refraction effect, is employed to reconstruct the two-dimensional temperature distribution. The refracted paths are obtained by solving a set of differential equations derived from Fermat's principle and the calculus of variations. Because the refraction analysis and the reconstruction of the temperature distribution cannot be carried out simultaneously, the problem is solved by iteration. The measurement field is assumed to be circular, and 16 speakers, also serving as receivers, are set around it at equal intervals. The algorithm is checked through computer simulation with various temperature distributions. It is shown that the present method, which accounts for refraction, can reconstruct temperature distributions with much greater accuracy than methods that do not.
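The least-squares step at the core of this approach can be sketched as follows. This is a minimal illustration with straight (unrefracted) rays and a hypothetical two-cell field; the method described above alternates this solve with re-tracing the refracted ray paths.

```python
import numpy as np

def reconstruct_slowness(L, t):
    """One least-squares step: travel times t = L @ s, where L[i, j] is the
    path length of ray i in cell j and s is the slowness (1/speed) per cell,
    from which temperature can be inferred. In the full method this solve
    alternates with re-tracing refracted paths; here paths are held fixed."""
    s, *_ = np.linalg.lstsq(L, t, rcond=None)
    return s

# hypothetical 2-cell field crossed by 3 straight rays (path lengths in m)
L = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])
s_true = np.array([1 / 340.0, 1 / 350.0])   # slowness of air at two temperatures
s_rec = reconstruct_slowness(L, L @ s_true)
```

With consistent, full-rank data the solve recovers the slowness field exactly; with noisy travel times it returns the least-squares fit.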
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented on grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of the origins of detected events in three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a 10 000 000 event acquisition of the simulated objects ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to the representation and reconstruction of data obtained by photon-limited emission tomography measurements.
Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for X-ray imaging of single protein molecules using new X-ray sources, such as X-ray free electron lasers (XFELs), require processing many frames of data obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and the short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, far too low to be processed using standard methods. One approach is to use the statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009), which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing X-ray transmission measurements of a low-contrast, randomly oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single-molecule reconstruction problem.
Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction
Gregor, Jens; Fessler, Jeffrey A.
2015-01-01
Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
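The preconditioned-gradient-descent view of SIRT described above can be sketched in a few lines. This is a common textbook form with inverse row- and column-sum normalization, not the paper's exact modified or relaxed variant, and the toy system matrix is purely illustrative.

```python
import numpy as np

def sirt(A, b, n_iter=500):
    """SIRT as preconditioned gradient descent on a weighted least-squares
    objective: x <- x + C A^T R (b - A x), with R and C the inverse row and
    column sums of A (a standard form; relaxed/modified variants differ)."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# toy consistent system: 3 "rays" through a 2-pixel object
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = sirt(A, A @ x_true)
```

For consistent data the iterates converge to the exact solution; the normalization matrices act as the preconditioner that makes SIRT and SQS-type updates directly comparable.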
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mlynar, J.; Weinzettl, V.; Imrisek, M.
2012-10-15
The contribution focuses on plasma tomography via the minimum Fisher regularisation (MFR) algorithm applied to data from the recently commissioned tomographic diagnostics on the COMPASS tokamak. The MFR expertise is based on previous applications at the Joint European Torus (JET), as exemplified in a new case study of plasma position analysis based on JET soft x-ray (SXR) tomographic reconstruction. Subsequent application of the MFR algorithm to COMPASS data from cameras with absolute extreme ultraviolet (AXUV) photodiodes disclosed a peaked radiating region near the limiter. Moreover, its time evolution indicates transient plasma edge cooling following a radial plasma shift. In the SXR data, MFR demonstrated that high-resolution plasma positioning independent of the magnetic diagnostics would be possible provided that a proper calibration of the cameras on an x-ray source is undertaken.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
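The maximum likelihood expectation maximization (MLEM) update used above can be sketched on a grid basis. This is the standard grid-based form; the evaluation described above applies the same algorithm with a system matrix built from tetrahedral-mesh basis functions, and the toy system matrix here is purely illustrative.

```python
import numpy as np

def mlem(A, b, n_iter=200):
    """MLEM for emission data: x_j <- (x_j / s_j) * sum_i a_ij * b_i/(Ax)_i,
    where s_j = sum_i a_ij is the sensitivity. Multiplicative updates keep
    the estimate non-negative for non-negative initial values and data."""
    sens = np.maximum(A.sum(axis=0), 1e-12)      # sensitivity image
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)     # measured / estimated projections
        x = x / sens * (A.T @ ratio)
    return x

# toy consistent system: 3 detector bins, 2 voxels
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = mlem(A, A @ x_true)
```

The same update applies unchanged to mesh-based reconstructions: only the entries of A (the overlap of each basis function with each measurement) change.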
Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.
Jun, Kyungtaek; Yoon, Seokhwan
2017-01-25
Since X-ray tomography is now widely adopted in many different areas, it has become more crucial to find a robust routine for handling tomographic data to obtain better-quality reconstructions. Although several techniques exist, a more automated method for removing the errors that hinder clear image reconstruction would be helpful. Here, we propose an alternative method and a new algorithm using the sinogram and a fixed point. The physical concept of the Center of Attenuation (CA) is also introduced to show how this fixed point applies to the reconstruction of images having the errors we categorize in this article. Our technique shows promising performance in restoring images with translation and vertical tilt errors.
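One way the center-of-attenuation idea can be used is sketched below: the per-projection center of mass (the CA projected onto the detector) traces a sinusoid around the rotation axis over a full turn, so its mean locates the axis. This is a simplified illustration of the fixed-point idea on a synthetic point-object sinogram, not the paper's full alignment algorithm.

```python
import numpy as np

def axis_offset_from_sinogram(sino):
    """Estimate the horizontal rotation-axis offset from a 0-360 degree
    sinogram: the center of mass of each projection traces a sinusoid
    around the axis, so its mean over a full turn gives the axis position
    relative to the detector center."""
    cols = np.arange(sino.shape[1])
    mass = np.maximum(sino.sum(axis=1), 1e-12)
    com = (sino * cols).sum(axis=1) / mass
    return com.mean() - (sino.shape[1] - 1) / 2.0

# synthetic sinogram of a single off-axis point, axis shifted by 3 pixels
W, shift, r = 101, 3.0, 10.0
thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
sino = np.zeros((len(thetas), W))
center = (W - 1) / 2.0
for k, th in enumerate(thetas):
    u = center + shift + r * np.cos(th)      # detector position of the point
    i, f = int(np.floor(u)), u - np.floor(u)
    sino[k, i] += 1 - f                      # linear interpolation preserves
    sino[k, i + 1] += f                      # the first moment exactly

offset = axis_offset_from_sinogram(sino)
```

Because the cosine term averages to zero over a full rotation, the estimate recovers the 3-pixel shift; real data would add attenuation noise and tilt terms on top of this.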
Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.
2008-01-01
We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
Stress wave velocity patterns in the longitudinal-radial plane of trees for defect diagnosis
Guanghui Li; Xiang Weng; Xiaocheng Du; Xiping Wang; Hailin Feng
2016-01-01
Acoustic tomography for urban tree inspection typically uses stress wave data to reconstruct tomographic images of the trunk cross section using an interpolation algorithm. This traditional technique does not take into account the stress wave velocity patterns along tree height. In this study, we proposed an analytical model for the wave velocity in the longitudinal…
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed that creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measurement capability of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on evaluating and selecting appropriate reconstruction algorithms for use in the field, using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: the Algebraic Reconstruction Technique without Weights (ART1), the Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM), and the Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among them.
A comprehensive evaluation of algorithms for the environmental application of tomography requires a battery of test concentration data, which models reality and tests the limits of the algorithms, before field implementation.
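Of the iterative algorithms compared above, MART has the simplest multiplicative form and can be sketched as follows. This is a generic row-action MART on a toy system, assuming strictly positive concentrations and path data; the field algorithms operate on open-path integrals over a 2D grid in the same way.

```python
import numpy as np

def mart(A, b, n_sweeps=50, lam=1.0):
    """Multiplicative ART: each measurement i rescales the cells it
    intersects by (b_i / (Ax)_i) ** (lam * a_ij). Assumes positive data,
    as with the path-integrated concentrations described above."""
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            est = A[i] @ x
            if est > 0:
                x = x * (b[i] / est) ** (lam * A[i])
    return x

# toy network: 3 "open paths" over a 2-cell concentration field
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = mart(A, A @ x_true)
```

The multiplicative update keeps concentrations non-negative by construction, which is one reason MART and MLEM tend to behave well on sparse plume maps.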
An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV
NASA Astrophysics Data System (ADS)
Lynch, K. P.; Scarano, F.
2015-03-01
The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. 
The increased value of the velocity space-time correlation coefficient demonstrates the increased velocity field accuracy of SMTE compared with MART.
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely the Iterative Least Squares Technique (ILST) and the Simultaneous Iterative Reconstruction Technique (SIRT), were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used to form the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation; for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure is a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the iterative formula of the BI-MART, with the scaling parameter playing the role of the time step of the numerical discretization. The present paper is the first to reveal that an iterative image reconstruction algorithm of this kind can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
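The geometric-Euler construction can be sketched numerically. The snippet below discretizes a continuous system of the form dx_j/dt = x_j Σ_i a_ij log(b_i/(Ax)_i) with a multiplicative (geometric) Euler step, which yields a simultaneous MART-type update with the step size h acting as the scaling parameter. This is a sketch of the general idea on a toy problem, not the paper's exact block-iterative formulation.

```python
import numpy as np

def geometric_euler_mart(A, b, h=0.5, n_steps=300):
    """Multiplicative Euler discretization of a continuous MART analog:
    log x <- log x + h * A^T log(b / (Ax)), i.e. one geometric Euler step
    x_j <- x_j * prod_i (b_i/(Ax)_i)**(h * a_ij). For consistent positive
    data, a KL-divergence Lyapunov argument guarantees convergence."""
    x = np.ones(A.shape[1])
    for _ in range(n_steps):
        ratio = b / np.maximum(A @ x, 1e-12)
        x = x * np.exp(h * (A.T @ np.log(ratio)))
    return x

# toy consistent system with positive data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = geometric_euler_mart(A, A @ x_true)
```

Shrinking h corresponds to following the continuous trajectory more closely; h itself plays exactly the role of the MART relaxation/scaling parameter described above.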
Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abraham, I., E-mail: isabelle.abraham@cea.fr; Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr
2017-02-15
In this article, we focus on tomographic reconstruction. The problem is to determine the shape of an interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better-quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better-quality pictures than the PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute-intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm, subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs.
We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have also executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
Meaning of Interior Tomography
Wang, Ge; Yu, Hengyong
2013-01-01
The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation that is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it has been elevated as a general imaging principle. Finally, a novel framework known as “omni-tomography” is being developed for grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256
Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.
2016-01-15
Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and to examine its properties. Furthermore, this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with a continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the mutual influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron, and the results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. It is further illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography
Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan
2014-01-01
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden on IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824
ODTbrain: a Python library for full-view, dense diffraction tomography.
Müller, Paul; Schürmann, Mirjam; Guck, Jochen
2015-11-04
Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there has so far been no publicly available implementation of diffraction tomography. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution and give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE Monte Carlo package, and images were reconstructed with the STIR tomographic image reconstruction software. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed by using fixed subsets and various iterations, as well as by using various beta (hyper) parameter values. MTF values were found to increase with increasing iterations. MTF also improves by using lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
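The MTF estimation step described above can be sketched as the normalized Fourier magnitude of a line spread function (LSF). This is a generic illustration; the grid spacing, LSF width, and function names are assumptions, not values from the study:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized Fourier magnitude of a line spread function."""
    lsf = lsf / lsf.sum()                         # forces MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # cycles per mm
    return freq, mtf

# Synthetic Gaussian LSF on a 0.5 mm grid (widths are assumptions).
x = (np.arange(64) - 32) * 0.5
lsf = np.exp(-x**2 / (2 * 2.0**2))
freq, mtf = mtf_from_lsf(lsf, 0.5)
```

For a Gaussian LSF, the resulting MTF decays smoothly with spatial frequency; narrower LSFs (better resolution, e.g. more iterations or lower beta) give slower decay.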
NASA Astrophysics Data System (ADS)
Staib, Michael; Bhopatkar, Vallary; Bittner, William; Hohlmann, Marcus; Locke, Judson; Twigger, Jessie; Gnanvo, Kondo
2012-03-01
Muon tomography for homeland security aims at detecting well-shielded nuclear contraband in cargo and imaging it in 3D. The technique exploits multiple scattering of atmospheric cosmic ray muons, which is stronger in dense, high-Z materials, e.g. enriched uranium, than in low-Z and medium-Z shielding materials. We have constructed and are operating a compact Muon Tomography Station (MTS) that tracks muons with eight 30 cm x 30 cm Triple Gas Electron Multiplier (GEM) detectors placed on the sides of a cubic-foot imaging volume. A point-of-closest-approach algorithm applied to reconstructed incident and exiting tracks is used to create a tomographic reconstruction of the material within the active volume. We discuss the performance of this MTS prototype including characterization and commissioning of the GEM detectors and the data acquisition systems. We also present experimental tomographic images of small high-Z objects including depleted uranium with and without shielding and discuss the performance of material discrimination using this method.
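The point-of-closest-approach step mentioned above reduces to finding the closest points of two 3D lines (the incident and exiting muon tracks). A minimal sketch with the standard closest-segment formula (the toy geometry and function names are assumptions, not from the paper):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach (POCA) of two straight tracks in 3D.

    p1, p2: points on the incident and exiting tracks; d1, d2: directions.
    Returns the midpoint of the shortest segment joining the two lines,
    or None for (near-)parallel tracks.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None                      # parallel: no unique POCA
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Two toy tracks that cross at the origin (geometry is an assumption).
pt = poca(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
          np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
```

Binning many such POCA points, weighted by scattering angle, yields the tomographic density image of the interrogated volume.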
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation step before microstructural quantification can be conducted. This can be quite time-consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability of an arbitrary point in the material system belonging to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both x-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. The multi-modal reconstruction algorithm is shown to integrate the complementary data effectively, indicating its high efficiency in using limited structural information.
Finally, the accuracy of the stochastic reconstruction procedure using limited x-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for a given microstructure and number of projections.
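A standard way to compute the two-point correlation function S2 that the above procedure extracts from a probability (or indicator) map is FFT-based autocorrelation with periodic boundaries. A minimal sketch (the random test microstructure and volume fraction are assumptions, not data from the thesis):

```python
import numpy as np

def two_point_correlation(phase_map):
    """Two-point correlation S2(r) of a phase indicator (or probability) map,
    computed with periodic boundaries via the FFT autocorrelation theorem."""
    F = np.fft.fftn(phase_map)
    return np.real(np.fft.ifftn(F * np.conj(F))) / phase_map.size

# Random binary microstructure at ~30% volume fraction (an assumption).
img = (np.random.default_rng(0).random((64, 64)) < 0.3).astype(float)
s2 = two_point_correlation(img)
# s2[0, 0] equals the volume fraction of the phase; for an uncorrelated
# medium, S2 decays to the square of the volume fraction.
```

The same routine applied to a probability map instead of a hard indicator gives the correlation functions of the probabilistic description discussed above.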
Plenoptic projection fluorescence tomography.
Iglesias, Ignacio; Ripoll, Jorge
2014-09-22
A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data, which are reconstructed by a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown, and the implications of this approach for fast 3D imaging are discussed.
NASA Astrophysics Data System (ADS)
Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.
2017-08-01
The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma facing components can pollute the plasma core, where they radiate strongly in the soft x-ray (SXR) range; the resulting radiation cooling is detrimental to energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction using a recently developed synthetic diagnostic coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.
NASA Astrophysics Data System (ADS)
Burov, V. A.; Zotov, D. I.; Rumyantseva, O. D.
2014-07-01
A two-step algorithm is used to reconstruct the spatial distributions of the acoustic characteristics of soft biological tissues: the sound velocity and the absorption coefficient. Knowledge of these distributions is important for early detection of benign and malignant neoplasms in biological tissues, primarily in the breast. At the first step, large-scale distributions are estimated; at the second step, they are refined with high resolution. Results of reconstruction based on modeled initial data are presented. The necessity of first reconstructing the large-scale distributions and then taking them into account at the second step is illustrated. The use of CUDA technology for processing makes it possible to obtain final images of 1024 × 1024 samples in only a few minutes.
Low-frequency noise effect on terahertz tomography using thermal detectors.
Guillet, J P; Recur, B; Balacey, H; Bou Sleiman, J; Darracq, F; Lewis, D; Mounaix, P
2015-08-01
In this paper, the impact of low-frequency noise on terahertz computed tomography (THz-CT) is analyzed for several measurement configurations and pyroelectric detectors. First, we acquire real noise data from a continuous millimeter-wave tomographic scanner in order to assess its impact on reconstructed images. Second, noise characteristics are quantified according to two distinct acquisition methods by (i) extrapolating from experimental acquisitions a sinogram for different noise backgrounds and (ii) reconstructing the corresponding spatial distributions in a slice using a CT reconstruction algorithm. We then describe the low-frequency noise fingerprint and its influence on reconstructed images. From these observations, we demonstrate that some experimental choices can dramatically affect the 3D rendering of reconstructions. Thus, we propose experimental methodologies that optimize the resulting quality and accuracy of the 3D reconstructions with respect to the low-frequency noise characteristics observed during acquisitions.
Image processing pipeline for synchrotron-radiation-based tomographic microscopy.
Hintermüller, C; Marone, F; Isenegger, A; Stampanoni, M
2010-07-01
With synchrotron-radiation-based tomographic microscopy, three-dimensional structures down to the micrometer level can be visualized. Tomographic data sets typically consist of 1000 to 1500 projections of 1024 x 1024 to 2048 x 2048 pixels and are acquired in 5-15 min. A processing pipeline has been developed to handle this large amount of data efficiently and to reconstruct the tomographic volume within a few minutes after the end of a scan. Just a few seconds after the raw data have been acquired, a selection of reconstructed slices is accessible through a web interface for preview and to fine-tune the reconstruction parameters. The same interface allows initiation and control of the reconstruction process on the computer cluster. By integrating all programs and tools required for tomographic reconstruction into the pipeline, the necessary user interaction is reduced to a minimum. The modularity of the pipeline allows functionality to be added for new scan protocols, such as an extended field of view, or for new physical signals such as phase-contrast or dark-field imaging.
Tomographic imaging of OH laser-induced fluorescence in laminar and turbulent jet flames
NASA Astrophysics Data System (ADS)
Li, Tao; Pareja, Jhon; Fuest, Frederik; Schütte, Manuel; Zhou, Yihui; Dreizler, Andreas; Böhm, Benjamin
2018-01-01
In this paper, a new approach for 3D flame structure diagnostics using tomographic laser-induced fluorescence (Tomo-LIF) of the OH radical was evaluated. The approach combined volumetric illumination with a multi-camera detection system of eight views. Single-shot measurements were performed in a methane/air premixed laminar flame and in a non-premixed turbulent methane jet flame. 3D OH fluorescence distributions in the flames were reconstructed using the simultaneous multiplicative algebraic reconstruction technique. The tomographic measurements were compared and validated against results of OH-PLIF in the laminar flame. The effects of the experimental setup of the detection system and of the size of the volumetric illumination on the quality of the tomographic reconstructions were evaluated. Results revealed that Tomo-LIF is suitable for volumetric reconstruction of flame structures with acceptable spatial resolution and uncertainty. It was found that the number of views and their angular orientation have a strong influence on the quality and accuracy of the tomographic reconstruction, while the illumination volume thickness influences mainly the spatial resolution.
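A simultaneous multiplicative ART update of the kind named above can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the toy system, relaxation scheme, and names are assumptions. Multiplicative updates keep the reconstruction strictly positive, which suits an intensity field such as an OH fluorescence distribution:

```python
import numpy as np

def smart(A, b, n_iter=500, relax=1.0):
    """Simultaneous multiplicative ART: all rays contribute a
    log-domain correction to each voxel in every iteration."""
    col_sum = A.sum(axis=0).astype(float)
    col_sum[col_sum == 0] = 1.0
    x = np.ones(A.shape[1])                  # strictly positive start
    for _ in range(n_iter):
        log_ratio = np.log(b / (A @ x))      # per-ray multiplicative residual
        x *= np.exp(relax * (A.T @ log_ratio) / col_sum)
    return x

# Toy consistent system with a positive solution (values are assumptions).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_rec = smart(A, A @ np.array([2.0, 3.0]))
```

In a real Tomo-LIF setting, A encodes the line-of-sight weights of all eight camera views onto the reconstruction voxels.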
Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei
2015-04-01
Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography and the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost savings of $3179 and a gain in quality-adjusted life-years of 0.25. This yielded an incremental cost-utility ratio of -$12,716, implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only showed that computed tomographic angiography would still remain the cost-effective option even if it offered no additional operating time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
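The incremental cost-utility ratio reported above follows directly from the two figures given ($3179 saved, 0.25 QALYs gained); a one-line check:

```python
# Incremental cost-utility ratio (ICUR) from the figures reported above.
delta_cost = -3179.0        # negative: the CTA arm is cheaper
delta_qaly = 0.25           # additional quality-adjusted life-years with CTA
icur = delta_cost / delta_qaly
print(icur)                 # -12716.0: a dominant (cheaper and better) choice
```

A negative ICUR with positive QALY gain is the "dominant" quadrant of the cost-effectiveness plane, which is why no willingness-to-pay threshold is needed to favor CTA.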
Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc
2007-03-01
Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operational count and the high demand put on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. IBM, Toshiba, and Sony have introduced the Cell Broadband Engine (CBE) processor, originally aimed at the gaming market and often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of 512x512x512 voxels. The backprojection took 3.2 minutes on the PC (single CPU) and only 13.6 seconds on the Cell.
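The timings reported above imply roughly a 14x speedup and on the order of five billion voxel updates per second on the Cell; a quick back-of-envelope check:

```python
# Throughput implied by the reported timings (arithmetic only).
pc_seconds = 3.2 * 60            # 3.2 minutes on the single-CPU PC
cell_seconds = 13.6              # same backprojection on the Cell
speedup = pc_seconds / cell_seconds
voxel_updates = 512**3 * 512     # 512^3 voxels, each hit by 512 projections
updates_per_s = voxel_updates / cell_seconds
print(round(speedup, 1))         # 14.1
```

Sustaining ~5e9 voxel updates per second is what makes the memory subsystem, not the arithmetic, the limiting factor noted above.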
NASA Astrophysics Data System (ADS)
Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.
2018-05-01
In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion-compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion-compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model, as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion-compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space.
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed a comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results were achieved for the smoothed weighting function and the linear approach. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion-compensated tomographic reconstructions was investigated. The best quantitative reconstruction results on phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
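Thin-plate spline interpolation of scattered control-point values, the best-performing method above, can be sketched directly from its definition (kernel U(r) = r^2 log r plus an affine part). The control points and motion values below are toy assumptions, not data from the study, and only one scalar motion component is interpolated:

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial kernel U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r * r * np.log(r)
    return np.where(r > 0, u, 0.0)

def tps_fit(pts, vals):
    """Solve for TPS weights and affine terms through 2D control points."""
    n = len(pts)
    K = tps_kernel(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])            # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, query):
    """Evaluate the fitted spline at query points (dense MVF samples)."""
    r = np.linalg.norm(query[:, None] - pts[None, :], axis=-1)
    return tps_kernel(r) @ w + a[0] + query @ a[1:]

# One scalar motion component at five control points (toy values).
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
disp = np.array([0.0, 1.0, 1.0, 2.0, 0.7])
w, a = tps_fit(ctrl, disp)
```

Evaluating `tps_eval` on a voxel grid densifies the sparse MVF; in practice each vector component is fitted separately, and 3D motion uses the 3D kernel U(r) = r.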
Li, Xu; Xia, Rongmin; He, Bin
2008-01-01
A new tomographic algorithm for reconstructing a curl-free vector field, whose divergence serves as the acoustic source, is proposed. It is shown that under certain conditions, the scalar acoustic measurements obtained from a surface enclosing the source area can be vectorized according to the known measurement geometry and then used to reconstruct the vector field. The proposed method is validated by numerical experiments. This method can be easily applied to magnetoacoustic tomography with magnetic induction (MAT-MI). A simulation study of applying this method to MAT-MI shows that compared to existing methods, the proposed method can give an accurate estimation of the induced current distribution and a better reconstruction of electrical conductivity within an object.
2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.
Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J
2010-03-29
A method of tomographic phase retrieval is developed for multi-material objects, each of whose components has a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index for each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 Synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise, compared to conventional absorption-based tomography.
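The single-material special case of TIE-based, single-distance phase retrieval (the widely used Paganin-type low-pass filter) conveys the flavor of the approach; the multi-material algorithm above generalizes it. This sketch and its slab test values are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

def paganin_thickness(intensity, pixel, dist, delta, mu):
    """Single-distance, single-material TIE phase retrieval (Paganin-type).

    intensity : flat-normalized image I/I0 at propagation distance `dist`,
    delta     : refractive index decrement, mu : linear attenuation coeff.
    Returns the projected thickness map.
    """
    fy = np.fft.fftfreq(intensity.shape[0], d=pixel)
    fx = np.fft.fftfreq(intensity.shape[1], d=pixel)
    k2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    lowpass = 1.0 + dist * delta * k2 / mu        # TIE-derived filter
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) / lowpass))
    return -np.log(np.clip(filtered, 1e-12, None)) / mu

# Uniform slab sanity check: thickness recovered from pure attenuation.
mu, t0 = 100.0, 0.001                  # 1 mm slab, mu in 1/m (assumed units)
img = np.full((32, 32), np.exp(-mu * t0))
t_map = paganin_thickness(img, pixel=1e-6, dist=0.1, delta=1e-7, mu=mu)
```

For a uniform image the filter acts only at the (unmodified) DC term, so the slab thickness is recovered exactly; the multi-material method replaces the single delta/mu pair with per-material values and the known total projected thickness.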
Portable Ultrasound Imaging of the Brain for Use in Forward Battlefield Areas
2011-03-01
ultrasound measurement of skull thickness and sound speed, phase correction of beam distortion, the tomographic reconstruction algorithm, and the final... produce a coherent imaging source. We propose a corrective technique that will use ultrasound-based phased-array beam correction [3], optimized... not expected to be a significant factor in the ability to phase-correct the imaging beam. In addition to planning (2.2.1), the data is also used
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielecki, J.; Scholz, M.; Drozdowicz, K.
A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. This work aims to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high-resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for this ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction at JET.
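The core of Phillips-Tikhonov regularization, as named above, is a least-squares problem with a smoothness penalty. A minimal 1D sketch (the geometry matrix, penalty operator, and toy numbers are illustrative assumptions, not the JET implementation):

```python
import numpy as np

def phillips_tikhonov(G, f, alpha):
    """Regularized inversion: minimize ||G g - f||^2 + alpha ||L g||^2,
    with L a second-difference (smoothness) operator as in Phillips'
    original formulation; solved via the normal equations."""
    n = G.shape[1]
    L = np.eye(n, k=-1) - 2 * np.eye(n) + np.eye(n, k=1)
    return np.linalg.solve(G.T @ G + alpha * L.T @ L, G.T @ f)

# Sanity check: with a well-posed G and tiny alpha, the solution is
# essentially the unregularized one (toy numbers, not JET data).
G = np.eye(5)
f = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
g = phillips_tikhonov(G, f, alpha=1e-8)
```

In the actual problem, G encodes the 19 lines of sight, and alpha is tuned so the solution's shape matches the normalized electron density profile as described above; larger alpha trades fidelity for smoothness.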
Optical diffraction tomography: accuracy of an off-axis reconstruction
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz
2014-05-01
Optical diffraction tomography is an increasingly popular method that allows for reconstruction of the three-dimensional refractive index distribution of semi-transparent samples using multiple measurements of an optical field transmitted through the sample for various illumination directions. The process of assembly of the angular measurements is usually performed with one of two methods: the filtered backprojection (FBPJ) or the filtered backpropagation (FBPP) tomographic reconstruction algorithm. The former approach, although conceptually very simple, provides an accurate reconstruction for the object regions located close to the plane of focus. However, since FBPJ ignores diffraction, its use for spatially extended structures is questionable. According to the theory of scattering, more precise restoration of a 3D structure shall be achieved with the FBPP algorithm, which unlike the former approach incorporates diffraction. It is believed that with this method one can obtain a high-accuracy reconstruction in a large measurement volume exceeding the depth of focus of an imaging system. However, some studies have suggested that a considerable improvement of the FBPP results can be achieved with prior propagation of the transmitted fields back to the centre of the object. In our view this finding casts doubt on the quality of the FBPP reconstruction in the regions far from the rotation axis. The objective of this paper is to investigate the limitations of the FBPP algorithm in terms of off-axis reconstruction and to compare its performance with the FBPJ approach. Moreover, in this work we propose some modifications to the FBPP algorithm that allow for more precise restoration of a sample structure in off-axis locations. The research is based on extensive numerical simulations supported with the wave-propagation method.
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
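The comparison above can be mimicked with synthetic numbers: compute a per-voxel density signal from simulated scattering angles with two of the statistics named (average and median of the squared angle), then score separation with an ROC area. The angular widths and sample counts below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def angle_samples(sigma, n_rays=200):
    # Simulated muon scattering angles (radians) for one voxel; a dense
    # material like tungsten corresponds to a larger width sigma.
    return rng.normal(0.0, sigma, n_rays)

def density_signal(angles, stat):
    if stat == "mean_sq":      # average of the squared angles
        return np.mean(angles ** 2)
    if stat == "median_sq":    # median of the squared angles
        return np.median(angles ** 2)
    raise ValueError(stat)

def roc_auc(signal, background):
    # P(random threat score > random background score) = area under ROC.
    s = np.asarray(signal)[:, None]
    b = np.asarray(background)[None, :]
    return np.mean(s > b)

aucs = {}
for stat in ("mean_sq", "median_sq"):
    threat = [density_signal(angle_samples(0.020), stat) for _ in range(300)]
    clear = [density_signal(angle_samples(0.012), stat) for _ in range(300)]
    aucs[stat] = roc_auc(threat, clear)
```

With well-separated widths both statistics give AUC near 1; the paper's point is that with realistic data the multi-energy-group fit separates the curves where these simple moments do not.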
A review of GPU-based medical image reconstruction.
Després, Philippe; Jia, Xun
2017-10-01
Tomographic image reconstruction is a computationally demanding task, even more so when advanced models are used to describe a more complete and accurate picture of the image formation process. Such advanced modeling and reconstruction algorithms can lead to better images, often with less dose, but at the price of long calculation times that are hardly compatible with clinical workflows. Fortunately, reconstruction tasks can often be executed advantageously on Graphics Processing Units (GPUs), which are exploited as massively parallel computational engines. This review paper focuses on recent developments made in GPU-based medical image reconstruction, from a CT, PET, SPECT, MRI and US perspective. Strategies and approaches to get the most out of GPUs in image reconstruction are presented as well as innovative applications arising from an increased computing capacity. The future of GPU-based image reconstruction is also envisioned, based on current trends in high-performance computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost
2016-04-28
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which together provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method. PMID:27140167
Tomographic imaging of fluorescence resonance energy transfer in highly light scattering media
NASA Astrophysics Data System (ADS)
Soloviev, Vadim Y.; McGinty, James; Tahir, Khadija B.; Laine, Romain; Stuckey, Daniel W.; Mohan, P. Surya; Hajnal, Joseph V.; Sardini, Alessandro; French, Paul M. W.; Arridge, Simon R.
2010-02-01
Three-dimensional localization of protein conformation changes in turbid media using Förster Resonance Energy Transfer (FRET) was investigated by tomographic fluorescence lifetime imaging (FLIM). FRET occurs when a donor fluorophore, initially in its electronic excited state, transfers energy to an acceptor fluorophore in close proximity through non-radiative dipole-dipole coupling. An acceptor effectively behaves as a quencher of the donor's fluorescence. The quenching process is accompanied by a reduction in the quantum yield and lifetime of the donor fluorophore. Therefore, FRET can be localized by imaging changes in the quantum yield and the fluorescence lifetime of the donor fluorophore. Extending FRET to diffuse optical tomography has potentially important applications such as in vivo studies in small animals. We show that FRET can be localized by reconstructing the quantum yield and lifetime distribution from time-resolved non-invasive boundary measurements of fluorescence and transmitted excitation radiation. Image reconstruction was obtained by an inverse scattering algorithm. Thus we report, to the best of our knowledge, the first tomographic FLIM-FRET imaging in turbid media. The approach is demonstrated by imaging a highly scattering cylindrical phantom concealing two thin wells containing cytosol preparations of HEK293 cells expressing TN-L15, a cytosolic genetically-encoded calcium FRET sensor. A 10 mM calcium chloride solution was added to one of the wells to induce a protein conformation change upon binding to TN-L15, resulting in FRET and a corresponding decrease in the donor fluorescence lifetime. The resulting fluorescence lifetime distribution, the quantum efficiency, absorption and scattering coefficients were reconstructed.
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the use of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models, and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of the measurement data. Since the H(infinity) filter seeks minimum-maximum-error estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of the measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.
NASA Astrophysics Data System (ADS)
Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.
2017-09-01
Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.
Model-based tomographic reconstruction of objects containing known components.
Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H
2012-10-01
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and the proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
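The iterative soft-thresholding the abstract refers to can be sketched as follows: a gradient step on the data-fit term, followed by shrinkage of the wavelet coefficients. This is a generic ISTA sketch with a one-level Haar transform standing in for the wavelet basis, not the paper's primal-dual fixed point implementation; matrix sizes and the regularization weight are illustrative.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar(x):
    # One level of an orthonormal Haar transform (even length).
    return np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]]) / np.sqrt(2)

def ihaar(c):
    h = len(c) // 2
    a, d = c[:h], c[h:]
    x = np.empty(2 * h)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def ista(A, y, lam, n_iter=300):
    # Minimise 0.5 ||A x - y||^2 + lam ||W x||_1 with W an orthonormal
    # wavelet transform (here 1-level Haar).
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = ihaar(soft(haar(z), lam / L))    # shrink the wavelet coefficients
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))                  # toy forward operator
x_true = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
x_hat = ista(A, A @ x_true, lam=1e-3)
```

Choosing `lam` is exactly the parameter-selection problem the paper addresses: too large biases the image, too small leaves it noisy.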
NASA Astrophysics Data System (ADS)
Deyhle, Hans; Weitkamp, Timm; Lang, Sabrina; Schulz, Georg; Rack, Alexander; Zanette, Irene; Müller, Bert
2012-10-01
The complex hierarchical structure of human tooth hard tissues, enamel and dentin, guarantees function for decades. On the micrometer level the dentin morphology is dominated by the tubules, micrometer-narrow channels extending from the dentin-enamel junction to the pulp chamber. Their structure has been extensively studied, mainly with two-dimensional approaches. Dentin tubules are formed during tooth growth and their orientation is linked to the morphology of the nanometer-sized components, which is of interest for example for the development of bio-inspired dental fillings. Therefore, a method has to be identified that can access the three-dimensional organization of the tubules, e.g. density and orientation. Tomographic setups with pixel sizes in the sub-micrometer range allow for the three-dimensional visualization of tooth dentin tubules both in phase and absorption contrast modes. We compare high-resolution tomographic scans reconstructed with propagation-based phase retrieval algorithms against reconstructions without phase retrieval, in terms of spatial and density resolution and the rendering of the dentin microstructure, to determine the approach best suited for dentin tubule imaging. Reasonable results were obtained with a single-distance phase retrieval algorithm and a propagation distance of about 75% of the critical distance d²/λ, where d is the size of the smallest objects identifiable in the specimen and λ is the X-ray wavelength.
Hyperspectral optical tomography of intrinsic signals in the rat cortex
Konecky, Soren D.; Wilson, Robert H.; Hagen, Nathan; Mazhar, Amaan; Tkaczyk, Tomasz S.; Frostig, Ron D.; Tromberg, Bruce J.
2015-01-01
We introduce a tomographic approach for three-dimensional imaging of evoked hemodynamic activity, using broadband illumination and diffuse optical tomography (DOT) image reconstruction. Changes in diffuse reflectance in the rat somatosensory cortex due to stimulation of a single whisker were imaged at a frame rate of 5 Hz using a hyperspectral image mapping spectrometer. In each frame, images in 38 wavelength bands from 484 to 652 nm were acquired simultaneously. For data analysis, we developed a hyperspectral DOT algorithm that used the Rytov approximation to quantify changes in tissue concentration of oxyhemoglobin (ctHbO2) and deoxyhemoglobin (ctHb) in three dimensions. Using this algorithm, the maximum changes in ctHbO2 and ctHb were found to occur at 0.29±0.02 and 0.66±0.04 mm beneath the surface of the cortex, respectively. Rytov tomographic reconstructions revealed maximal spatially localized increases and decreases in ctHbO2 and ctHb of 321±53 and 555±96 nM, respectively, with these maximum changes occurring at 4±0.2 s poststimulus. The localized optical signals from the Rytov approximation were greater than those from modified Beer–Lambert, likely due in part to the inability of planar reflectance to account for partial volume effects. PMID:26835483
Comparison of analytic and iterative digital tomosynthesis reconstructions for thin slab objects
NASA Astrophysics Data System (ADS)
Yun, J.; Kim, D. W.; Ha, S.; Kim, H. K.
2017-11-01
For digital x-ray tomosynthesis of thin slab objects, we compare the tomographic imaging performance obtained from the filtered backprojection (FBP) and simultaneous algebraic reconstruction technique (SART) algorithms. The imaging performance includes the in-plane modulation transfer function (MTF), the signal difference-to-noise ratio (SDNR), and the out-of-plane blur artifact or artifact-spread function (ASF). The MTF is measured using a thin tungsten-wire phantom, and the SDNR and the ASF are measured using a thin aluminum-disc phantom embedded in a plastic cylinder. The FBP shows a better MTF performance than the SART. On the contrary, the SART outperforms the FBP with regard to the SDNR and ASF performances. Detailed experimental results and their analysis are described in this paper. For more proper use of the digital tomosynthesis technique, this study suggests using a reconstruction algorithm suited to application-specific purposes.
A new probe using hybrid virus-dye nanoparticles for near-infrared fluorescence tomography
NASA Astrophysics Data System (ADS)
Wu, Changfeng; Barnhill, Hannah; Liang, Xiaoping; Wang, Qian; Jiang, Huabei
2005-11-01
A fluorescent probe based on the bionanoparticle cowpea mosaic virus has been developed for near-infrared fluorescence tomography. A unique advantage of this probe is that over 30 dye molecules can be loaded onto each viral nanoparticle with an average diameter of 30 nm, making a high local dye concentration (∼1.8 mM) possible without significant fluorescence quenching. This high local dye loading considerably increases the signal-to-noise ratio, and thus the detection sensitivity. We demonstrate successful tomographic fluorescence imaging of a target containing the virus-dye nanoparticles embedded in a tissue-like phantom. Tomographic fluorescence data were obtained through a multi-channel frequency-domain system and the spatial maps of fluorescence quantum yield were recovered with a finite-element-based reconstruction algorithm.
A new apparatus for electron tomography in the scanning electron microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morandi, V., E-mail: morandi@bo.imm.cnr.it; Maccagnani, P.; Masini, L.
2015-06-23
The three-dimensional reconstruction of a microscopic specimen has been obtained by applying the tomographic algorithm to a set of images acquired in a Scanning Electron Microscope. This result was achieved starting from a series of projections obtained by stepwise rotating the sample under the beam raster. The Scanning Electron Microscope was operated in the scanning-transmission imaging mode, where the intensity of the transmitted electron beam is a monotonic function of the local mass-density and thickness of the specimen. The detection strategy has been implemented and tailored in order to maintain the projection requirement over the large tilt range, as required by the tomographic workflow. A Si-based electron detector and an eucentric-rotation specimen holder have been specifically developed for the purpose.
Singular value decomposition for the truncated Hilbert transform
NASA Astrophysics Data System (ADS)
Katsevich, A.
2010-11-01
Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
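The ill-posedness that motivates the SVD analysis above can be made concrete with a toy discretisation: sample a Hilbert-transform kernel for an object supported on one interval but measured on a partially overlapping one, and inspect the singular values. The grids, interval choices, and matrix size below are illustrative assumptions, not the operators studied in the paper.

```python
import numpy as np

# Discretise a truncated Hilbert transform: the object lives on [0, 1],
# but the transform is only available on the overlapping interval
# [0.5, 1.5] (an interior-problem-like setup). The two grids are
# staggered so that no sample point hits the kernel's pole.
n = 64
h = 1.0 / n
y = (np.arange(n) + 0.5) * h            # object sample points on [0, 1]
x = 0.5 + (np.arange(n) + 0.25) * h     # measurement points on [0.5, 1.5]
H = h / (np.pi * (x[:, None] - y[None, :]))

# Singular values (returned in descending order) quantify how severely
# information about the object is attenuated by the truncation.
s = np.linalg.svd(H, compute_uv=False)
```

The wide spread of the singular values is the discrete fingerprint of the instability that the a priori information discussed in the abstract is needed to tame.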
Dense velocity reconstruction from tomographic PTV with material derivatives
NASA Astrophysics Data System (ADS)
Schneiders, Jan F. G.; Scarano, Fulvio
2016-09-01
A method is proposed to reconstruct the instantaneous velocity field from time-resolved volumetric particle tracking velocimetry (PTV, e.g., 3D-PTV, tomographic PTV and Shake-the-Box), employing both the instantaneous velocity and the velocity material derivative of the sparse tracer particles. The constraint to the measured temporal derivative of the PTV particle tracks improves the consistency of the reconstructed velocity field. The method is christened as pouring time into space, as it leverages temporal information to increase the spatial resolution of volumetric PTV measurements. This approach becomes relevant in cases where the spatial resolution is limited by the seeding concentration. The method solves an optimization problem to find the vorticity and velocity fields that minimize a cost function, which includes next to instantaneous velocity, also the velocity material derivative. The velocity and its material derivative are related through the vorticity transport equation, and the cost function is minimized using the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. The procedure is assessed numerically with a simulated PTV experiment in a turbulent boundary layer from a direct numerical simulation (DNS). The experimental validation considers a tomographic particle image velocimetry (PIV) experiment in a similar turbulent boundary layer and the additional case of a jet flow. The proposed technique (`vortex-in-cell plus', VIC+) is compared to tomographic PIV analysis (3D iterative cross-correlation), PTV interpolation methods (linear and adaptive Gaussian windowing) and to vortex-in-cell (VIC) interpolation without the material derivative. A visible increase in resolved details in the turbulent structures is obtained with the VIC+ approach, both in numerical simulations and experiments. This results in a more accurate determination of the turbulent stresses distribution in turbulent boundary layer investigations. 
Data from a jet experiment, where the vortex topology is retrieved with a small number of tracers, indicate the potential of VIC+ in low-concentration experiments, such as large-scale volumetric PTV measurements.
Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection
NASA Astrophysics Data System (ADS)
Brokish, Jeffrey; Sack, Paul; Bresler, Yoram
2010-04-01
In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone beam geometry combines speedup through algorithmic improvements provided by the hierarchical backprojection algorithm with speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated to a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology, which achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with small footprint, high reliability, and affordable cost.
ɛ-subgradient algorithms for bilevel convex optimization
NASA Astrophysics Data System (ADS)
Helou, Elias S.; Simões, Lucas E. A.
2017-05-01
This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
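The bilevel setting above can be illustrated with a tiny diminishing-penalty variant in the spirit of explicit ε-subgradient methods: each step follows a subgradient of the inner objective g perturbed by a vanishing multiple of a subgradient of the outer objective f, steering the iterates toward the minimizer of f within the solution set of g. The specific objectives, step size, and ε schedule below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Bilevel toy problem: minimise f(x) = 0.5 ||x||^2 over the minimisers of
# g(x) = 0.5 (x1 + x2 - 1)^2. The inner solution set is the line
# x1 + x2 = 1; the minimum-norm point on it is (0.5, 0.5).
def bilevel_subgradient(x0, n_iter=5000):
    x = np.asarray(x0, float)
    for k in range(1, n_iter + 1):
        grad_g = (x[0] + x[1] - 1.0) * np.ones(2)  # (sub)gradient of g
        grad_f = x                                  # (sub)gradient of f
        eps_k = 1.0 / np.sqrt(k)                    # vanishing outer weight
        x = x - 0.5 * (grad_g + eps_k * grad_f)
    return x

x = bilevel_subgradient([3.0, -1.0])
```

Because the perturbation weight decays slowly enough that its sum diverges, the outer objective keeps selecting among inner minimizers, and the iterates approach (0.5, 0.5) rather than an arbitrary point of the line.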
Tomographic diagnostics of nonthermal plasmas
NASA Astrophysics Data System (ADS)
Denisova, Natalia
2009-10-01
In the previous work [1], we discussed the "technology" of the tomographic method and the relations between tomographic diagnostics in thermal (equilibrium) and nonthermal (nonequilibrium) plasma sources. The conclusion was that tomographic reconstruction in thermal plasma sources is at present a standard procedure, which can provide much useful information on the plasma structure and its evolution in time, while tomographic reconstruction of nonthermal plasma has great potential to contribute to understanding the fundamental problem of substance behavior in strongly nonequilibrium conditions. Using medical terminology, one could say that tomographic diagnostics of equilibrium plasma sources studies their "anatomic" structure, while reconstruction of nonequilibrium plasma is similar to a "physiological" examination: it is directed at studying the physical mechanisms and processes. The present work is focused on nonthermal plasma research. The tomographic diagnostics is directed at studying spatial structures formed in gas discharge plasmas under the influence of electrical and gravitational fields. The ways of plasma "self-organization" in changing and extreme conditions are analyzed. The analysis is made using examples from our practical tomographic diagnostics of nonthermal plasma sources, such as low-pressure capacitive and inductive discharges. [1] Denisova N. Plasma diagnostics using computed tomography method. IEEE Trans. Plasma Sci. 37(4), 502 (2009).
Accelerated gradient based diffuse optical tomographic image reconstruction.
Biswas, Samir Kumar; Rajan, K; Vasu, R M
2011-01-01
We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time manyfold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based and adjoint Broyden-based MOBIIR methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. These methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution; when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
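The core idea above, replacing repeated Jacobian evaluations with rank-one secant updates, is Broyden's classical scheme, and it can be sketched on a toy nonlinear system standing in for the diffusion-equation forward model. This is a generic Broyden solver, not the authors' BMOBIIR code; the test system and tolerances are illustrative.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-6):
    # One finite-difference Jacobian: the expensive step Broyden avoids repeating.
    x = np.asarray(x, float)
    f0 = F(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    # Solve F(x) = 0: the Jacobian is evaluated once, then refreshed only
    # through rank-one secant updates (Broyden's "good" method).
    x = np.asarray(x0, float)
    J = fd_jacobian(F, x)
    fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(J, -fx)          # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        J += np.outer(f_new - fx - J @ s, s) / (s @ s)   # secant update
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

# Toy nonlinear "forward model": intersect a circle with a line.
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])

root = broyden_solve(F, [1.0, 1.0])   # converges to (sqrt(2), sqrt(2))
```

As the abstract notes for BMOBIIR, the scheme is locally convergent: it works well from a good initial guess, while a poor one favors full Newton steps.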
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
NASA Astrophysics Data System (ADS)
Hu, Jicun; Tam, Kwok; Johnson, Roger H.
2004-01-01
We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.
Vectorization with SIMD extensions speeds up reconstruction in electron tomography.
Agulleiro, J I; Garzón, E M; García, I; Fernández, J J
2010-06-01
Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, so high-performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors, in combination with other single-processor optimization techniques. This approach succeeds in producing full-resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach runs on standard computers without the need for specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processors' SIMD extensions in the field of 3D electron microscopy.
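One of the two algorithms named above, SIRT, can be written entirely as whole-array operations, which is the same vectorization principle the paper applies with hand-tuned SIMD kernels: the per-element loops move into compiled, vector-friendly code. The toy projection geometry below (row and column sums of a 4x4 image) is an illustrative stand-in for a real tilt-series system matrix.

```python
import numpy as np

def sirt(A, b, n_iter=100):
    # SIRT update  x <- x + C A^T R (b - A x), with R and C the inverse
    # row and column sums of A. Every operation is a whole-array NumPy
    # expression, i.e. a compiled loop that modern CPUs execute with
    # SIMD instructions.
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# Tiny 4x4 image, with horizontal and vertical ray sums as projections.
n = 4
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0   # row-sum rays
    A[n + i, i::n] = 1.0            # column-sum rays
x_true = np.arange(n * n, dtype=float) / (n * n)
b = A @ x_true
x_rec = sirt(A, b)                  # projections of x_rec match b
```

With only two projection directions the system is underdetermined, so `x_rec` need not equal `x_true`; what SIRT guarantees, and what the sketch shows, is that the reprojection residual is driven to zero.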
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
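The bias-variance behaviour described here is easy to reproduce on a toy linear inverse problem. In the sketch below the diffusion forward model is replaced by a small diagonal operator with a decaying spectrum (an assumption for illustration), and Tikhonov regularization plays the role of the regularization parameter discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem with a decaying spectrum: y = A x + noise.
s = np.array([1.0, 0.5, 0.1, 0.01])
A = np.diag(s)
x_true = np.array([1.0, -1.0, 2.0, -2.0])
sigma = 0.05                                   # measurement noise level

def reconstruct(y, lam):
    # Tikhonov-regularized inverse: minimize ||A x - y||^2 + lam ||x||^2.
    return np.linalg.solve(A.T @ A + lam * np.eye(len(s)), A.T @ y)

# Repeated reconstructions with fresh noise, as in the paper's repeated-trial study.
stats = {}
for lam in (1e-1, 1e-6):
    recs = np.array([reconstruct(A @ x_true + sigma * rng.standard_normal(len(s)), lam)
                     for _ in range(200)])
    bias2 = np.mean((recs.mean(axis=0) - x_true) ** 2)   # squared image bias
    var = np.mean(recs.var(axis=0))                      # image variance
    stats[lam] = (bias2, var)
```

With the large regularization parameter the error is bias-dominated; as the parameter decreases the bias falls and the variance grows, reproducing the trade-off the authors observe.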
NASA Astrophysics Data System (ADS)
Zhao, Huijuan; Gao, Feng; Tanikawa, Yukari; Homma, Kazuhiro; Onodera, Yoichi; Yamada, Yukio
Near infra-red (NIR) diffuse optical tomography (DOT) has gained much attention, and it is expected to be applied clinically to imaging the breast, the neonatal head, and the hemodynamics of the brain because of its noninvasiveness and deep penetration in biological tissue. Prior to achieving imaging of the infant brain using DOT, the developed methodologies need to be experimentally justified by imaging real organs with simpler structures. Here we report our results for an in vitro chicken leg and an in vivo exercising human forearm from data measured by a multi-channel time-resolved NIR system. Tomographic images were reconstructed by a two-dimensional image reconstruction algorithm based on a modified generalized pulse spectrum technique for simultaneous reconstruction of µa and µs′. The absolute µa- and µs′-images revealed the inner structures of the chicken leg and the forearm, where the bones were clearly distinguished from the muscle. The Δµa-images showed the blood volume changes during the forearm exercise, proving that the system and the image reconstruction algorithm could potentially be used for imaging not only the anatomic structure but also the hemodynamics in neonatal heads.
Magnetoacoustic tomographic imaging of electrical impedance with magnetic induction
Xia, Rongmin; Li, Xu; He, Bin
2008-01-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently introduced method for imaging tissue electrical impedance properties by integrating magnetic induction and ultrasound measurements. In the present study, we have developed a focused cylindrical scanning mode MAT-MI system and the corresponding reconstruction algorithms. Using this system, we demonstrated 3-dimensional MAT-MI imaging in a physical phantom, with cylindrical scanning combined with ultrasound focusing, and the ability of MAT-MI in imaging electrical conductivity properties of biological tissue. PMID:19169372
Magnetoacoustic tomographic imaging of electrical impedance with magnetic induction
NASA Astrophysics Data System (ADS)
Xia, Rongmin; Li, Xu; He, Bin
2007-08-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently introduced method for imaging tissue electrical impedance properties by integrating magnetic induction and ultrasound measurements. In the present study, the authors have developed a focused cylindrical scanning mode MAT-MI system and the corresponding reconstruction algorithms. Using this system, they demonstrated a three-dimensional MAT-MI imaging approach in a physical phantom, with cylindrical scanning combined with ultrasound focusing, and the ability of MAT-MI in imaging electrical conductivity properties of biological tissue.
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.
2018-03-01
X-ray computed micro-tomography (μ-CT) is considered to be the most effective way to obtain the inner structure of a rock sample without destruction. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose an innovative methodology to improve the resolution of μ-CT images using a neighbour embedding algorithm, where low frequency information is provided by the μ-CT image itself while high frequency information is supplemented by a high resolution scanning electron microscopy (SEM) image. In order to obtain a prior for reconstruction, a large number of image patch pairs containing high- and low-resolution image patches are extracted from a Gaussian image pyramid generated from the SEM image. These image patch pairs contain abundant information about the tomographic evolution of local porous structures across resolution spaces. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to supervise the reconstruction of the high resolution μ-CT image effectively. The experimental results show that the proposed method achieves state-of-the-art performance.
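A minimal sketch of the patch-based neighbour embedding idea: low/high-resolution patch pairs extracted from a training image form a dictionary, and each low-resolution patch is reconstructed as a weighted combination of its nearest dictionary neighbours, with the same weights transferred to the paired high-resolution patches. Inverse-distance weights stand in for the locally linear embedding weights of such methods, and a random texture stands in for the SEM training image (both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(img):
    # 2x2 block averaging stands in for the resolution loss of the scanner.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Training pair: a high-resolution "SEM-like" texture and its low-res version.
hi = rng.random((32, 32))
lo = downsample(hi)

# Dictionary of paired patches: 2x2 low-res <-> 4x4 high-res.
lo_patches, hi_patches = [], []
for i in range(lo.shape[0] - 1):
    for j in range(lo.shape[1] - 1):
        lo_patches.append(lo[i:i + 2, j:j + 2].ravel())
        hi_patches.append(hi[2 * i:2 * i + 4, 2 * j:2 * j + 4].ravel())
lo_patches = np.array(lo_patches)
hi_patches = np.array(hi_patches)

def embed(patch, k=3):
    # Find the k nearest low-res dictionary patches and transfer their
    # combination weights to the paired high-res patches.
    d = np.linalg.norm(lo_patches - patch, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)   # inverse-distance weights (an assumption)
    w /= w.sum()
    return w @ hi_patches[nn]

# A patch seen during training is mapped back to its high-res counterpart.
out = embed(lo_patches[0])
```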
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Petié, Philippe; da Silva, Anabela; Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Laidevant, Aurélie; Rizo, Philippe
2006-03-01
Optical imaging of fluorescent probes is an essential tool for the investigation of molecular events in small animals for drug development. In order to obtain localization and quantification information for fluorescent labels, CEA-LETI has developed efficient approaches in classical reflectance imaging as well as in diffuse optical tomographic imaging with continuous and temporal signals. This paper presents an overview of the different approaches investigated and their performances. High quality fluorescence reflectance imaging is obtained thanks to the development of an original "multiple wavelengths" system, in which the uniformity of the excitation light over the surface area is better than 15%. Combined with the use of adapted fluorescent probes, this system enables accurate detection of pathological tissues, such as nodules, beneath the animal's observed area. Performances for the detection of ovarian nodules on a nude mouse are shown. In order to investigate deeper inside animals and obtain 3D localization, diffuse optical tomography systems are being developed for both slab and cylindrical geometries. For these two geometries, our reconstruction algorithms are based on analytical expressions of light diffusion. Thanks to an accurate introduction of the light/matter interaction process into the algorithms, high quality reconstructions of tumors in mice have been obtained; reconstructions of lung tumors in mice are presented. By the use of temporal diffuse optical imaging, localization and quantification performances can be improved at the price of a more sophisticated acquisition system and more elaborate information processing methods. Such a system, based on a pulsed laser diode and a time-correlated single photon counting system, has been set up. The performance of this system for localization and quantification of fluorescent probes is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and, at the same time, sufficient data for BPF image reconstruction can be secured from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy, in terms of root-mean-square error with respect to the fan-beam CT image, improved by more than 30%.
Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.
Magnetic particle imaging: from proof of principle to preclinical applications
NASA Astrophysics Data System (ADS)
Knopp, T.; Gdaniec, N.; Möddel, M.
2017-07-01
Tomographic imaging has become a mandatory tool for the diagnosis of a majority of diseases in clinical routine. Since each method has its pros and cons, a variety of them are regularly used in clinics to satisfy all application needs. Magnetic particle imaging (MPI) is a relatively new tomographic imaging technique that images magnetic nanoparticles with high spatiotemporal resolution in a quantitative way, and is in turn highly suited for vascular and targeted imaging. MPI was introduced in 2005 and is now entering the preclinical research phase, in which medical researchers gain access to this new technology and exploit its potential under physiological conditions. In this paper, we review the development of MPI since its introduction in 2005. Besides an in-depth description of the basic principles, we provide detailed discussions on imaging sequences, reconstruction algorithms, scanner instrumentation and potential medical applications.
Image Reconstruction is a New Frontier of Machine Learning.
Wang, Ge; Ye, Jong Chul; Mueller, Klaus; Fessler, Jeffrey A
2018-06-01
Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue of this journal published in May 2016 with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. The two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars of medical imaging. Together they cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then to extracted diagnostic features/readings.
NASA Astrophysics Data System (ADS)
Boxx, I.; Carter, C. D.; Meier, W.
2014-08-01
Tomographic particle image velocimetry (tomographic-PIV) is a recently developed measurement technique used to acquire volumetric velocity field data in liquid and gaseous flows. The technique relies on line-of-sight reconstruction of the rays between a 3D particle distribution and a multi-camera imaging system. In a turbulent flame, however, index-of-refraction variations resulting from local heat-release may inhibit reconstruction and thereby render the technique infeasible. The objective of this study was to test the efficacy of tomographic-PIV in a turbulent flame. An additional goal was to determine the feasibility of acquiring usable tomographic-PIV measurements in a turbulent flame at multi-kHz acquisition rates with current-generation laser and camera technology. To this end, a setup consisting of four complementary metal oxide semiconductor cameras and a dual-cavity Nd:YAG laser was implemented to test the technique in a lifted turbulent jet flame. While the cameras were capable of kHz-rate image acquisition, the laser operated at a pulse repetition rate of only 10 Hz. However, use of this laser allowed exploration of the required pulse energy and thus power for a kHz-rate system. The imaged region was 29 × 28 × 2.7 mm in size. The tomographic reconstruction of the 3D particle distributions was accomplished using the multiplicative algebraic reconstruction technique. The results indicate that volumetric velocimetry via tomographic-PIV is feasible with pulse energies of 25 mJ, which is within the capability of current-generation kHz-rate diode-pumped solid-state lasers.
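The multiplicative algebraic reconstruction technique (MART) mentioned above updates each voxel by the ratio of measured to predicted ray intensities, raised to a power given by the ray weight. A toy sketch on a 2x2 grid (the geometry is an assumption for illustration; note that with only four rays the solution is not unique, so MART converges to the maximum-entropy image consistent with the data):

```python
import numpy as np

def mart(A, b, n_iter=200, relax=1.0, eps=1e-12):
    # Multiplicative ART: each ray i rescales the voxels it intersects by
    # (b_i / predicted_i) ** (relax * A_ij); positivity is preserved.
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if b[i] > eps and pred > eps:
                x *= (b[i] / pred) ** (relax * A[i])
    return x

# Two "particles" on a 2x2 grid seen through four line integrals.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 0., 0., 1.])
x_rec = mart(A, A @ x_true)
```

The reconstruction reproduces the measured ray sums exactly while remaining strictly positive, the property that makes MART attractive for particle intensity fields.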
Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K
2015-02-15
Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedure. The targets can be further characterized with a selective quantitative inversion.
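The role of the singular value decomposition here can be illustrated on synthetic data: a weak signal that is coherent across the stack of holograms concentrates in the leading singular component, while incoherent background noise spreads across all components. The sizes, noise level, and signal model below are arbitrary assumptions, not the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# 64 "holograms" (flattened to 256 pixels), each containing the same weak
# target signal with a view-dependent amplitude, buried in incoherent noise.
n_views, n_pix = 64, 256
target = np.zeros(n_pix)
target[100:110] = 1.0                                  # hidden object support
amps = 0.1 * (rng.random(n_views) + 0.5)               # per-view amplitudes
fields = np.outer(amps, target) + 0.02 * rng.standard_normal((n_views, n_pix))

# The coherent target dominates the first singular component.
U, s, Vt = np.linalg.svd(fields, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])                # filtered stack
```

Inspecting `np.abs(Vt[0])` shows the energy concentrated on the hidden support, even though each individual hologram is strongly contaminated by the background.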
A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.
Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa
2015-12-01
Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET with two main contributions. First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that reduces the tracking of fiducial markers to an incomplete point set registration problem. Because a global optimization of the point set registration is performed, the result of our tracking is independent of the initial image position in the tilt series, allowing robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves a tracking accuracy almost identical to that of the current best, semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (it only requires a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstructions.
High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System
Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram
2014-01-01
We have developed a graphics processor unit (GPU-) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of DOT programs are developed in this study: (1) conventional C language program augmented by GPU CUDA and CULA routines (C GPU), (2) MATLAB program supported by MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on host CPU and the GPU system is presented for C and Matlab implementations. The forward computation uses finite element method (FEM) and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time, so achieved for one iteration of the DOT reconstruction for 14610 elements, is 0.52 seconds for a C based GPU program for 2-plane measurements. The corresponding MATLAB based GPU program took 0.86 seconds. The maximum number of reconstructed frames so achieved is 2 frames per second. PMID:24891848
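The Broyden step used to avoid rebuilding the Jacobian at every iteration can be sketched on a small nonlinear system; the FEM forward model is replaced here by a two-variable toy problem, an assumption for illustration only:

```python
import numpy as np

def broyden_solve(F, x0, J0, n_iter=50, tol=1e-10):
    # Quasi-Newton iteration: solve J dx = -F(x), then update J with a
    # rank-one secant correction instead of reassembling it.
    x, J = np.asarray(x0, float), np.asarray(J0, float)
    for _ in range(n_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J, -f)
        x_new = x + dx
        df = F(x_new) - f
        # Broyden update: J <- J + (df - J dx) dx^T / (dx^T dx)
        J = J + np.outer(df - J @ dx, dx) / (dx @ dx)
        x = x_new
    return x

# Toy system: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1).
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
x0 = np.array([1.5, 0.8])
J0 = np.array([[2 * x0[0], 2 * x0[1]],   # exact Jacobian at the start only
               [1.0, -1.0]])
root = broyden_solve(F, x0, J0)
```

Only the initial Jacobian is assembled; every later step costs one rank-one update, which is the saving that matters when each Jacobian assembly requires an expensive FEM forward solve.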
Real-time plasma control based on the ISTTOK tomography diagnostic
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.
2008-10-01
The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.
Tomographic imaging using Poissonian detector data
Aspelmeier, Timo; Ebel, Gernot; Hoeschen, Christoph
2013-10-15
An image reconstruction method for reconstructing a tomographic image f_j of a region of investigation within an object comprises the steps of: providing detector data y_i comprising Poisson random values measured at the i-th of a plurality of different positions, e.g. i = (k, l) with pixel index k on a detector device and angular index l referring to both the angular position α_l and the rotation radius r_l of the detector device relative to the object; providing a predetermined system matrix A_ij assigning the j-th voxel of the object to the i-th detector datum y_i; and reconstructing the tomographic image f_j from the detector data y_i, said reconstructing step including a procedure of minimizing a functional F(f) that depends on the detector data y_i and the system matrix A_ij and additionally includes a sparse or compressive representation of the object in an orthobasis T, wherein the tomographic image f_j represents the global minimum of the functional F(f). Furthermore, an imaging method and an imaging device using the image reconstruction method are described.
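As an illustration of minimizing such a Poisson functional, the sketch below uses the standard MLEM iteration, which minimizes the Poisson negative log-likelihood F(f) = Σ_i [(Af)_i − y_i log (Af)_i]; the patent's additional sparse representation in the orthobasis T is omitted, and the tiny system matrix is an assumption:

```python
import numpy as np

def mlem(A, y, n_iter=5000):
    # Expectation-maximization update for Poisson data:
    # f_j <- f_j * sum_i A_ij (y_i / (A f)_i) / sum_i A_ij
    sens = A.sum(axis=0)                  # detector sensitivity per voxel
    f = np.ones(A.shape[1])
    for _ in range(n_iter):
        pred = np.maximum(A @ f, 1e-12)   # avoid division by zero
        f *= (A.T @ (y / pred)) / sens
    return f

# Five rays through a 2x2 "object"; the oblique ray makes the solution unique.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 0., 1.]])
f_true = np.array([4., 1., 2., 3.])
f_rec = mlem(A, A @ f_true)               # noiseless expected counts
```

The multiplicative update keeps the image nonnegative and increases the Poisson likelihood at every step, which is why EM-type iterations are a common baseline for minimizing this class of functionals.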
Chen, Guang-Hong; Li, Yinsheng
2015-08-01
In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
Reconstruction errors and the temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE and UQIs of 45%, 0.71, and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images free of limited-view artifacts and with clear contrast differences among the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding to approximately 60° angular subsectors.
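The nuclear norm regularizer enters such reconstructions through its proximal operator, singular value thresholding of the spatial-temporal image matrix. A minimal sketch (the sizes, the rank-2 background-plus-contrast model, and the threshold are illustrative assumptions, not the SMART-RECON implementation):

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)

# Spatial-temporal matrix: each column is a time frame sharing a static
# background plus one contrast-enhancement dynamic -> rank 2.
n_pix, n_frames = 100, 4
background = np.outer(rng.random(n_pix), np.ones(n_frames))
contrast = np.outer(rng.random(n_pix), [0.0, 0.3, 0.7, 1.0])
X = background + contrast
X_noisy = X + 0.01 * rng.standard_normal(X.shape)

# Thresholding above the noise level restores the low-rank structure.
X_den = svt(X_noisy, tau=0.2)
```

Soft-thresholding removes the small singular values carrying noise (and, in the full algorithm, limited-view streaks) while only slightly shrinking the two large components that encode the shared anatomy and the contrast dynamics.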
A software tool of digital tomosynthesis application for patient positioning in radiotherapy.
Yan, Hui; Dai, Jian-Rong
2016-03-08
Digital tomosynthesis (DTS) is an imaging modality that reconstructs tomographic images from two-dimensional kV projections covering a narrow scan angle. Compared with conventional cone-beam CT (CBCT), it requires less time and radiation dose for data acquisition, and it is feasible to apply this technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed and the reconstruction processes were accelerated by a graphics processing unit (GPU). Two reconstruction and two registration processes are required for the DTS application, unlike the conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage consists of the production of two types of DTS. One type is reconstructed from cone-beam (CB) projections covering a narrow scan angle and is named onboard DTS (ODTS); it represents the actual patient position in the treatment room. The other type is reconstructed from digitally reconstructed radiographs (DRRs) and is named reference DTS (RDTS); it represents the ideal patient position in the treatment room. Prior to the reconstruction of RDTS, the DRRs are generated from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS. The target shifts along the lateral and longitudinal axes are obtained from matching ODTS and RDTS in the coronal view, while the target shifts along the longitudinal and vertical axes are obtained from matching them in the sagittal view. In this software, both the DRR and DTS reconstruction algorithms were implemented in GPU environments for acceleration. A comprehensive evaluation of this software tool was performed, including geometric accuracy, image quality, registration accuracy, and reconstruction efficiency.
The average correlation coefficient between the DRRs/DTS generated by the GPU-based algorithm and those generated by the CPU-based algorithm is 0.99. Based on measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm along all three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm along all three axes. Compared with the reconstruction performance of the CPU-based algorithms, the performance of the DRR and DTS reconstructions is improved by a factor of 15 to 20. A GPU-based software tool was developed for DTS application in patient positioning for radiotherapy. The geometric and registration accuracy met the clinical requirements of patient setup in radiotherapy. The high performance of the DRR and DTS reconstruction algorithms was achieved through the GPU-based computation environment. It is a useful software tool for researchers and clinicians evaluating DTS application in patient positioning for radiotherapy.
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain yield a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data, which is why, in this paper, a new data fidelity term is used to take photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods were used to decompose materials from a numerical phantom of a mouse. Soft tissue and bone are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in both 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
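The two data-fidelity terms can be sketched in a few lines. The following is a minimal, illustrative comparison (not the authors' implementation): `wls` is a weighted least-squares term matching a Gaussian noise model, and `kl` is the generalized Kullback-Leibler divergence matching the Poisson log-likelihood; here `y` stands for measured photon counts and `ybar` for the counts predicted by the forward model.

```python
import numpy as np

def wls(y, ybar, w=None):
    """Weighted least-squares data fidelity (Gaussian noise model).
    y: measured counts; ybar: counts predicted by the forward model."""
    y, ybar = np.asarray(y, float), np.asarray(ybar, float)
    if w is None:
        w = 1.0 / np.maximum(y, 1.0)   # a common choice: inverse-variance weights
    return 0.5 * np.sum(w * (y - ybar) ** 2)

def kl(y, ybar):
    """Generalized Kullback-Leibler divergence (Poisson noise model)."""
    y, ybar = np.asarray(y, float), np.asarray(ybar, float)
    # y*log(y/ybar) terms vanish where y == 0
    return np.sum(ybar - y + np.where(y > 0, y * np.log(y / ybar), 0.0))
```

Both terms are zero when the prediction matches the data and positive otherwise; they differ in how strongly they penalize mismatches at low counts, which is where the Poisson-matched KL term pays off.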
NASA Technical Reports Server (NTRS)
Mcdade, Ian C.
1991-01-01
Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin-scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for handling noise in the observational data: one inversion algorithm is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed with respect to (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium-intensity aurora with standard rocket photometer instruments. The procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 A rocket photometer measurements made in a tomographic spin-scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements made during the flight. Three preprints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
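The core of an algebraic reconstruction technique is a ray-by-ray correction of the current image estimate, with a relaxation factor playing the noise-damping role mentioned above. A minimal sketch under toy assumptions (a 2 x 2 "image" whose row and column sums stand in for photometer scans; not the paper's least-squares or maximum-probability variants):

```python
import numpy as np

def art(A, b, n_iter=200, relax=0.5, x0=None):
    """Algebraic Reconstruction Technique (Kaczmarz row-action updates).
    A: projection matrix (rays x pixels); b: measured line integrals;
    relax: relaxation factor in (0, 2) that damps the effect of noise."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(m):                 # correct the image one ray at a time
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x

# Toy example: 2x2 image, row and column sums as "projections".
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true
x_rec = art(A, b)
```

For this consistent toy system the iterates converge to a solution that reproduces the measured sums; for rank-deficient geometries Kaczmarz sweeps started from zero converge to the minimum-norm solution.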
Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI
NASA Astrophysics Data System (ADS)
Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg
2012-03-01
Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and aids the diagnosis and differential diagnosis of various movement disorders, especially Parkinsonian syndromes. However, adjacent brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographic slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time. To generate the MRI tomographic slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system computes and visualizes slices at a near-real-time rate. Preliminary tests of the system show an added value over pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.
Gilles, L; Ellerbroek, B L
2010-11-01
Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
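At the heart of a SLODAR-style profiler is the cross-correlation of wavefront-slope measurements from the two guide stars: the lag of each correlation peak maps to the altitude of a turbulent layer. A minimal 1-D sketch of the peak-offset estimation (illustrative only; a real system correlates 2-D Shack-Hartmann slope maps accumulated over a few hundred frames):

```python
import numpy as np

def correlation_offset(a, b):
    """Lag of the peak of the circular cross-correlation of two slope
    sequences, computed via FFT; lags beyond n/2 wrap to negative values."""
    A = np.fft.fft(a - a.mean())
    B = np.fft.fft(b - b.mean())
    xc = np.fft.ifft(A * np.conj(B)).real      # xc[k] = sum_i a[i] * b[i-k]
    k = int(np.argmax(xc))
    n = len(a)
    return k if k <= n // 2 else k - n

# Two "guide star" slope sequences sampling the same layer, displaced by
# 5 subapertures (the displacement encodes the layer altitude).
rng = np.random.default_rng(1)
slopes = rng.standard_normal(64)
offset = correlation_offset(np.roll(slopes, 5), slopes)
```

The recovered `offset` is 5 subapertures here; in SLODAR that geometric displacement, combined with the guide-star separation, yields the layer altitude.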
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, C.; et al.
We describe the concept and procedure of drifted-charge extraction developed in the MicroBooNE experiment, a single-phase liquid argon time projection chamber (LArTPC). This technique converts the raw digitized TPC waveform to the number of ionization electrons passing through a wire plane at a given time. A robust recovery of the number of ionization electrons from both the induction and collection anode wire planes will augment 3D reconstruction, and is particularly important for tomographic reconstruction algorithms. The building blocks of the overall procedure are described. The performance of the signal processing is quantitatively evaluated by comparing the extracted charge with the true charge through a detailed TPC detector simulation that takes into account the position-dependent induced current inside a single wire region and across multiple wires. Some areas for further improvement of the charge extraction procedure are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile
The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region, enabling tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of the reconstructions and the accuracy of the variance calculation have been identified.
Vertical structure of medium-scale traveling ionospheric disturbances
NASA Astrophysics Data System (ADS)
Ssessanga, Nicholas; Kim, Yong Ha; Kim, Eunsol
2015-11-01
We develop a computerized ionospheric tomography (CIT) algorithm to infer the vertical and horizontal structuring of electron density during nighttime medium-scale traveling ionospheric disturbances (MSTIDs). To facilitate the CIT we adopted total electron content (TEC) data from a dense Global Positioning System (GPS) receiver network, GEONET, which contains more than 1000 receivers. A multiplicative algebraic reconstruction technique was utilized with a calibrated IRI-2012 model as the initial solution. The reconstructed F2 peak layer varied in altitude with an average peak-to-peak amplitude of ~52 km. In addition, the F2 peak altitude anticorrelated with the TEC variations. This feature supports a theory in which nighttime MSTIDs are composed of oscillating electric fields arising from conductivity variations. Moreover, reconstructed TEC variations over two stations were reasonably close to the variations directly derived from the measured TEC data set. Our tomographic analysis may thus help in understanding the three-dimensional structure of MSTIDs in a quantitative way.
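A multiplicative ART update rescales every voxel crossed by a ray by the ratio of measured to predicted line integral, which is why it needs a strictly positive starting image (the role played here by the calibrated IRI-2012 model). A minimal sketch with a hypothetical 4-voxel "ionosphere" and toy ray sums standing in for slant TEC (not the paper's full 3-D implementation):

```python
import numpy as np

def mart(A, b, x0, n_iter=50, relax=1.0):
    """Multiplicative ART. A: ray x voxel intersection matrix; b: measured
    TEC-like line integrals; x0: strictly positive initial model."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred > 0 and b[i] > 0:
                # only voxels with A[i, j] > 0 are rescaled
                x *= (b[i] / pred) ** (relax * A[i])
    return x

A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
b = A @ np.array([1., 2., 3., 4.])    # synthetic, noise-free ray sums
x_rec = mart(A, b, x0=np.ones(4))
```

Because the updates are multiplicative, positivity of the electron density is preserved automatically, and the reconstruction stays anchored to the initial model wherever the rays provide no constraint.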
Data-processing strategies for nano-tomography with elemental specification
NASA Astrophysics Data System (ADS)
Liu, Yijin; Cats, Korneel H.; Nelson Weker, Johanna; Andrews, Joy C.; Weckhuysen, Bert M.; Pianetta, Piero
2013-10-01
Combining the energy tunability of synchrotron X-ray sources with transmission X-ray microscopy, the morphology of materials can be resolved in 3D at a spatial resolution down to 30 nm with elemental/chemical specification. In order to study the energy dependence of the absorption coefficient over the investigated volume, the tomographic reconstruction and the image registration (before and/or after the reconstruction) are critical. In this paper we compare two different data-processing strategies and conclude that the signal-to-noise ratio (S/N) of the final result can be improved by performing the tomographic reconstruction prior to evaluating the energy dependence. Our result echoes the dose fractionation theorem and is particularly helpful when the element of interest is present at low concentration.
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-01-01
Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimizing performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate on one block as much as possible before proceeding to the next. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data from a thorough study of the performance of tomographic reconstruction with varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
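The blocking idea itself is generic and easy to illustrate. The sketch below (not Tomo3D's code) applies it to an array transpose, where tiled access keeps both the read and write footprints inside cache-sized working sets; in the reconstruction setting the same pattern is applied to sinogram and slice blocks, with block sizes tuned to the cache hierarchy:

```python
import numpy as np

def blocked_transpose(a, block=64):
    """Cache-blocked transpose: process block x block tiles so that both
    the tile being read and the tile being written stay cache-resident."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            tile = a[i0:i0 + block, j0:j0 + block]   # slicing clips at the edges
            out[j0:j0 + block, i0:i0 + block] = tile.T
    return out
```

The result is identical to the unblocked operation; only the memory-access order changes, which is exactly why block size can be tuned (as the article does) without affecting the reconstruction itself.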
In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.
Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie
2010-06-07
Bioluminescence tomography (BLT) is a new optical molecular imaging modality that can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential for drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.
High-efficiency tomographic reconstruction of quantum states by quantum nondemolition measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J. S.; Centre for Quantum Technologies and Department of Physics, National University of Singapore, 3 Science Drive 2, Singapore 117542; Wei, L. F.
We propose a high-efficiency scheme to tomographically reconstruct an unknown quantum state by using a series of quantum nondemolition (QND) measurements. The proposed QND measurements of the qubits are implemented by probing the stationary transmissions through a driven, dispersively coupled resonator. It is shown that only one kind of QND measurement is sufficient to determine all the diagonal elements of the density matrix of the detected quantum state. The remaining off-diagonal elements can be similarly determined by transferring them to the diagonal locations after a series of unitary operations. Compared with tomographic reconstructions based on the usual destructive projective measurements (wherein one such measurement can determine only one diagonal element of the density matrix), the present approach exhibits significantly higher efficiency. Specifically, our generic proposal is demonstrated for experimental circuit quantum electrodynamics systems with a few Josephson charge qubits.
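The idea of transferring off-diagonal elements to the diagonal before a population measurement can be illustrated for a single qubit: measuring populations directly gives the diagonal of the density matrix, and measuring populations again after a Hadamard (and an extra phase gate) moves the real and imaginary parts of the coherence onto the diagonal. This is a textbook toy analogue, not the circuit-QED scheme of the paper:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # H Z H = X
S = np.array([[1, 0], [0, 1j]])                              # phase gate

def populations(rho):
    """A population (diagonal) measurement of the density matrix."""
    return np.real(np.diag(rho))

def reconstruct(rho):
    """Rebuild a single-qubit rho from population measurements only."""
    pz = populations(rho)                      # diagonal elements directly
    px = populations(H @ rho @ H.conj().T)     # <X> rotated onto the diagonal
    U = H @ S.conj().T                         # chosen so U^dag Z U = Y
    py = populations(U @ rho @ U.conj().T)     # <Y> rotated onto the diagonal
    ez, ex, ey = pz[0] - pz[1], px[0] - px[1], py[0] - py[1]
    return 0.5 * (I2 + ex * X + ey * Y + ez * Z)
```

Three population measurements thus suffice for a qubit, versus one diagonal element per destructive projective measurement, which is the efficiency gain the abstract refers to.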
NASA Technical Reports Server (NTRS)
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was applied to a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating the high accuracy of the numerical reconstruction. Moreover, the data were used to quantify (a) the distribution of individual-fiber cross sections within the imaged sub-volume, and (b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based virtual-testing framework for graphite/epoxy composites at the constituent level.
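The Kalman-filter portion of such a tracker is simple to sketch. Below, a constant-slope state model follows one fiber's in-plane position from image slice to image slice through noisy centre detections; the paper couples many such filters through Global Nearest Neighbor data association, which is not shown, and all parameter values here are illustrative:

```python
import numpy as np

def kalman_track(zs, q=1e-4, r=2.5e-3):
    """Track one fiber's x-position across image slices.
    State: [position, slope per slice]; zs: one noisy detection per slice;
    q, r: illustrative process and measurement noise variances."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-slope motion model
    Hm = np.array([[1.0, 0.0]])              # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    track = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                    # predict next slice
        K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)  # Kalman gain
        x = x + K @ (np.array([z]) - Hm @ x)             # measurement update
        P = (np.eye(2) - K @ Hm) @ P
        track.append(x[0])
    return np.array(track)
```

Once the transient settles, the filtered track follows the true fiber path with less error than the raw detections, which is what makes the downstream stitching and misorientation statistics robust.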
Exemplar-based inpainting as a solution to the missing wedge problem in electron tomography.
Trampert, Patrick; Wang, Wu; Chen, Delei; Ravelli, Raimond B G; Dahmen, Tim; Peters, Peter J; Kübel, Christian; Slusallek, Philipp
2018-04-21
A new method for dealing with incomplete projection sets in electron tomography is proposed. The approach is inspired by exemplar-based inpainting techniques in image processing and heuristically generates data for missing projection directions; the method has been extended to work on three-dimensional data. In general, electron tomography reconstructions suffer from elongation artifacts along the beam direction, which appear in the corresponding Fourier domain as a missing wedge. The new method synthetically generates projections for these missing directions with the help of a dictionary-based approach that is able to convey both structure and texture at the same time. It constitutes a preprocessing step that can be combined with any tomographic reconstruction algorithm. The new algorithm was applied to phantom data, to a real electron tomography dataset taken from a catalyst, and to a real dataset containing solely colloidal gold particles. Visually, the synthetic projections, reconstructions, and corresponding Fourier power spectra showed a decrease in the typical missing-wedge artifacts. Quantitatively, the inpainting method is capable of reducing missing-wedge artifacts and improves tomogram quality with respect to full-width-at-half-maximum measurements. Copyright © 2018. Published by Elsevier B.V.
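The missing wedge itself is easy to picture: by the central-slice theorem, a single-axis tilt series over +/- theta_max samples only those spatial frequencies within theta_max of the in-plane axis, leaving an unsampled wedge around the beam direction. A small 2-D sketch of the sampled-region mask (illustrative only; the paper's inpainting then synthesizes projections for the unsampled directions):

```python
import numpy as np

def sampled_mask(n, tilt_max_deg):
    """Boolean mask of Fourier samples covered by a +/- tilt_max single-axis
    tilt series; False entries form the missing wedge around the beam (kz)
    axis."""
    k = np.fft.fftfreq(n)
    kx, kz = np.meshgrid(k, k, indexing="ij")
    # angle of each frequency sample: 0 deg on the kx-axis, 90 deg on kz
    ang = np.degrees(np.arctan2(np.abs(kz), np.abs(kx)))
    return ang <= tilt_max_deg
```

Widening the tilt range shrinks the wedge; a typical +/- 60 degree series still leaves a sizeable unsampled region, which is what produces the elongation artifacts along the beam direction.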
Ketoff, Serge; Khonsari, Roman Hossein; Schouman, Thomas; Bertolus, Chloé
2014-11-01
Handling 3-dimensional reconstructions of computed tomographic scans on portable devices is problematic because of the size of the Digital Imaging and Communications in Medicine (DICOM) stacks. The authors provide a user-friendly method allowing the production, transfer, and sharing of good-quality 3-dimensional reconstructions on smartphones and tablets. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible [1], but have also increased the demand for new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim of fostering collaboration among computational scientists, beamline scientists, and experimentalists, and of accelerating the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
A flexible, small positron emission tomography prototype for resource-limited laboratories
NASA Astrophysics Data System (ADS)
Miranda-Menchaca, A.; Martínez-Dávalos, A.; Murrieta-Rodríguez, T.; Alva-Sánchez, H.; Rodríguez-Villafuerte, M.
2015-05-01
Modern small-animal PET scanners typically consist of a large number of detectors along with complex electronics to provide tomographic images for research in the preclinical sciences that uses animal models. These systems can be expensive, especially for resource-limited educational and academic institutions in developing countries. In this work we show that a small-animal PET scanner can be built on a relatively modest budget while, at the same time, achieving relatively high performance. The prototype consists of four detector modules, each composed of pixelated LYSO crystal arrays (individual crystal elements of dimensions 1 × 1 × 10 mm3) coupled to position-sensitive photomultiplier tubes. Tomographic images are obtained by rotating the subject to complete enough projections for image reconstruction. Image quality was evaluated for different reconstruction algorithms, including filtered back-projection and iterative reconstruction with maximum likelihood-expectation maximization and maximum a posteriori methods. The system matrix was computed both from geometric considerations and by Monte Carlo simulations. Prior to image reconstruction, Fourier data rebinning was used to increase the number of lines of response used. The system was evaluated for energy resolution at 511 keV (best 18.2%), system sensitivity (0.24%), spatial resolution (best 0.87 mm), scatter fraction (4.8%), and noise-equivalent count rate. The system can be scaled up to include up to 8 detector modules, increasing detection efficiency, and its price may be reduced as newer solid-state detectors become available to replace the traditional photomultiplier tubes. Prototypes like this may prove to be very valuable for educational, training, preclinical, and other biological research purposes.
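Of the algorithms listed, maximum likelihood-expectation maximization (MLEM) has a particularly compact form: forward-project the current image, compare with the measured counts, and back-project the ratio. A minimal sketch with a hypothetical 3-voxel object and three lines of response (a real system matrix comes from geometry or Monte Carlo simulation, as in the paper):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum likelihood-expectation maximization for emission tomography.
    A: system matrix (lines of response x voxels); y: measured counts."""
    x = np.ones(A.shape[1])                 # uniform positive initial image
    sens = A.sum(axis=0)                    # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy system: 3 lines of response viewing 3 voxels, noise-free counts.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
y = A @ np.array([2., 1., 3.])
x_rec = mlem(A, y)
```

The multiplicative update keeps the image non-negative and preserves consistency with the measured counts, which is why MLEM (and its MAP-regularized variants) is a natural fit for the low-count regime of a small benchtop scanner.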
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
Program summary
Program title: STOMO version 1.0
Catalogue identifier: AEFS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2988
No. of bytes in distributed program, including test data, etc.: 191 605
Distribution format: tar.gz
Programming language: C/C++
Computer: PC
Operating system: Windows XP
RAM: Depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb
Supplementary material: Sample output files, for the test run provided, are available.
Classification: 7.4, 14
External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities.
Solution method: Reconstruction using differential geometry applied to image analysis computations.
Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting.
Running time: For high quality reconstruction, 5-15 min
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, a modified ordered-subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area-under-the-curve values from the ROC analysis were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image from the prompt gamma ray events of the BNCT simulation was thus acquired using GPU computation, enabling fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray image reconstruction using GPU computation for BNCT simulations.
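Ordered-subset expectation maximization (OSEM) differs from plain MLEM only in that each update uses a subset of the projections, giving roughly a subset-count speedup per pass; the cheap, data-parallel forward and back projections are also what map well onto a GPU. A plain (unmodified) OSEM sketch on a toy system, with the paper's modifications and GPU kernels not reproduced:

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=100):
    """Ordered-subset expectation maximization.
    A: system matrix (projection bins x voxels); y: measured counts."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:                    # one EM-style update per subset
            As = A[idx]
            proj = As @ x
            ratio = np.where(proj > 0, y[idx] / proj, 0.0)
            sens = As.sum(axis=0)
            # voxels unseen by this subset are left unchanged
            x *= np.where(sens > 0, (As.T @ ratio) / np.maximum(sens, 1e-12), 1.0)
    return x

# Toy: 4 projection bins viewing 2 voxels; each subset sees both voxels.
A = np.array([[1., 1.], [2., 1.], [1., 2.], [1., 3.]])
y = A @ np.array([2., 3.])
x_rec = osem(A, y)
```

Subsets should be chosen so that each one covers the whole volume reasonably well; badly balanced subsets slow convergence or leave voxels poorly constrained.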
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angular range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function comprises a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables, and the data fidelity term is renewed after a descent step in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, resulting in fast and robust estimation from low-dose projections over a limited angular range. Initial tests with an experimental DBT system confirmed our findings.
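The gradient-type structure described above is easiest to see in the denoising special case (system matrix equal to the identity), where the objective is a data-fidelity term plus a TV penalty. A 1-D sketch with illustrative parameters, using a smoothed primal TV penalty rather than the paper's dual formulation and statistical X-ray model:

```python
import numpy as np

def tv_denoise_1d(b, lam=0.2, eps=1e-2, step=0.1, n_iter=1000):
    """Gradient descent on 0.5*||x - b||^2 + lam * sum(sqrt((diff x)^2 + eps)).
    eps smooths the absolute value so the objective is differentiable;
    lam, step, n_iter are illustrative settings for this toy problem."""
    x = b.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)          # derivative of sqrt(d^2 + eps)
        tv_grad = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
        x -= step * ((x - b) + lam * tv_grad)
    return x
```

The fidelity term keeps the estimate close to the data while the TV term suppresses oscillations without blurring genuine edges, which is the behavior the full reconstruction algorithm seeks on limited-angle, low-dose projections.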
Acoustic representation of tomographic data
NASA Astrophysics Data System (ADS)
Wampler, Cheryl; Zahrt, John D.; Hotchkiss, Robert S.; Zahrt, Rebecca; Kust, Mark
1993-04-01
Tomographic data and tomographic reconstructions are naturally periodic in the angle of rotation of the turntable and in the polar angle of the coordinates in the object, respectively. Similarly, acoustic waves are periodic and have amplitude and wavelength as free parameters that can be fit to another representation. Work has been in progress for some time on bringing the acoustic senses to bear on large data sets, rather than just the visual sense. We will provide several different acoustic representations of both raw data and density maps. Rather than graphical portrayals of the data and reconstructions, various 'tone poems' will be presented.
GATE - Geant4 Application for Tomographic Emission: a simulation toolkit for PET and SPECT
Jan, S.; Santin, G.; Strul, D.; Staelens, S.; Assié, K.; Autret, D.; Avner, S.; Barbier, R.; Bardiès, M.; Bloomfield, P. M.; Brasse, D.; Breton, V.; Bruyndonckx, P.; Buvat, I.; Chatziioannou, A. F.; Choi, Y.; Chung, Y. H.; Comtat, C.; Donnarieix, D.; Ferrer, L.; Glick, S. J.; Groiselle, C. J.; Guez, D.; Honore, P.-F.; Kerhoas-Cavata, S.; Kirov, A. S.; Kohli, V.; Koole, M.; Krieguer, M.; van der Laan, D. J.; Lamare, F.; Largeron, G.; Lartizien, C.; Lazaro, D.; Maas, M. C.; Maigne, L.; Mayet, F.; Melot, F.; Merheb, C.; Pennacchio, E.; Perez, J.; Pietrzyk, U.; Rannou, F. R.; Rey, M.; Schaart, D. R.; Schmidtlein, C. R.; Simon, L.; Song, T. Y.; Vieira, J.-M.; Visvikis, D.; Van de Walle, R.; Wieërs, E.; Morel, C.
2012-01-01
Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols, and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. This paper gives a detailed description of the design and development of GATE by the OpenGATE collaboration, whose continuing objective is to improve, document, and validate GATE by simulating commercially available imaging systems for PET and SPECT. A large effort is also invested in the ability and flexibility to model novel detection systems or systems still under design. A public release of GATE, licensed under the GNU Lesser General Public License, can be downloaded at http://www-lphe.ep.ch/GATE/. Two benchmarks, developed for PET and SPECT to test the installation of GATE and to serve as a tutorial for users, are presented. Extensive validation of the GATE simulation platform has been started, comparing simulations and measurements on commercially available acquisition systems; references to those results are listed. Future prospects toward the gridification of GATE and its extension to other domains such as dosimetry are also discussed. PMID:15552416
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Yifei; Zuo, Jian-Min
A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
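The "algebraic iterative algorithm" invoked in the SEND reconstruction above belongs to the ART family, whose core update is a sequence of row-wise projections. The sketch below is a generic Kaczmarz/ART illustration under assumed names (`A`, `y`), not the implementation used in the paper:

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=100):
    """Kaczmarz / ART sweeps: project the current estimate onto each
    measurement hyperplane a_i . x = y_i in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, y_i in zip(A, y):
            x += (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

For a consistent system the sweeps converge to a solution; in tomographic practice, positivity and other prior conditions are typically enforced between sweeps.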
A software tool of digital tomosynthesis application for patient positioning in radiotherapy
Dai, Jian‐Rong
2016-01-01
Digital Tomosynthesis (DTS) is an imaging modality that reconstructs tomographic images from two‐dimensional kV projections covering a narrow scan angle. Compared with conventional cone‐beam CT (CBCT), it requires less time and radiation dose for data acquisition, so it is feasible to apply this technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed and the reconstruction processes were accelerated with a graphics processing unit (GPU). DTS application requires two reconstruction and two registration processes, unlike conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage produces two types of DTS. One type is reconstructed from cone‐beam (CB) projections covering a narrow scan angle and is named onboard DTS (ODTS); it represents the real patient position in the treatment room. The other type is reconstructed from digitally reconstructed radiographs (DRRs) and is named reference DTS (RDTS); it represents the ideal patient position in the treatment room. Prior to the reconstruction of RDTS, the DRRs are computed from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS. The target shifts in the lateral and longitudinal axes are obtained from matching ODTS and RDTS in the coronal view, while the target shifts in the longitudinal and vertical axes are obtained from matching in the sagittal view. In this software, both the DRR and DTS reconstruction algorithms were implemented on the GPU for acceleration. A comprehensive evaluation of the software tool was performed, including geometric accuracy, image quality, registration accuracy, and reconstruction efficiency.
The average correlation coefficient between DRRs/DTS generated by the GPU‐based and CPU‐based algorithms is 0.99. Based on measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm in all three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm in all three axes. Compared with the CPU‐based algorithms, the performance of the DRR and DTS reconstructions is improved by a factor of 15 to 20. In summary, a GPU‐based software tool was developed for DTS-based patient positioning in radiotherapy. The geometric and registration accuracy met the clinical requirements for patient setup, and the high performance of the DRR and DTS reconstruction algorithms was achieved through the GPU‐based computing environment. It is a useful software tool for researchers and clinicians evaluating DTS for patient positioning in radiotherapy. PACS number(s): 87.57.nf PMID:27074482
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2014-12-01
Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes, with tomographic, downsampling, and shifting matrices as its building blocks. Based on this model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling, to further improve image quality at the cost of more computation. The authors use simulated reconstructions of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method.
The authors also demonstrate that reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that greater spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images from super-sampled data sets with known acquisition motion. Super-sampling PET acquisition using the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
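For a generic linear Poisson model, the MLEM extension described above reduces to the familiar multiplicative update; in the super-sampling case the tomographic, downsampling, and shifting matrices would be composed into the system matrix. A minimal sketch with illustrative names, not the authors' code:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM iterations for y ~ Poisson(A @ x), with a nonnegative
    system matrix A of shape (n_meas, n_vox)."""
    x = np.ones(A.shape[1])                    # uniform nonnegative start
    sens = A.sum(axis=0)                       # sensitivity (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The update preserves nonnegativity automatically, which is one reason MLEM tends to outperform unconstrained schemes such as Landweber on count-limited data.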
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca
2014-11-01
Purpose: The scanning beam digital x-ray system (SBDX) is an inverse-geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifacts due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients’ CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions.
The dominant artifacts in the subtraction images are caused by mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.
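The shift-and-add (SAA) reconstruction that the BAA model approximates can be stated very compactly: each projection is shifted so that the selected plane aligns across views, then the views are averaged, leaving out-of-plane structure as the blur that OPAST subtracts. A one-dimensional sketch with assumed names, not the SBDX implementation:

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Align the chosen plane by integer pixel shifts, then average.
    In-plane detail adds coherently; out-of-plane detail blurs."""
    acc = np.zeros_like(projections[0], dtype=float)
    for p, s in zip(projections, shifts):
        acc += np.roll(p, s)
    return acc / len(projections)
```

Changing the shift schedule refocuses a different plane, which is how the system achieves tomosynthesis in multiple planes from one set of projections.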
Tomographic Image Reconstruction Using an Interpolation Method for Tree Decay Detection
Hailin Feng; Guanghui Li; Sheng Fu; Xiping Wang
2014-01-01
Stress wave velocity has traditionally been regarded as an indicator of the extent of damage inside wood. This paper aimed to detect internal decay in urban trees by reconstructing a tomographic image of the cross section of a tree trunk. A grid model covering the cross-sectional area of a tree trunk was defined with some assumptions. Stress wave data were processed...
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high-energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
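The deconvolution step with a measured transfer function is commonly realized as a damped inverse filter in the frequency domain. The sketch below is a generic Wiener/Tikhonov-style version with assumed names and an assumed damping constant; the patented method may differ:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, eps=1e-3):
    """Damped frequency-domain deconvolution: divide by the transfer
    function H where it is large, suppress frequencies where |H| ~ 0."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

The damping term `eps` trades residual blur against noise amplification at frequencies where the measured transfer function is near zero.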
Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael
2010-01-01
Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research through the eyes of practical medical physicists and explaining the disconnect between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues, with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. PMID:20376330
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method reconstructed a substantially more accurate fluorescence distribution overall. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
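The first step above, truncated singular value decomposition, can be sketched as projecting the data onto the leading left singular vectors of the system matrix, discarding the ill-conditioned modes that make the matrix coherent. The names and the exact form of the reduction are illustrative, not taken from the paper:

```python
import numpy as np

def tsvd_precondition(A, y, k):
    """Reduce the system (A, y) to its top-k singular subspace."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = np.diag(s[:k]) @ Vt[:k]   # reduced forward model, k x n_vox
    y_k = U[:, :k].T @ y            # data expressed in the truncated basis
    return A_k, y_k
```

Any subsequent solver (homotopy l1, MLEM, least squares) then operates on the better-conditioned reduced system instead of the original one.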
A synchrotron radiation microtomography system for the analysis of trabecular bone samples.
Salomé, M; Peyrin, F; Cloetens, P; Odet, C; Laval-Jeantet, A M; Baruchel, J; Spanne, P
1999-10-01
X-ray computed microtomography is particularly well suited for studying trabecular bone architecture, which requires three-dimensional (3-D) images with high spatial resolution. For this purpose, we describe a three-dimensional computed microtomography (microCT) system using synchrotron radiation, developed at the ESRF. Since synchrotron radiation provides a monochromatic, high-photon-flux x-ray beam, it allows high-resolution, high signal-to-noise ratio imaging. The principle of the system is based on truly three-dimensional parallel tomographic acquisition. It uses a two-dimensional (2-D) CCD-based detector to record 2-D radiographs of the beam transmitted through the sample under different angles of view. The 3-D tomographic reconstruction, performed by an exact 3-D filtered backprojection algorithm, yields 3-D images with cubic voxels. The spatial resolution of the detector was experimentally measured. For the application to bone investigation, the voxel size was set to 6.65 microm, and the experimental spatial resolution was found to be 11 microm. The reconstructed linear attenuation coefficient was calibrated from hydroxyapatite phantoms. Image processing tools are being developed to extract structural parameters quantifying trabecular bone architecture from the 3-D microCT images. First results on human trabecular bone samples are presented.
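The filtering half of filtered backprojection is a ramp filter |f| applied to each projection before backprojecting. A minimal 1D sketch of that step, for illustration only (the system above uses an exact 3D parallel-beam implementation):

```python
import numpy as np

def ramp_filter(projection):
    """Multiply the projection's spectrum by |f|, the FBP ramp filter."""
    n = projection.shape[-1]
    freqs = np.fft.fftfreq(n)   # cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))
```

The filter zeroes the DC component and boosts high frequencies, which is what compensates for the 1/|f| oversampling of low frequencies inherent in backprojection.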
Sodankylä ionospheric tomography dataset 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, J.; Roininen, L.; Kero, A.; Raita, T.; Ulich, T.; Markkanen, M.; Juusola, L.; Kauristie, K.
2015-12-01
Sodankylä Geophysical Observatory has been operating a tomographic receiver network and collecting the produced data since 2003. The collected dataset consists of phase difference curves measured from Russian COSMOS dual-frequency (150/400 MHz) low-Earth-orbit satellite signals, and tomographic electron density reconstructions obtained from these measurements. In this study, vertical total electron content (VTEC) values are integrated from the reconstructed electron densities as a qualitative and quantitative analysis to validate the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fenno-Scandinavian sector. Altogether the analysis covers around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to descending (north-to-south) overflights with maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this dataset, one solar cycle of ionospheric vertical total electron content estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI-2012) model, the F10.7 solar flux index and sunspot number data. Qualitatively, the tomographic VTEC estimates correspond to the reference data very well, but the IRI-2012 model values are on average 40% higher than the tomographic results.
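The VTEC values used for validation are vertical integrals of the reconstructed electron density. A sketch of that integration step, with an assumed profile representation (altitude grid in km, density in m^-3), reporting TEC units (1 TECU = 1e16 electrons/m^2):

```python
import numpy as np

def vtec_from_profile(alt_km, ne_m3):
    """Trapezoidal integration of an electron density profile over
    altitude, returned in TEC units (1 TECU = 1e16 electrons / m^2)."""
    alt_m = np.asarray(alt_km, dtype=float) * 1e3
    ne = np.asarray(ne_m3, dtype=float)
    tec = np.sum(0.5 * (ne[1:] + ne[:-1]) * np.diff(alt_m))
    return tec / 1e16
```

In the study itself the integration is over each column of the 2D tomographic reconstruction rather than a single profile, but the unit bookkeeping is the same.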
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development, and drug delivery. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction with the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra-wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper, we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guang-Hong, E-mail: gchen7@wisc.edu; Li, Yinsheng
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast-enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two-dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast-enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate its initial performance.
Reconstruction errors and the temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast-enhanced cone beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts or with clear contrast differences in the three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors.
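Nuclear-norm regularization of the spatial–temporal image matrix, as in SMART-RECON, is typically driven by its proximal operator: singular value thresholding. A generic sketch (the threshold `tau` and the names are illustrative, not the authors' implementation):

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * (nuclear norm): soft-threshold the
    singular values of M, which promotes a low-rank estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

In an iterative scheme this step would alternate with a data-fidelity update over the stacked time-frame columns of the matrix.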
Multi-modal molecular diffuse optical tomography system for small animal imaging
Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid
2013-01-01
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near-infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown, and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977
Three-dimensional reconstruction for coherent diffraction patterns obtained by XFEL.
Nakano, Miki; Miyashita, Osamu; Jonic, Slavica; Song, Changyong; Nam, Daewoong; Joti, Yasumasa; Tama, Florence
2017-07-01
The three-dimensional (3D) structural analysis of single particles using an X-ray free-electron laser (XFEL) is a new structural biology technique that enables observations of molecules that are difficult to crystallize, such as flexible biomolecular complexes and living tissue in a state close to physiological conditions. In order to restore the 3D structure from the diffraction patterns obtained by the XFEL, computational algorithms are necessary, as the orientation of the incident beam with respect to the sample needs to be estimated. A program package for XFEL single-particle analysis has been developed based on the Xmipp software package, which is commonly used for image processing in 3D cryo-electron microscopy. The reconstruction program has been tested using diffraction patterns of an aerosol nanoparticle obtained by tomographic coherent X-ray diffraction microscopy.
Meng, Yifei; Zuo, Jian-Min
2016-09-01
A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Thus, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
GPU-based prompt gamma ray imaging from boron neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Do-Kun; Jung, Joo-Young; Suh, Tae Suk, E-mail: suhsanta@catholic.ac.kr
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under the curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray events from the BNCT simulation was acquired using GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray image reconstruction using GPU computation for BNCT simulations.
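The ordered subset expectation maximization (OSEM) update named above is multiplicative: each subset of rays rescales the image by the backprojected ratio of measured to predicted counts. A schematic dense-matrix NumPy sketch (not the authors' GPU code; the interleaved subset split and function name are our choices):

```python
import numpy as np

def osem(A, b, n_subsets=4, n_iters=10, eps=1e-12):
    """Ordered-subset EM for b ~ A @ x with a nonnegative image x.

    A: (n_rays, n_pixels) system matrix; b: measured counts.
    Rays are split into interleaved subsets; each sub-iteration applies
    the multiplicative EM update using only that subset's rays.
    """
    n_rays, n_pix = A.shape
    x = np.ones(n_pix)
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            As = A[rows]
            fwd = As @ x                          # forward projection
            ratio = b[rows] / np.maximum(fwd, eps)
            sens = As.T @ np.ones(len(rows))      # subset sensitivity image
            # Update only pixels this subset is sensitive to.
            upd = np.divide(As.T @ ratio, sens,
                            out=np.ones_like(x), where=sens > 0)
            x *= upd
    return x
```

Using more subsets accelerates early convergence at the cost of a noisier limit, which is why the subset count is a tuning parameter in practice.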
TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Suh, T; Yoon, D
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under the curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray events from the BNCT simulation was acquired using GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray reconstruction using GPU computation for BNCT simulations.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As an effect, the reconstructed images are spiky and depict a lack of smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
Refinement procedure for the image alignment in high-resolution electron tomography.
Houben, L; Bar Sadan, M
2011-01-01
High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction-based, marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with reference marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Barnaby D. A.; Padgett, Elliot; Chen, Chien-Chun
Electron tomography in materials science has flourished with the demand to characterize nanoscale materials in three dimensions (3D). Access to experimental data is vital for developing and validating reconstruction methods that improve resolution and reduce radiation dose requirements. This work presents five high-quality scanning transmission electron microscope (STEM) tomography datasets in order to address the critical need for open access data in this field. The datasets represent the current limits of experimental technique, are of high quality, and contain materials with structural complexity. Included are tomographic series of a hyperbranched Co2P nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the complete 180° tilt range, a platinum nanoparticle and a tungsten needle both imaged at atomic resolution by equal slope tomography, and a through-focal tilt series of PtCu nanoparticles. A volumetric reconstruction from every dataset is provided for comparison and development of post-processing and visualization techniques. Researchers interested in creating novel data processing and reconstruction algorithms will now have access to state-of-the-art experimental test data.
Preliminary experiments on pharmacokinetic diffuse fluorescence tomography of CT-scanning mode
NASA Astrophysics Data System (ADS)
Zhang, Yanqi; Wang, Xin; Yin, Guoyan; Li, Jiao; Zhou, Zhongxing; Zhao, Huijuan; Gao, Feng; Zhang, Limin
2016-10-01
In vivo tomographic imaging of fluorescence pharmacokinetic parameters in tissues can provide additional specific and quantitative physiological and pathological information beyond that of fluorescence concentration. This modality normally requires a highly sensitive diffuse fluorescence tomography (DFT) system working in a dynamic way to extract the pharmacokinetic parameters from the measured pharmacokinetics-associated, temporally varying boundary intensity. This paper is devoted to preliminary experimental validation of our proposed direct reconstruction scheme for instantaneous-sampling-based pharmacokinetic DFT: a highly sensitive DFT system of CT-scanning mode working with four parallel photomultiplier-tube photon-counting channels is developed to generate an instantaneous sampling dataset; a direct reconstruction scheme then extracts images of the pharmacokinetic parameters using the adaptive-EKF strategy. We design a dynamic phantom that can simulate agent metabolism in living tissue. The results of the dynamic phantom experiments verify the validity of the experimental system and reconstruction algorithms, and demonstrate that the system provides good resolution, high sensitivity, and quantitative accuracy at different pump speeds.
Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami
We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics along with a data-processing method to extract information on refraction from the measured intensities, and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. Then, we developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed methods. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. Its CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, adipose and fibrous tissue. They correlate well with histological sections.
Glaser, Adam K; Andreozzi, Jacqueline M; Zhang, Rongxiao; Pogue, Brian W; Gladstone, David J
2015-07-01
To test the use of a three-dimensional (3D) optical cone beam computed tomography reconstruction algorithm for estimation of the imparted 3D dose distribution from megavoltage photon beams in a water tank for quality assurance, by imaging the induced Cherenkov-excited fluorescence (CEF). An intensified charge-coupled device coupled to a standard nontelecentric camera lens was used to tomographically acquire two-dimensional (2D) projection images of CEF from a complex multileaf collimator (MLC) shaped 6 MV linear accelerator x-ray photon beam operating at a dose rate of 600 MU/min. The resulting projections were used to reconstruct the 3D CEF light distribution, a potential surrogate of imparted dose, using a Feldkamp-Davis-Kress cone beam backprojection reconstruction algorithm. Finally, the reconstructed light distributions were compared to the expected dose values from one-dimensional diode scans, 2D film measurements, and the 3D distribution generated from the clinical Varian ECLIPSE treatment planning system using a gamma index analysis. A Monte Carlo derived correction was applied to the Cherenkov reconstructions to account for beam hardening artifacts. 3D light volumes were successfully reconstructed over a 400 × 400 × 350 mm(3) volume at a resolution of 1 mm. The Cherenkov reconstructions showed agreement with all comparative methods and were also able to recover both inter- and intra-MLC leaf leakage. Based upon a 3%/3 mm criterion, the experimental Cherenkov light measurements showed an 83%-99% pass fraction depending on the chosen threshold dose. The results from this study demonstrate the use of optical cone beam computed tomography using CEF for the profiling of the imparted dose distribution from large area megavoltage photon beams in water.
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in conical surface definitions ("thick" conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Blob-enhanced reconstruction technique
NASA Astrophysics Data System (ADS)
Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso
2016-09-01
A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy, is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; the distribution is then rebuilt as a sum of Gaussian blobs, based on the location, intensity, and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, with the improvement being more considerable when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, simultaneously achieving remarkable improvements in the flow field measurements and benefiting from the reduction in computational time due to the MR approach.
Finally, BERT is also tested on experimental data, yielding an increase of the signal-to-noise ratio in the reconstructed flow field and a higher correlation factor in the velocity measurements with respect to reconstructions in which the blobs are not substituted.
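The blob substitution at the core of BERT can be caricatured in 2D: keep local maxima above a threshold and rebuild the field as a sum of Gaussians seeded at those peaks. A schematic sketch (the fixed sigma, the 3x3 peak test, and the function name are our simplifications of the paper's agglomerate analysis):

```python
import numpy as np

def blob_rebuild(field, thresh, sigma=1.0):
    """Rebuild an intensity field as a sum of Gaussian blobs (schematic 2D version).

    Interior local maxima above `thresh` are kept; each seeds a Gaussian whose
    amplitude is the local peak value, regularizing the reconstructed particle
    shape. Flat plateaus are seeded at every plateau pixel in this sketch.
    """
    ny, nx = field.shape
    out = np.zeros_like(field, dtype=float)
    yy, xx = np.mgrid[0:ny, 0:nx]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            v = field[j, i]
            if v <= thresh or v < field[j - 1:j + 2, i - 1:i + 2].max():
                continue  # below threshold, or not a local maximum
            out += v * np.exp(-((xx - i) ** 2 + (yy - j) ** 2) / (2 * sigma ** 2))
    return out
```

In BERT the blob size and intensity are estimated from the agglomerate surrounding each maximum, and the rebuilt field can be fed back as a predictor for further algebraic iterations.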
Xu, Lijun; Liu, Chang; Jing, Wenyang; Cao, Zhang; Xue, Xin; Lin, Yuzhen
2016-01-01
To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm-1 (1343.3 nm) and 7185.6 cm-1 (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current control signals for the laser diodes but also accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalence ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
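The Landweber iteration referenced above is gradient descent on the data misfit, x_{k+1} = x_k + relax * A^T (b - A x_k); a common modification, used for quantities such as absorbance that cannot be negative, projects each iterate onto the nonnegative orthant. A sketch (the relaxation choice is a textbook default, not necessarily the authors' modification):

```python
import numpy as np

def landweber(A, b, n_iters=200, relax=None, nonneg=True):
    """(Modified) Landweber iteration for the linear tomographic system A x = b."""
    if relax is None:
        # Convergence requires 0 < relax < 2 / ||A||_2^2.
        relax = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + relax * (A.T @ (b - A @ x))  # gradient step on ||Ax - b||^2 / 2
        if nonneg:
            x = np.maximum(x, 0.0)           # projection: physical nonnegativity
    return x
```

With only 60 rays and five views the system is heavily underdetermined in practice, which is why regularizing modifications (early stopping, smoothing, nonnegativity) matter.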
SXR measurement and W transport survey using GEM tomographic system on WEST
NASA Astrophysics Data System (ADS)
Mazon, D.; Jardin, A.; Malard, P.; Chernyshova, M.; Coston, C.; O'Mullane, M.; Czarski, T.; Malinowski, K.; Faisse, F.; Ferlay, F.; Verger, J. M.; Bec, A.; Larroque, S.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.
2017-11-01
Measuring Soft X-Ray (SXR) radiation (0.1-20 keV) of fusion plasmas is a standard way of accessing valuable information on particle transport. Since heavy impurities like tungsten (W) could degrade plasma core performance and cause radiative collapses, it is necessary to develop new diagnostics able to monitor the impurity distribution in harsh fusion environments like ITER. A gaseous detector with energy discrimination would be a very good candidate for this purpose. The design and implementation of a new SXR diagnostic developed for the WEST project, based on a triple Gas Electron Multiplier (GEM) detector, is presented. This detector works in photon counting mode and presents energy discrimination capabilities. The SXR system is composed of two 1D cameras (vertical and horizontal views, respectively), located in the same poloidal cross-section to allow for tomographic reconstruction. An array (20 cm × 2 cm) consists of up to 128 detectors in front of a beryllium pinhole (equipped with a 1 mm diameter diaphragm) inserted at about 50 cm depth inside a cooled thimble in order to retrieve a wide plasma view. Acquisition of the low-energy spectrum is ensured by a helium buffer installed between the pinhole and the detector. Complementary water cooling systems are used to maintain a constant temperature (25 °C) inside the thimble. Finally, a real-time automatic extraction system has been developed to protect the diagnostic during baking phases or any unwanted overheating events. Preliminary simulations of plasma emissivity and W distribution have been performed for WEST using a recently developed synthetic diagnostic coupled to a tomographic algorithm based on the minimum Fisher information (MFI) inversion method. First GEM acquisitions are presented, as well as an estimation of transport effects in the presence of ICRH on the W density reconstruction capabilities of the GEM.
NASA Astrophysics Data System (ADS)
Flynn, Brendan P.; D'Souza, Alisha V.; Kanick, Stephen C.; Maytin, Edward; Hasan, Tayyaba; Pogue, Brian W.
2013-03-01
Aminolevulinic acid (ALA)-induced Protoporphyrin IX (PpIX)-based photodynamic therapy (PDT) is an effective treatment for skin cancers including basal cell carcinoma (BCC). Topically applied ALA promotes PpIX production preferentially in tumors, and many strategies have been developed to increase PpIX production; however, PpIX distribution and PDT treatment efficacy at depths > 1 mm are not fully understood. While surface imaging techniques provide useful diagnosis, dosimetry, and efficacy information for superficial tumors, these methods cannot interrogate deeper tumors to provide in situ insight into spatial PpIX distributions. We have developed an ultrasound-guided, white-light-informed, tomographic spectroscopy system for the spatial measurement of subsurface PpIX. Detailed imaging system specifications, methodology, and optical-phantom-based characterization will be presented separately. Here we evaluate preliminary in vivo results using both full tomographic reconstruction and by plotting individual tomographic source-detector pair data against US images.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from a low signal-to-noise ratio, acquisition artifacts, and frequently angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and, since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D, attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-terabyte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.
Compressive sensing reconstruction of 3D wet refractivity based on GNSS and InSAR observations
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Erdnüß, Bastian; Zhu, Xiao Xiang; Hinz, Stefan
2018-06-01
In this work, the reconstruction quality of an approach for neutrospheric water vapor tomography based on Slant Wet Delays (SWDs) obtained from Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR) is investigated. The novelties of this approach are (1) the use of both absolute GNSS and absolute InSAR SWDs for tomography and (2) the solution of the tomographic system by means of compressive sensing (CS). The tomographic reconstruction is performed based on (i) a synthetic SWD dataset generated using wet refractivity information from the Weather Research and Forecasting (WRF) model and (ii) a real dataset using GNSS and InSAR SWDs. Thus, the validation of the achieved results focuses (i) on a comparison of the refractivity estimates with the input WRF refractivities and (ii) on a comparison with radiosonde profiles. In the case of the synthetic dataset, the results show that the CS approach yields a more accurate and more precise solution than least squares (LSQ). In addition, the benefit of adding synthetic InSAR SWDs into the tomographic system is analyzed. When applying CS, adding synthetic InSAR SWDs into the tomographic system improves the solution in both magnitude and scatter. When solving the tomographic system by means of LSQ, no clear behavior is observed. In the case of the real dataset, the estimated refractivities of both methodologies show a consistent behavior, although the LSQ and CS solution strategies differ.
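A standard way to solve a tomographic system by compressive sensing is iterative soft-thresholding (ISTA) on an L1-regularized least-squares objective; the paper does not specify its CS solver, so the sketch below is a generic stand-in:

```python
import numpy as np

def ista(A, b, lam=0.01, n_iters=500):
    """Iterative soft-thresholding for min_x ||A x - b||^2 / 2 + lam * ||x||_1.

    The L1 penalty promotes a sparse x, which is the mechanism by which
    compressive sensing can outperform least squares on underdetermined
    tomographic systems when the field is sparse in some basis.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x - (A.T @ (A @ x - b)) / L            # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```

In water vapor tomography the sparsifying basis is typically not the voxel basis itself but, for example, a wavelet or vertical-profile basis, with A composed accordingly.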
Hachouf, N; Kharfi, F; Boucenna, A
2012-10-01
An ideal neutron radiograph, for quantification and 3D tomographic image reconstruction, should be a transmission image which exactly obeys the exponential attenuation law of a monochromatic neutron beam. There are many reasons why this assumption does not hold for highly neutron-absorbing materials; the main deviations from the ideal are due essentially to the neutron beam hardening effect. The main challenges of this work are the characterization of neutron transmission through boron-enriched steel materials and the observation of beam hardening. The influence of the beam hardening effect on neutron tomographic images, for samples based on these materials, is then studied. MCNP and FBP simulations are performed to adjust linear attenuation coefficient data and to perform 2D tomographic image reconstruction with and without beam hardening corrections. A beam hardening correction procedure is developed and applied based on qualitative and quantitative analyses of the projection data. Results from the original and corrected 2D reconstructed images show the efficiency of the proposed correction procedure. Copyright © 2012 Elsevier Ltd. All rights reserved.
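The deviation from the exponential attenuation law can be reproduced with a toy polychromatic Beer-Lambert model, and one common form of beam hardening correction, a polynomial calibration mapping measured projections back to equivalent thickness, can be sketched as follows. The two-bin spectrum and all numbers are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical two-bin polychromatic beam: one strongly and one weakly
# attenuating energy bin, equal weights (illustrative values only).
weights = np.array([0.5, 0.5])
mus = np.array([2.0, 0.5])  # linear attenuation coefficients, cm^-1

def projection(t):
    """Measured -ln(I/I0) through thickness t under polychromatic Beer-Lambert."""
    return -np.log(np.sum(weights * np.exp(-mus * t)))

thick = np.linspace(0.1, 3.0, 30)
p = np.array([projection(t) for t in thick])

# Beam hardening: the apparent attenuation coefficient p/t drops with
# thickness because low-energy neutrons are preferentially removed.
apparent_mu = p / thick

# Linearization correction: calibrate a low-order polynomial mapping the
# measured projection back to equivalent thickness, restoring the linear
# projection-thickness relation that tomographic reconstruction assumes.
coeffs = np.polyfit(p, thick, 3)
t_corrected = np.polyval(coeffs, p)
```

In practice the calibration is built from step-wedge measurements of the actual material rather than an assumed spectrum.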
Estimating crustal heterogeneity from double-difference tomography
Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.
2006-01-01
Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results, accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
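The advantage of a hyperbolic-secant pdf over a Gaussian can be seen in a one-parameter toy problem (illustrative only; the scale and data values are invented). The sech likelihood yields a log-cosh misfit whose gradient, tanh, saturates at ±1, so a single outlying picking difference cannot drag the estimate the way it drags the Gaussian (least-squares) one:

```python
import numpy as np

def sech_location(data, scale=1.0, n_iter=200, lr=0.1):
    """MLE of a location parameter under a sech likelihood.
    Gradient ascent on -sum(log cosh((d - m)/scale)); the gradient of each
    term is tanh((d - m)/scale), which is bounded, hence robust to outliers."""
    m = np.median(data)                        # robust starting point
    for _ in range(n_iter):
        m += lr * np.mean(np.tanh((data - m) / scale))
    return m

# Five consistent picking differences plus one gross outlier
delays = np.array([0.9, 1.0, 1.1, 1.05, 0.95, 8.0])
gauss_est = delays.mean()                      # Gaussian MLE = mean, pulled by the outlier
sech_est = sech_location(delays)               # stays near the consistent cluster
```

The same bounded-influence behavior is what makes the sech misfit attractive inside a full double-difference inversion, where a small fraction of picks is grossly wrong.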
TomoBank: a tomographic data repository for computational x-ray science
NASA Astrophysics Data System (ADS)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark
2018-03-01
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauchler, R.; Doetsch, J.; Dietrich, P.
2012-01-10
In this study, hydraulic and seismic tomographic measurements were used to derive a site-specific relationship between the geophysical parameter p-wave velocity and the hydraulic parameters diffusivity and specific storage. Our field study includes diffusivity tomograms derived from hydraulic travel time tomography, specific storage tomograms derived from hydraulic attenuation tomography, and p-wave velocity tomograms derived from seismic tomography. The tomographic inversion was performed in all three cases with the SIRT (Simultaneous Iterative Reconstruction Technique) algorithm, using a ray tracing technique with curved trajectories. The experimental set-up was designed such that the p-wave velocity tomogram overlaps the hydraulic tomograms by half. The experiments were performed at a well-characterized sand and gravel aquifer located in the Leine River valley near Göttingen, Germany. Access to the shallow subsurface was provided by direct-push technology. The high spatial resolution of hydraulic and seismic tomography was exploited to derive representative site-specific relationships between the hydraulic and geophysical parameters, based on the area where geophysical and hydraulic tests were performed. The transformation of the p-wave velocities into hydraulic properties was undertaken using a k-means cluster analysis. Results demonstrate that the combination of hydraulic and geophysical tomographic data is a promising approach to improve hydrogeophysical site characterization.
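The SIRT update used for all three inversions has a compact generic form, x ← x + C Aᵀ R (b − A x), where R and C hold the inverse row and column sums of the ray matrix. A minimal straight-ray sketch follows (the field inversions use curved trajectories; the 2×2 grid and ray geometry below are invented for illustration):

```python
import numpy as np

def sirt(A, b, n_iter=500):
    """SIRT iteration: x += C * A^T * (R * (b - A x)),
    with R, C the inverse row and column sums of A."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# Four rays crossing a 2x2 slowness grid (entries are path lengths per cell)
A = np.array([[1.0, 1.0, 0.0, 0.0],   # ray through cells 0, 1
              [0.0, 0.0, 1.0, 1.0],   # ray through cells 2, 3
              [1.0, 0.0, 1.0, 0.0],   # ray through cells 0, 2
              [0.0, 1.0, 0.0, 1.0]])  # ray through cells 1, 3
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true                        # synthetic travel times
x_rec = sirt(A, b)
```

Because every ray's residual is distributed over all cells it crosses in one simultaneous step, SIRT converges smoothly and is robust to noisy travel times, which is why it is a common choice for both hydraulic and seismic travel-time data.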
NASA Astrophysics Data System (ADS)
Kim, Kyoohyun; Yoon, HyeOk; Diez-Silva, Monica; Dao, Ming; Dasari, Ramachandra R.; Park, YongKeun
2014-01-01
We present high-resolution optical tomographic images of human red blood cells (RBC) parasitized by malaria-inducing Plasmodium falciparum (Pf)-RBCs. Three-dimensional (3-D) refractive index (RI) tomograms are reconstructed by recourse to a diffraction algorithm from multiple two-dimensional holograms with various angles of illumination. These 3-D RI tomograms of Pf-RBCs show cellular and subcellular structures of host RBCs and invaded parasites in fine detail. Full asexual intraerythrocytic stages of parasite maturation (ring to trophozoite to schizont stages) are then systematically investigated using optical diffraction tomography algorithms. These analyses provide quantitative information on the structural and chemical characteristics of individual host Pf-RBCs, parasitophorous vacuole, and cytoplasm. The in situ structural evolution and chemical characteristics of subcellular hemozoin crystals are also elucidated.
Fast photoacoustic imaging system based on 320-element linear transducer array.
Yin, Bangzheng; Xing, Da; Wang, Yi; Zeng, Yaguang; Tan, Yi; Chen, Qun
2004-04-07
A fast photoacoustic (PA) imaging system, based on a 320-element linear transducer array, was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorbing targets. Signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing target. By utilizing the multi-element linear transducer array and a phase-controlled imaging algorithm, we can acquire PA tomograms more efficiently than with existing technologies and algorithms. The methodology and equipment thus provide a rapid and reliable approach to PA imaging that may have potential applications in noninvasive imaging and clinical diagnosis.
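The phase-controlled principle behind such array imaging can be sketched with a plain delay-and-sum beamformer (a simplified stand-in for the actual sub-array electronics; the geometry, sampling rate, and pulse shape below are invented). Each element's trace is delayed by the acoustic time of flight to a candidate pixel and summed, so only true absorber locations add coherently:

```python
import numpy as np

c = 1.5        # speed of sound, mm/us
fs = 40.0      # sampling rate, MHz
n_samples = 400
elements_x = np.linspace(-8.0, 8.0, 64)      # 64-element linear array at y = 0
source = np.array([2.0, 10.0])               # point absorber position, mm

# Synthesize element signals: a short Gaussian pulse at each time of flight
t = np.arange(n_samples) / fs
signals = np.zeros((elements_x.size, n_samples))
for i, ex in enumerate(elements_x):
    tof = np.hypot(source[0] - ex, source[1]) / c
    signals[i] = np.exp(-((t - tof) * 20.0) ** 2)

# Delay-and-sum over an image grid
xs = np.linspace(-8.0, 8.0, 33)
ys = np.linspace(5.0, 15.0, 21)
image = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        tofs = np.hypot(x - elements_x, y) / c
        idx = np.clip(np.round(tofs * fs).astype(int), 0, n_samples - 1)
        image[iy, ix] = signals[np.arange(elements_x.size), idx].sum()

peak = np.unravel_index(np.argmax(image), image.shape)  # brightest pixel
```

At the true absorber position all 64 delayed samples land on their pulse peaks simultaneously, which is exactly the coherence that the phase-controlled sub-arrays exploit in hardware.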
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were filled with the fluorochrome Cy5.5, and optical data at 60 projections over 360 degrees were acquired for each configuration. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation, and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior was included, this bias was significantly reduced.
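The core Richardson-Lucy/MLEM multiplicative update, shown here without the entropic regularization and floating default prior that the paper adds, can be sketched on a toy 1-D deconvolution problem (the kernel and test signal are invented):

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=300):
    """RL/MLEM update: x *= K^T (y / (K x)), with K a known blur kernel."""
    x = np.full_like(measured, measured.mean())     # constant initial condition
    psf_flip = psf[::-1]                            # adjoint of the convolution
    for _ in range(n_iter):
        forward = np.convolve(x, psf, mode="same")
        ratio = measured / np.maximum(forward, 1e-12)
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x

psf = np.array([0.25, 0.5, 0.25])   # normalized blur kernel
truth = np.zeros(32)
truth[10] = 4.0
truth[20] = 2.0
y = np.convolve(truth, psf, mode="same")            # noiseless measurement
x_rec = richardson_lucy(y, psf)
```

The multiplicative form keeps the estimate nonnegative by construction; the paper's entropic regularization and floating default prior then temper the noise amplification that plain RL exhibits on real, noisy data.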
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
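Stripped of the matrix source coding and SMT compression that make it practical at scale, the noniterative idea reduces to precomputing the fixed MAP inverse operator offline (the sizes and Gaussian-prior weight λ below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))   # forward model: 40 measurements, 25 voxels
lam = 0.1                           # Gaussian-prior (Tikhonov) weight

# Offline: precompute the MAP inverse H = (A^T A + lam I)^{-1} A^T.
# In the paper this dense matrix is then compressed via matrix source coding.
H = np.linalg.solve(A.T @ A + lam * np.eye(25), A.T)

# Online: every new reconstruction is a single matrix-vector product.
x_true = rng.standard_normal(25)
y = A @ x_true
x_map = H @ y
```

The online cost is one matrix-vector product per reconstruction, which is the source of the claimed orders-of-magnitude speedup once H has been computed (and, in the full method, sparsified) offline.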
Multi-pinhole SPECT Imaging with Silicon Strip Detectors
Peterson, Todd E.; Shokouhi, Sepideh; Furenlid, Lars R.; Wilson, Donald W.
2010-01-01
Silicon double-sided strip detectors offer outstanding intrinsic spatial resolution with reasonable detection efficiency for iodine-125 emissions. This spatial resolution allows for multiple-pinhole imaging at low magnification, minimizing the problem of multiplexing. We have conducted imaging studies using a prototype system that utilizes a detector of 300-micrometer thickness and 50-micrometer strip pitch together with a 23-pinhole collimator. These studies include an investigation of the synthetic-collimator imaging approach, which combines multiple-pinhole projections acquired at multiple magnifications to obtain tomographic reconstructions from limited-angle data using the ML-EM algorithm. Sub-millimeter spatial resolution was obtained, demonstrating the basic validity of this approach. PMID:20953300
Kligerman, Seth; Mehta, Dhruv; Farnadesh, Mahmmoudreza; Jeudy, Jean; Olsen, Kathryn; White, Charles
2013-01-01
To determine whether an iterative reconstruction (IR) technique (iDose, Philips Healthcare) can reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography (CTPA). The study was Health Insurance Portability and Accountability Act compliant and approved by our institutional review board. A total of 33 obese patients (average body mass index: 42.7) underwent CTPA studies following standard departmental protocols. The data were reconstructed with filtered back projection (FBP) and 3 iDose strengths (iDoseL1, iDoseL3, and iDoseL5) for a total of 132 studies. FBP data were collected from 33 controls (average body mass index: 22) undergoing CTPA. Regions of interest were drawn at 6 identical levels in the pulmonary artery (PA), from the main PA to a subsegmental branch, in both the control group and study groups using each algorithm. Noise and attenuation were measured at all PA levels. Three thoracic radiologists graded each study on a scale of 1 (very poor) to 5 (ideal) by 4 categories: image quality, noise, PA enhancement, and "plastic" appearance. Statistical analysis was performed using an unpaired t test, 1-way analysis of variance, and linear weighted κ. Compared with the control group, there was significantly higher noise with FBP, iDoseL1, and iDoseL3 algorithms (P<0.001) in the study group. There was no significant difference between the noise in the control group and iDoseL5 algorithm in the study group. Analysis within the study group showed a significant and progressive decrease in noise and increase in the contrast-to-noise ratio as the level of IR was increased (P<0.001). Compared with FBP, readers graded overall image quality as being higher using iDoseL1 (P=0.0018), iDoseL3 (P<0.001), and iDoseL5 (P<0.001). Compared with FBP, there was subjective improvement in image noise and PA enhancement with increasing levels of iDose. 
The use of an IR technique leads to qualitative and quantitative improvements in image noise and image quality in obese patients undergoing CTPA.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
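A minimal Metropolis-Hastings chain (the simplest member of the MCMC family used here) illustrates how the chain simultaneously delivers a point estimate and its uncertainty. The one-parameter Gaussian model and all numbers below are invented, and far simpler than a density-matrix reconstruction, but the mechanics are the same:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=0.5, size=200)   # simulated measurements

def log_likelihood(mu):
    """Gaussian log-likelihood (known scale 0.5, flat prior on mu)."""
    return -0.5 * np.sum((data - mu) ** 2) / 0.5**2

def metropolis(n_steps=20000, step=0.1):
    chain = np.empty(n_steps)
    mu = 0.0
    ll = log_likelihood(mu)
    for i in range(n_steps):
        prop = mu + rng.normal(0.0, step)         # symmetric random-walk proposal
        ll_prop = log_likelihood(prop)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            mu, ll = prop, ll_prop
        chain[i] = mu
    return chain

chain = metropolis()
posterior_mean = chain[5000:].mean()   # discard burn-in
posterior_std = chain[5000:].std()     # parameter uncertainty, for free
```

As in the paper, the chain itself is the product: marginal distributions and uncertainties of any derived quantity follow from simple statistics over its samples, with no assumed parametric form for the posterior.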
Finite element method framework for RF-based through-the-wall mapping
NASA Astrophysics Data System (ADS)
Campos, Rafael Saraiva; Lovisolo, Lisandro; de Campos, Marcello Luiz R.
2017-05-01
Radiofrequency (RF) Through-the-Wall Mapping (TWM) employs techniques originally applied in X-Ray Computerized Tomographic Imaging to map obstacles behind walls. It aims to provide valuable information for rescuing efforts in damaged buildings, as well as for military operations in urban scenarios. This work defines a Finite Element Method (FEM) based framework to allow fast and accurate simulations of the reconstruction of floor blueprints, using Ultra High-Frequency (UHF) signals at three different frequencies (500 MHz, 1 GHz and 2 GHz). To the best of our knowledge, this is the first use of FEM in a TWM scenario. This framework allows quick evaluation of different algorithms without the need to assemble a full test setup, which might not be available due to budgetary and time constraints. Using this framework, the present work evaluates a collection of reconstruction methods (Filtered Backprojection Reconstruction, Direct Fourier Reconstruction, Algebraic Reconstruction and Simultaneous Iterative Reconstruction) under a parallel-beam acquisition geometry for different spatial sampling rates, number of projections, antenna gains and operational frequencies. The use of multiple frequencies assesses the trade-off between higher resolution at shorter wavelengths and lower through-the-wall penetration. Considering all the drawbacks associated with such a complex problem, a robust and reliable computational setup based on a flexible method such as FEM can be very useful.
Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.
Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir
2016-06-01
This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
NASA Astrophysics Data System (ADS)
Glišović, Petar; Forte, Alessandro
2016-04-01
The paleo-distribution of density variations throughout the mantle is unknown. To address this question, we reconstruct 3-D mantle structure over the Cenozoic era using a data assimilation method that implements a new back-and-forth nudging algorithm. For this purpose, we employ convection models for a compressible and self-gravitating mantle that use 3-D mantle structure derived from joint seismic-geodynamic tomography as a starting condition. These convection models are then integrated backwards in time and are required to match geologic estimates of past plate motions derived from marine magnetic data. Our implementation of the nudging algorithm limits the difference between a reconstruction (backward-in-time solution) and a prediction (forward-in-time solution) over a sequence of 5-million-year time windows that span the Cenozoic. We find that forward integration of reconstructed mantle heterogeneity that is constrained to match past plate motions delivers relatively poor fits to the seismic-tomographic inference of present-day mantle heterogeneity in the upper mantle. We suggest that uncertainties in the past plate motions, related for example to plate reorganization episodes, could partly contribute to the poor match between predicted and observed present-day heterogeneity. We propose that convection models that allow tectonic plates to evolve freely in accord with the buoyancy forces and rheological structure in the mantle could provide additional constraints on geologic estimates of paleo-configurations of the major tectonic plates.
Meng, Yifei; Zuo, Jian-Min
2016-07-04
A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
A fast multi-resolution approach to tomographic PIV
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Astarita, Tommaso
2012-03-01
Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed into the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of a limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a considerable reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performance in terms of accuracy.
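For reference, the MART update that dominates current Tomo-PIV practice multiplies each voxel by a ray-wise correction factor raised to the voxel's weight. A tiny invented system follows (three rays, three voxels; real Tomo-PIV problems are vastly larger and, in the paper, start from a multi-resolution first guess rather than the uniform one used here):

```python
import numpy as np

def mart(W, p, n_iter=200, mu=1.0):
    """MART: for each ray j, x_i *= (p_j / (W x)_j) ** (mu * W_ji).
    Multiplicative updates keep the intensity field nonnegative."""
    x = np.ones(W.shape[1])                 # uniform first guess
    for _ in range(n_iter):
        for j in range(W.shape[0]):
            wj = W[j]
            proj = wj @ x                   # current projection along ray j
            if proj > 0:
                x *= (p[j] / proj) ** (mu * wj)
    return x

# Three rays over three voxels (entries are ray-voxel weights)
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])          # voxel intensities
p = W @ x_true                              # recorded projections
x_rec = mart(W, p)
```

Each ray update restores that ray's projection exactly, and cycling over rays converges geometrically for this consistent system; the coarse-grid first guess advocated in the paper simply shortens how far the uniform start has to travel.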
Tomography and the Herglotz-Wiechert inverse formulation
NASA Astrophysics Data System (ADS)
Nowack, Robert L.
1990-04-01
In this paper, linearized tomography and the Herglotz-Wiechert inverse formulation are compared. Tomographic inversions for 2-D or 3-D velocity structure use line integrals along rays and can be written in terms of Radon transforms. For radially concentric structures, Radon transforms are shown to reduce to Abel transforms. Therefore, for straight ray paths, the Abel transform of travel-time is a tomographic algorithm specialized to a one-dimensional radially concentric medium. The Herglotz-Wiechert formulation uses seismic travel-time data to invert for one-dimensional earth structure and is derived using exact ray trajectories by applying an Abel transform. This is of historical interest since it would imply that a specialized tomographic-like algorithm has been used in seismology since the early part of the century (see Herglotz, 1907; Wiechert, 1910). Numerical examples are performed comparing the Herglotz-Wiechert algorithm and linearized tomography along straight rays. Since the Herglotz-Wiechert algorithm is applicable under specific conditions, (the absence of low velocity zones) to non-straight ray paths, the association with tomography may prove to be useful in assessing the uniqueness of tomographic results generalized to curved ray geometries.
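The reduction of the Radon transform to an Abel transform for radial media can be checked numerically: for f(r) = exp(-r²) the forward Abel transform has the closed form F(y) = √π · exp(-y²). The quadrature below (grid sizes invented) uses the substitution s = √(r² − y²) to remove the integrable singularity at r = y:

```python
import numpy as np

def abel_forward(f, y, r_max=8.0, n=4000):
    """Forward Abel transform F(y) = 2 * int_y^inf f(r) r / sqrt(r^2 - y^2) dr,
    evaluated with the substitution s = sqrt(r^2 - y^2), so that
    F(y) = 2 * int_0^inf f(sqrt(y^2 + s^2)) ds (no singularity)."""
    s_max = np.sqrt(max(r_max**2 - y**2, 0.0))
    s = np.linspace(0.0, s_max, n)
    vals = f(np.sqrt(y**2 + s**2))
    ds = s[1] - s[0]
    return 2.0 * ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

f = lambda r: np.exp(-r**2)                       # radial profile
ys = np.array([0.0, 0.5, 1.0, 2.0])               # projection offsets
numeric = np.array([abel_forward(f, y) for y in ys])
analytic = np.sqrt(np.pi) * np.exp(-ys**2)        # known Abel pair
```

This is the line-integral relation that, inverted, gives the Herglotz-Wiechert solution for straight rays in a radially concentric medium.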
Computed Tomography Angiography in Microsurgery: Indications, Clinical Utility, and Pitfalls
Lee, Gordon K.; Fox, Paige M.; Riboh, Jonathan; Hsu, Charles; Saber, Sepideh; Rubin, Geoffrey D.; Chang, James
2013-01-01
Objective: Computed tomographic angiography (CTA) can be used to obtain 3-dimensional vascular images and soft-tissue definition. The goal of this study was to evaluate the reliability, usefulness, and pitfalls of CTA in preoperative planning of microvascular reconstructive surgery. Methods: A retrospective review of patients who obtained preoperative CTA in preparation for planned microvascular reconstruction was performed over a 5-year period (2001–2005). The influence of CTA on the original operative plan was assessed for each patient, and CTA results were correlated to the operative findings. Results: Computed tomographic angiography was performed on 94 patients in preparation for microvascular reconstruction. In 48 patients (51%), vascular abnormalities were noted on CTA. Intraoperative findings correlated with CTA results in 97% of cases. In 42 patients (45%), abnormal CTA findings influenced the original operative plan, such as the choice of vessels, side of harvest, or nature of the reconstruction (local flap instead of free tissue transfer). Technical difficulties in performing CTA were encountered in 5 patients (5%) in whom interference from external fixation devices was the main cause. Conclusions: This large study of CTA obtained for preoperative planning of reconstructive microsurgery at both donor and recipient sites demonstrates that CTA is safe and highly accurate. Computed tomographic angiography can alter the surgeon's reconstructive plan when abnormalities are noted preoperatively and consequently improve results by decreasing vascular complication rates. The use of CTA should be considered for cases of microsurgical reconstruction where the vascular anatomy may be questionable. PMID:24023972
NASA Astrophysics Data System (ADS)
Thampi, Smitha V.; Yamamoto, Mamoru
2010-03-01
A chain of newly designed GNU (GNU is not UNIX) Radio Beacon Receivers (GRBR) has recently been established over Japan, primarily for tomographic imaging of the ionosphere over this region. Receivers installed at Shionomisaki (33.45°N, 135.8°E), Shigaraki (34.8°N, 136.1°E), and Fukui (36°N, 136°E) continuously track low earth orbiting satellites (LEOS), mainly OSCAR, Cosmos, and FORMOSAT-3/COSMIC, to obtain simultaneous total electron content (TEC) data from these three locations, which are then used for the tomographic reconstruction of ionospheric electron densities. This is the first GRBR network established for TEC observations, and the first beacon-based tomographic imaging in Japanese longitudes. The first tomographic images revealed the temporal evolution of all of the major features in the ionospheric electron density distribution over Japan. A comparison of the tomographically reconstructed electron densities with the foF2 data from Kokubunji (35°N, 139°E) revealed good agreement between the datasets. These first results show the potential of GRBR and its network for making continuous, unattended ionospheric TEC measurements and for tomographic imaging of the ionosphere.
NASA Astrophysics Data System (ADS)
Comite, Davide; Galli, Alessandro; Catapano, Ilaria; Soldovieri, Francesco; Pettinelli, Elena
2013-04-01
This work is focused on the three-dimensional (3-D) imaging of buried metallic targets achievable by processing GPR (ground penetrating radar) simulation data via a tomographic inversion algorithm. The direct scattering problem has been analysed by means of a recently-developed numerical setup based on an electromagnetic time-domain CAD tool (CST Microwave Studio), which enables us to efficiently explore different GPR scenarios of interest [1]. The investigated 3-D domain here comprises two media, representing, e.g., an air/soil environment in which variously-shaped metallic (PEC) scatterers can be buried. The GPR system is simulated with Tx/Rx antennas placed in a bistatic configuration at the soil interface. In the implementation, the characteristics of the antennas may suitably be chosen in terms of topology, offset, radiative features, frequency ranges, etc. Arbitrary time-domain waveforms can be used as the input GPR signal (e.g., a Gaussian-like pulse having the frequency spectrum in the microwave range). The gathered signal at the output port includes the backscattered wave from the objects to be reconstructed, and the relevant data may be displayed in canonical radargram forms [1]. The GPR system sweeps along one main rectilinear direction, and the scanning process is here repeated along different close parallel lines to acquire data for a full 3-D analysis. Starting from the processing of the synthetic GPR data, a microwave tomographic approach is used to tackle the imaging, which is based on the Kirchhoff approximation to linearize the inverse scattering problem [2]. The target reconstruction is given in terms of the amplitude of the 'object function' (normalized with respect to its maximum inside the 3-D investigation domain).
The data of the scattered field are collected considering a multi-frequency step process inside the fixed range of the signal spectrum, under a multi-bistatic configuration where the Tx and Rx antennas are separated by an offset distance and move at the interface over rectilinear observation domains. Analyses have been performed for some canonical scatterer shapes (e.g., sphere and cylinder, cube and parallelepiped, cone and wedge) in order to specifically highlight the influence of all the three dimensions (length, depth, and width) in the reconstruction of the targets. The roles of both size and location of the objects are also addressed in terms of the probing signal wavelengths and of the antenna offset. The results show to what extent it is possible to achieve a correct spatial localization of the targets, in conjunction with a generally satisfactory prediction of their 3-D size and shape. It should be noted, however, that the tomographic reconstructions here manage challenging cases of non-penetrable objects with data gathered under a reflection configuration; hence most of the achievable information is expected to relate to the upper illuminated parts of the reflectors that give rise to the main scattering effects. The limits in the identification of fine geometrical details are discussed further in connection with the critical aspects of GPR operation, which include the adopted detection configuration and the frequency spectrum of the employed signals. [1] G. Valerio, A. Galli, P. M. Barone, S. E. Lauro, E. Mattei, and E. Pettinelli, "GPR detectability of rocks in a Martian-like shallow subsoil: a numerical approach," Planet. Space Sci., Vol. 62, pp. 31-40, 2012. [2] R. Solimene, A. Buonanno, F. Soldovieri, and R. Pierri, "Physical optics imaging of 3D PEC objects: vector and multipolarized approaches," IEEE Trans. Geosci. Remote Sens., Vol. 48, pp. 1799-1808, Apr. 2010.
NASA Astrophysics Data System (ADS)
Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei
2017-02-01
Endoscopic DOT has potential applications in cancer-related imaging of tubular organs. Although DOT offers a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space inside the tubular tissue, resulting in a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under these limited measurement conditions. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm operates on a region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking method cuts down the computational burden. To further reduce the computational complexity, a double conjugate gradient method is used for the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, images reconstructed from simulation data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary of the model. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can reach 70% and 80%, respectively, at depths within 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method should be useful for the development of endoscopic DOT technologies for tubular organs.
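As background, FOCUSS recovers a sparse solution of an underdetermined linear system by iterative re-weighting. The following is a minimal numpy sketch of the basic FOCUSS loop on a toy system; it is illustrative only and does not reproduce the paper's EDR/ROI-reduced variant or its double conjugate gradient inversion, and all problem sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined system: 10 measurements, 40 unknowns, 3-sparse truth.
m, n = 10, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 17, 30]] = [1.0, -2.0, 1.5]
b = A @ x_true

x = np.ones(n)                           # non-zero initial estimate
for _ in range(30):                      # FOCUSS re-weighting loop
    W = np.diag(np.sqrt(np.abs(x)))      # weights favour currently large entries
    q, *_ = np.linalg.lstsq(A @ W, b, rcond=None)  # minimum-norm weighted solve
    x = W @ q                            # new estimate concentrates its support
```

Each pass shrinks entries that stay small and reinforces the dominant ones, driving the iterate toward a sparse solution consistent with the data.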
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
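The statistical object in question, the ACF of a mass-density field, is conveniently computed through the Wiener-Khinchin relation. Below is a minimal numpy illustration on a synthetic 2D field; the paper's actual method, which combines a STEM projection with an AFM thickness map under an isotropy assumption, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2D "mass-density" map standing in for a measured projection.
f = rng.standard_normal((64, 64))
f -= f.mean()                      # work with density fluctuations

# Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum.
F = np.fft.fftn(f)
acf = np.fft.ifftn(np.abs(F) ** 2).real
acf /= acf.flat[0]                 # normalize so that ACF at zero lag equals 1
```

By Cauchy-Schwarz, the zero-lag value bounds the rest of the (circular) ACF, which is a quick sanity check on any implementation.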
Damage mapping in structural health monitoring using a multi-grid architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathews, V. John
2015-03-31
This paper presents a multi-grid architecture for tomography-based damage mapping of composite aerospace structures. The system employs an array of piezo-electric transducers bonded on the structure. Each transducer may be used as an actuator as well as a sensor. The structure is excited sequentially using the actuators and the guided waves arriving at the sensors in response to the excitations are recorded for further analysis. The sensor signals are compared to their baseline counterparts and a damage index is computed for each actuator-sensor pair. These damage indices are then used as inputs to the tomographic reconstruction system. Preliminary damage maps are reconstructed on multiple coordinate grids defined on the structure. These grids are shifted versions of each other where the shift is a fraction of the spatial sampling interval associated with each grid. These preliminary damage maps are then combined to provide a reconstruction that is more robust to measurement noise in the sensor signals and the ill-conditioned problem formulation for single-grid algorithms. Experimental results on a composite structure with complexity that is representative of aerospace structures included in the paper demonstrate that for sufficiently high sensor densities, the algorithm of this paper is capable of providing damage detection and characterization with accuracy comparable to traditional C-scan and A-scan-based ultrasound non-destructive inspection systems quickly and without human supervision.
Sodankylä ionospheric tomography data set 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Roininen, Lassi; Kero, Antti; Raita, Tero; Ulich, Thomas; Markkanen, Markku; Juusola, Liisa; Kauristie, Kirsti
2016-07-01
Sodankylä Geophysical Observatory has been operating a receiver network for ionospheric tomography and collecting the produced data since 2003. The collected data set consists of phase difference curves measured from COSMOS navigation satellites from the Russian Parus network (Wood and Perry, 1980) and tomographic electron density reconstructions obtained from these measurements. In this study vertical total electron content (VTEC) values are integrated from the reconstructed electron densities to make a qualitative and quantitative analysis to validate the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fennoscandian sector. Altogether the analysis consists of around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to cases with descending (north to south) overflights and maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this data set, one solar cycle of ionospheric VTEC estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI)-2012 model, F10.7 solar flux index and sunspot number data. Qualitatively, the tomographic VTEC estimates correspond to the reference data very well, but the IRI-2012 model results are on average 40 % higher than the tomographic results.
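The VTEC values referred to above are vertical integrals of the reconstructed electron density. The sketch below illustrates the integration step on a Chapman-layer profile; the peak height, scale height and peak density are hypothetical round numbers, not values from the Sodankylä reconstructions.

```python
import numpy as np

# Illustrative Chapman-layer electron density profile.
alt_km = np.linspace(100.0, 1000.0, 181)            # altitude grid [km]
h_peak, H, n_peak = 300.0, 60.0, 5e11               # [km], [km], [el/m^3]
z = (alt_km - h_peak) / H
ne = n_peak * np.exp(0.5 * (1.0 - z - np.exp(-z)))  # Chapman profile [el/m^3]

# VTEC = vertical integral of electron density; 1 TECU = 1e16 el/m^2.
alt_m = alt_km * 1e3
vtec_tecu = np.sum(0.5 * (ne[1:] + ne[:-1]) * np.diff(alt_m)) / 1e16
```

With these illustrative parameters the trapezoidal integral lands near the 10-20 TECU range typical of mid-latitude quiet conditions.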
Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy
NASA Astrophysics Data System (ADS)
Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc
2014-12-01
Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such a good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. However, research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range.
The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy it is possible to improve the temporal resolution and thus to increase the robustness of low dose tomographic fluoroscopy.
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good convergence speed, measured as the decrease ratio of the objective function, in comparison to the classical ISM.
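The scheme just described (split the data into strings, run incremental subgradient steps along each string, average the endpoints) can be illustrated on a toy non-smooth problem whose solution is known: minimizing a sum of absolute deviations, whose minimizer is the median. The data, the splitting and the step-size rule below are illustrative choices, not the paper's.

```python
import numpy as np

# Toy non-smooth problem: f(x) = sum_i |x - a_i|, minimized at the median of a.
a = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0, 20.0, 21.0, 22.0])
strings = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]     # split the data into strings

x = 0.0
for k in range(1, 2001):
    step = 1.0 / k                              # diminishing step size
    endpoints = []
    for s in strings:                           # each string runs independently
        y = x
        for i in s:                             # incremental subgradient steps
            y -= step * np.sign(y - a[i])       # sign(.) is a subgradient of |.|
        endpoints.append(y)
    x = float(np.mean(endpoints))               # string averaging
```

Because the strings are processed independently from the same starting iterate, the inner loop is the part that parallelizes in a real implementation.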
NASA Astrophysics Data System (ADS)
Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim
2014-04-01
This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Rui; Singh, Sudhanshu S.; Chawla, Nikhilesh
2016-08-15
We present a robust method for automating removal of “segregation artifacts” in segmented tomographic images of three-dimensional heterogeneous microstructures. The objective of this method is to accurately identify and separate discrete features in composite materials where limitations in imaging resolution lead to spurious connections near close contacts. The method utilizes betweenness centrality, a measure of the importance of a node in the connectivity of a graph network, to identify voxels that create artificial bridges between otherwise distinct geometric features. To facilitate automation of the algorithm, we develop a relative centrality metric to allow for the selection of a threshold criterion that is not sensitive to inclusion size or shape. As a demonstration of the effectiveness of the algorithm, we report on the segmentation of a 3D reconstruction of a SiC particle reinforced aluminum alloy, imaged by X-ray synchrotron tomography.
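Betweenness centrality counts, for each node, how many shortest paths pass through it, so a voxel that artificially bridges two otherwise distinct particles scores highest. A self-contained sketch using Brandes' algorithm on a toy voxel graph follows (two small cliques standing in for particles, joined by one bridging voxel); it is illustrative only and omits the paper's relative-centrality thresholding.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: unnormalized betweenness on an unweighted graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1      # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                                   # BFS from source s
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                      # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Two 4-cliques stand in for distinct particles; voxel 8 is the spurious
# bridge that limited imaging resolution would create between them.
adj = {v: set() for v in range(9)}
for grp in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for u in grp:
        adj[u] |= set(grp) - {u}
adj[8] = {0, 4}; adj[0].add(8); adj[4].add(8)

bc = betweenness(adj)
most_central = max(bc, key=bc.get)
```

All inter-particle shortest paths funnel through voxel 8, so its centrality exceeds every clique member's; thresholding on (relative) centrality then flags it for removal.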
NASA Astrophysics Data System (ADS)
Xia, Huihui; Kan, Ruifeng; Xu, Zhenyu; He, Yabai; Liu, Jianguo; Chen, Bing; Yang, Chenguang; Yao, Lu; Wei, Min; Zhang, Guangle
2017-03-01
We present a system for accurate tomographic reconstruction of the combustion temperature and H2O vapor concentration of a flame based on laser absorption measurements, in combination with an innovative two-step algebraic reconstruction technique. A total of 11 collimated laser beams generated from outputs of fiber-coupled diode lasers formed a two-dimensional 5 × 6 orthogonal beam grid and measured two H2O absorption transitions (7154.354/7154.353 cm-1 and 7467.769 cm-1). The measurement system was designed on a rotation platform to achieve a twofold improvement in spatial resolution. Numerical simulation showed that the proposed two-step algebraic reconstruction technique, reconstructing temperature and concentration in turn, greatly improved the reconstruction accuracy of the species concentration when compared with a traditional calculation. Experimental results demonstrated the good performance of the measurement system and the two-step reconstruction technique for applications such as flame monitoring and combustion diagnosis.
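Algebraic reconstruction techniques of this kind solve the discretized absorption system one beam equation at a time. A generic Kaczmarz/ART sweep on a toy system is sketched below; the beam weights and sizes are made up, and the paper's specific two-step temperature-then-concentration splitting is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discretized absorption system: each row of A holds illustrative beam
# weights over 16 grid cells; b holds simulated path-integrated absorbances.
A = rng.uniform(0.0, 1.0, (40, 16))
x_true = rng.uniform(0.5, 2.0, 16)
b = A @ x_true

x = np.zeros(16)
lam = 1.0                                   # relaxation factor
for sweep in range(200):                    # ART: project the estimate onto
    for i in range(A.shape[0]):             # each beam's hyperplane in turn
        a = A[i]
        x += lam * (b[i] - a @ x) / (a @ a) * a

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

For a consistent system the sweeps converge to a solution; the relaxation factor trades convergence speed against noise amplification in real data.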
Application of optical longitudinal tomography for dental introscopy
NASA Astrophysics Data System (ADS)
Levin, Gennady G.; Burgansky, Alexander A.; Levandovski, Alexei G.
1997-08-01
A new method of in-vitro dental introscopy is suggested by the authors. This method employs longitudinal tomography techniques and is characterized by non-invasive and non-harmful diagnostics, as well as an interactive image-reconstruction regime that lets an operator (doctor) control the diagnostic process in real time. He-Ne laser emission is used to obtain the projections. By means of longitudinal tomography, images of different sections of an object (tooth) can be reconstructed. An experiment was conducted by the authors in which 100 projections of a tooth (premolar) were obtained and images of 10 different sections were reconstructed. These images were later compared to real sections of the tooth. This experiment demonstrated that optical longitudinal tomography can be successfully used for dental introscopy. The authors claim that optical tomographic methods can be used for diagnostics of other biological objects as well. Such objects are characterized by spatial geometrical anisotropy (tubular bones, phalanxes of fingers, penis, etc.). It is especially promising to use this method in children's dentistry. The authors discuss some features of the data acquisition system for optical longitudinal tomography. Reconstruction algorithms are described. The results of experimental reconstruction are presented and advantages of this diagnostics method are discussed.
NASA Astrophysics Data System (ADS)
Liu, Chang; Cao, Zhang; Li, Fangyan; Lin, Yuzhen; Xu, Lijun
2017-05-01
Distributions of temperature and H2O concentration in a swirling flame are critical to evaluate the performance of a gas turbine combustor. In this paper, 1D tunable diode laser absorption spectroscopy tomography (1D-TDLAST) was introduced to monitor swirling flames generated from a model swirl injector by simultaneously reconstructing the rotationally symmetric distributions of temperature and H2O concentration. The optical system was greatly simplified by introducing only one fan-beam illumination and a linear detector array of 12 equally spaced photodetectors. The fan-beam illumination penetrated a cross section of interest in the swirling flame and the transmitted intensities were detected by the detector array. From the transmitted intensities, projections were extracted and employed by a 1D tomographic algorithm to reconstruct the distributions of temperature and H2O concentration. The route of the precessing vortex core generated in the swirling flame can be easily inferred from the reconstructed profiles of temperature and H2O concentration at different heights above the nozzle of the swirl injector.
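For a rotationally symmetric field, each line-of-sight projection is an Abel transform of the radial profile, so 1D reconstruction reduces to inverting a triangular system of ring contributions (onion peeling). A minimal sketch with a synthetic Gaussian profile follows; the geometry and profile are illustrative, not the paper's reconstruction code.

```python
import numpy as np

# Onion-peeling Abel inversion: recover the radial profile f(r) of a
# rotationally symmetric field from its parallel line-of-sight projection.
n, dr = 30, 0.1
edges = np.arange(n + 1) * dr              # annular ring boundaries
r = edges[:-1] + 0.5 * dr                  # ring centres; beams pass at y = r

# W[j, i]: chord length of the beam at height y = r[j] through ring i.
W = np.zeros((n, n))
for j in range(n):
    y = r[j]
    for i in range(j, n):
        inner = max(edges[i], y)
        W[j, i] = 2.0 * (np.sqrt(edges[i + 1] ** 2 - y ** 2)
                         - np.sqrt(inner ** 2 - y ** 2))

f_true = np.exp(-(r / 1.5) ** 2)           # synthetic Gaussian "flame" profile
p = W @ f_true                             # simulated projection data

f_rec = np.linalg.solve(W, p)              # W is upper triangular: peel inward
```

Because W is upper triangular, the outermost ring is determined first and each inner ring follows by back-substitution, which is exactly the "peeling" picture.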
Nanomaterial datasets to advance tomography in scanning transmission electron microscopy
Levin, Barnaby D. A.; Padgett, Elliot; Chen, Chien-Chun; ...
2016-06-07
Electron tomography in materials science has flourished with the demand to characterize nanoscale materials in three dimensions (3D). Access to experimental data is vital for developing and validating reconstruction methods that improve resolution and reduce radiation dose requirements. This work presents five high-quality scanning transmission electron microscope (STEM) tomography datasets in order to address the critical need for open access data in this field. The datasets represent the current limits of experimental technique, are of high quality, and contain materials with structural complexity. Included are tomographic series of a hyperbranched Co2P nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the complete 180° tilt range, a platinum nanoparticle and a tungsten needle both imaged at atomic resolution by equal slope tomography, and a through-focal tilt series of PtCu nanoparticles. A volumetric reconstruction from every dataset is provided for comparison and development of post-processing and visualization techniques. Researchers interested in creating novel data processing and reconstruction algorithms will now have access to state of the art experimental test data.
Nanomaterial datasets to advance tomography in scanning transmission electron microscopy.
Levin, Barnaby D A; Padgett, Elliot; Chen, Chien-Chun; Scott, M C; Xu, Rui; Theis, Wolfgang; Jiang, Yi; Yang, Yongsoo; Ophus, Colin; Zhang, Haitao; Ha, Don-Hyung; Wang, Deli; Yu, Yingchao; Abruña, Hector D; Robinson, Richard D; Ercius, Peter; Kourkoutis, Lena F; Miao, Jianwei; Muller, David A; Hovden, Robert
2016-06-07
Electron tomography in materials science has flourished with the demand to characterize nanoscale materials in three dimensions (3D). Access to experimental data is vital for developing and validating reconstruction methods that improve resolution and reduce radiation dose requirements. This work presents five high-quality scanning transmission electron microscope (STEM) tomography datasets in order to address the critical need for open access data in this field. The datasets represent the current limits of experimental technique, are of high quality, and contain materials with structural complexity. Included are tomographic series of a hyperbranched Co2P nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the complete 180° tilt range, a platinum nanoparticle and a tungsten needle both imaged at atomic resolution by equal slope tomography, and a through-focal tilt series of PtCu nanoparticles. A volumetric reconstruction from every dataset is provided for comparison and development of post-processing and visualization techniques. Researchers interested in creating novel data processing and reconstruction algorithms will now have access to state of the art experimental test data.
Niumsawatt, Vachara; Debrotwir, Andrew N; Rozen, Warren Matthew
2014-01-01
Computed tomographic angiography (CTA) has become a mainstay in preoperative perforator flap planning in the modern era of reconstructive surgery. However, the increased use of CTA does raise the concern of radiation exposure to patients. Several techniques have been developed to decrease radiation dosage without compromising image quality, with varying results. The most recent advance is in the improvement of image reconstruction using an adaptive statistical iterative reconstruction (ASIR) algorithm. We sought to evaluate the image quality of ASIR in preoperative deep inferior epigastric perforator (DIEP) flap surgery, through a direct comparison with conventional filtered back projection (FBP) images. A prospective review of 60 consecutive ASIR and 60 consecutive FBP CTA images using similar protocol (except for radiation dosage) was undertaken, analyzed by 2 independent reviewers. In both groups, we were able to accurately identify axial arteries and their perforators. Subjective analysis of image quality demonstrated no statistically significant difference between techniques. ASIR can thus be used for preoperative imaging with similar image quality to FBP, but with a 60% reduction in radiation delivery to patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, H; Yoon, D; Jung, J
Purpose: The purpose of this study is to suggest a tumor monitoring technique using prompt gamma rays emitted during the reaction between an antiproton and a boron particle, and to verify the increase of the therapeutic effectiveness of antiproton boron fusion therapy using Monte Carlo simulation code. Methods: We acquired the percentage depth dose of the antiproton beam from a water phantom with and without three boron uptake regions (regions A, B, and C) using the F6 tally of MCNPX. The tomographic image was reconstructed using prompt gamma ray events from the reaction between the antiproton and boron during the treatment, from 32 projections (reconstruction algorithm: MLEM). Image reconstruction was performed on an 80 × 80 pixel matrix with a pixel size of 5 mm, using a 10 % energy window. Results: The prompt gamma ray peak for imaging was observed at 719 keV in the energy spectrum using the F8 tally function (energy deposition tally) of the MCNPX code. The tomographic images show that the boron uptake regions were successfully identified in the simulation results. In terms of the receiver operating characteristic curve analysis, the area under the curve values were 0.647 (region A), 0.679 (region B), and 0.632 (region C). The SNR values increased as the tumor diameter increased. The CNR indicates the relative signal intensity between different regions; the CNR values also increased as the difference in diameter between the boron uptake regions increased. Conclusion: We confirmed the feasibility of tumor monitoring during antiproton therapy as well as the superior therapeutic effect of antiproton boron fusion therapy. This result can be beneficial for the development of more accurate particle therapy.
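MLEM, the reconstruction algorithm named above, updates a nonnegative image multiplicatively so that its forward projections match the measured data. A generic numpy sketch on a toy system follows; it is illustrative only, and the simulation's 80 × 80 grid, 32 projections and prompt-gamma energy window are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy emission system: A maps 16 voxel activities to 32 projection bins.
A = rng.uniform(0.0, 1.0, (32, 16))
x_true = rng.uniform(1.0, 5.0, 16)
b = A @ x_true                          # noiseless projection data

x = np.ones(16)                         # MLEM requires a positive start
sens = A.T @ np.ones(32)                # sensitivity image, A^T 1
for _ in range(500):
    x *= (A.T @ (b / (A @ x))) / sens   # multiplicative MLEM update

resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is why MLEM is the standard choice for emission-type data such as gamma counts.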
A multiresolution inversion for imaging the ionosphere
NASA Astrophysics Data System (ADS)
Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo
2017-06-01
Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, tomographic algorithms to date have not been very effective at imaging medium- and small-scale ionospheric structures, due both to uneven ground-based data distributions and to limitations of the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data on the tomographic results of a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked, there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.
Volumetric PIV with a Plenoptic Camera
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Tim
2012-11-01
Plenoptic cameras have received attention recently due to their ability to computationally refocus an image after it has been acquired. We describe the development of a robust, economical and easy-to-use volumetric PIV technique using a unique plenoptic camera built in our laboratory. The tomographic MART algorithm is used to reconstruct pairs of 3D particle volumes with velocity determined using conventional cross-correlation techniques. 3D/3C velocity measurements (volumetric dimensions of 2.8″ × 1.9″ × 1.6″) of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. This work has been supported by the Air Force Office of Scientific Research (Grant #FA9550-100100576).
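MART, the reconstruction used above, updates each voxel multiplicatively by the ratio of measured to projected ray intensity, raised to a weight-dependent exponent. The generic sketch below shows one such scheme on a made-up system; the camera model, voxel weighting and volume sizes of an actual tomo-PIV setup are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy system: 20 rays with weights in [0, 1] over 12 unknown voxel intensities.
A = rng.uniform(0.0, 1.0, (20, 12))
x_true = rng.uniform(0.5, 2.0, 12)
b = A @ x_true                          # simulated ray integrals

x = np.ones(12)                         # positive initial guess
mu = 0.9                                # relaxation exponent
for sweep in range(300):
    for i in range(A.shape[0]):         # one multiplicative update per ray
        ratio = b[i] / (A[i] @ x)
        x *= ratio ** (mu * A[i])       # voxels off the ray (weight 0) unchanged

resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The multiplicative update preserves positivity and leaves voxels untouched by a ray unchanged, which is what makes MART well suited to sparse particle fields.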
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
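In the Eulerian approach mentioned above, pressure follows from a Poisson equation whose right-hand side is assembled from velocity-field derivatives. The sketch below shows only the Poisson-solve step, with a manufactured source term and boundary condition standing in for the PIV-derived ones.

```python
import numpy as np

# Solve grad^2 p = f on the unit square with p = 0 on the boundary.
# A manufactured solution p_exact lets us check the solver directly.
n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
p_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi ** 2 * p_exact           # analytic Laplacian of p_exact

p = np.zeros((n, n))                      # Dirichlet BC: p = 0 on the boundary
for _ in range(5000):                     # Jacobi iterations on the 5-point stencil
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2]
                            - h ** 2 * f[1:-1, 1:-1])

err = np.max(np.abs(p - p_exact))
```

In a real PIV workflow f would come from the measured velocity field (and Neumann conditions from the momentum equation at the boundary), and a faster solver than Jacobi would normally be used; the structure of the problem is the same.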
Licht, Heather; Murray, Mark; Vassaur, John; Jupiter, Daniel C; Regner, Justin L; Chaput, Christopher D
2015-11-18
With the rise of obesity in the American population, there has been a proportionate increase of obesity in the trauma population. The purpose of this study was to use a computed tomography-based measurement of adiposity to determine if obesity is associated with an increased burden to the health-care system in patients with orthopaedic polytrauma. A prospective comprehensive trauma database at a level-I trauma center was utilized to identify 301 patients with polytrauma who had orthopaedic injuries and intensive care unit admission from 2006 to 2011. Routine thoracoabdominal computed tomographic scans allowed for measurement of the truncal adiposity volume. The truncal three-dimensional reconstruction body mass index was calculated from the computed tomography-based volumes based on a previously validated algorithm. A truncal three-dimensional reconstruction body mass index of <30 kg/m² denoted non-obese patients and ≥30 kg/m² denoted obese patients. The need for orthopaedic surgical procedure, in-hospital mortality, length of stay, hospital charges, and discharge disposition were compared between the two groups. Of the 301 patients, 21.6% were classified as obese (truncal three-dimensional reconstruction body mass index of ≥30 kg/m²). Higher truncal three-dimensional reconstruction body mass index was associated with longer hospital length of stay (p = 0.02), more days spent in the intensive care unit (p = 0.03), more frequent discharge to a long-term care facility (p < 0.0002), higher rate of orthopaedic surgical intervention (p < 0.01), and increased total hospital charges (p < 0.001). Computed tomographic scans, routinely obtained at the time of admission, can be utilized to calculate truncal adiposity and to investigate the impact of obesity on patients with polytrauma. Obese patients were found to have higher total hospital charges, longer hospital stays, more frequent discharge to a continuing-care facility, and a higher rate of orthopaedic surgical intervention.
Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
Metric on the space of quantum states from relative entropy. Tomographic reconstruction
NASA Astrophysics Data System (ADS)
Man'ko, Vladimir I.; Marmo, Giuseppe; Ventriglia, Franco; Vitale, Patrizia
2017-08-01
In the framework of quantum information geometry, we derive, from the quantum relative Tsallis entropy, a family of quantum metrics on the space of full-rank, N-level quantum states by means of a suitably defined coordinate-free differential calculus. The cases N=2 and N=3 are discussed in detail and notable limits are analyzed. The radial limit procedure has been used to recover quantum metrics for lower-rank states, such as pure states. By using the tomographic picture of quantum mechanics, we have obtained the Fisher-Rao metric for the space of quantum tomograms and derived a reconstruction formula for the quantum metric of density states out of the tomographic one. A new inequality, obtained for probabilities of three spin-1/2 projections in three perpendicular directions, is proposed to be checked in experiments with superconducting circuits.
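For reference, the two standard objects this abstract builds on can be written explicitly; these are the textbook definitions, not the paper's own derivation:

```latex
% Quantum relative Tsallis entropy (the limit q -> 1 recovers the
% von Neumann relative entropy):
S_q(\rho \,\|\, \sigma) \;=\; \frac{1 - \mathrm{Tr}\!\left(\rho^{q}\,\sigma^{1-q}\right)}{1-q}

% Classical Fisher--Rao metric on a parametrized family of probabilities
% p_i(\theta), the object recovered on the space of tomograms:
g_{jk}(\theta) \;=\; \sum_i \frac{1}{p_i(\theta)}\,
  \frac{\partial p_i}{\partial \theta^{j}}\,
  \frac{\partial p_i}{\partial \theta^{k}}
```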
Feasibility of track-based multiple scattering tomography
NASA Astrophysics Data System (ADS)
Jansen, H.; Schütze, P.
2018-04-01
We present a tomographic technique that uses a gigaelectronvolt electron beam to determine the material budget distribution of centimeter-sized objects, by means of simulations and measurements. In both cases, the trajectory of electrons traversing a sample under test is reconstructed using a pixel beam telescope. The width of the deflection-angle distribution of electrons undergoing multiple Coulomb scattering at the sample is estimated. Basing the sinogram on position-resolved estimators enables the reconstruction of the original sample using an inverse Radon transform. We demonstrate the feasibility of this tomographic technique via simulations of two structured cubes, made of aluminium and lead, and via an in-beam measurement of a coaxial adapter. The simulations yield images with FWHM edge resolutions of (177 ± 13) μm and a contrast-to-noise ratio of 5.6 ± 0.2 (7.8 ± 0.3) for aluminium (lead) compared to air. The tomographic reconstruction of the coaxial adapter serves as experimental evidence of the technique and yields a contrast-to-noise ratio of 15.3 ± 1.0 and an FWHM edge resolution of (117 ± 4) μm.
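The sinogram construction described above (a position-resolved width estimator of the deflection-angle distribution, one estimate per transverse bin) can be sketched in a few lines. The slab geometry, bin layout, and scattering widths below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sinogram_row(positions, angles, edges):
    """Estimate the RMS width of the scattering-angle distribution in each
    transverse-position bin; one projection yields one sinogram row."""
    widths = np.zeros(len(edges) - 1)
    idx = np.digitize(positions, edges) - 1
    for b in range(len(edges) - 1):
        sel = angles[idx == b]
        widths[b] = sel.std() if sel.size else 0.0
    return widths

# toy projection: tracks crossing a dense slab between x = -1 and +1 mm
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 20000)                 # track positions [mm]
sigma = np.where(np.abs(x) < 1.0, 5e-3, 1e-3)     # stronger scattering in slab
theta = rng.normal(0.0, sigma)                    # deflection angles [rad]
row = sinogram_row(x, theta, np.linspace(-5.0, 5.0, 21))
```

Stacking such rows over many rotation angles gives the sinogram fed to the inverse Radon transform.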
NASA Astrophysics Data System (ADS)
Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.
2017-06-01
In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processing steps have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method that retrieves phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication and the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, which simplifies the retrieval procedure dramatically. The theoretical model of this method is established both in parallel beam geometry for the Talbot interferometer and in fan beam geometry for the Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its extension to cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
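The algebraic point above is that tomographic reconstruction is linear, so a linear combination of two projection data sets commutes with it, and the combination may be applied after reconstruction instead of before. A toy check, with a random matrix standing in for the reconstruction operator (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(64, 128))       # any linear reconstruction operator
g1 = rng.normal(size=128)            # two mutually conjugate projection images
g2 = rng.normal(size=128)
a, b = 0.5, -0.5                     # combination coefficients

before = R @ (a * g1 + b * g2)       # combine first, then reconstruct
after = a * (R @ g1) + b * (R @ g2)  # reconstruct first, then combine
```

Because `R` is linear, `before` and `after` agree to floating-point precision, which is exactly why the retrieval step can be deferred until after reconstruction.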
3D+time acquisitions of 3D cell culture by means of lens-free tomographic microscopy
NASA Astrophysics Data System (ADS)
Berdeu, Anthony; Laperrousaz, Bastien; Bordy, Thomas; Morales, S.; Gidrol, Xavier; Picollet-D'hahan, Nathalie; Allier, Cédric
2018-02-01
We propose a three-dimensional (3D) imaging platform based on lens-free microscopy to perform multi-angle acquisitions on 3D cell cultures embedded in extracellular matrix (ECM). We developed algorithms based on the Fourier diffraction theorem to perform fully 3D reconstructions of biological samples, and we adapted the lens-free microscope to incubator conditions. Here we demonstrate, for the first time, 3D+time lens-free acquisitions of a 3D cell culture over 8 days directly inside the incubator. The reconstructed volume is as large as 5 mm³ and provides a unique way to observe multiple cell migration strategies within the same 3D cell culture experiment. Namely, in a 3D cell culture of prostate epithelial cells embedded within a Matrigel® matrix, we are able to distinguish single-cell 'leaders', migration of cell clusters, migration of large aggregates of cells, and also close-gap and large-scale branching. In addition, we observe long-scale 3D deformations of the ECM that modify the geometry of the 3D cell culture. Interestingly, we also observe the converse: large aggregates of cells may deform the ECM by generating traction forces over very long distances. In sum, we put forward a novel 3D lens-free tomographic microscopy technique to study single and collective cell migration, cell-to-cell interactions, and cell-to-matrix interactions.
Tomography with energy dispersive diffraction
NASA Astrophysics Data System (ADS)
Stock, S. R.; Okasinski, J. S.; Woods, R.; Baldwin, J.; Madden, T.; Quaranta, O.; Rumaiz, A.; Kuczewski, T.; Mead, J.; Krings, T.; Siddons, P.; Miceli, A.; Almer, J. D.
2017-09-01
X-ray diffraction can be used as the signal for tomographic reconstruction and provides a cross-sectional map of the crystallographic phases and related quantities. Diffraction tomography has been developed over the last decade using monochromatic x-radiation and an area detector. This paper reports tomographic reconstruction with polychromatic radiation and an energy-sensitive detector array. The energy dispersive diffraction (EDD) geometry, the instrumentation, and the reconstruction process are described and related to the expected resolution. Results of EDD tomography are presented for two samples containing hydroxyapatite (hAp). The first is a 3D-printed sample with an elliptical cross-section that contains synthetic hAp. The second is a human second metacarpal bone from the Roman-era cemetery at Ancaster, UK, and contains bio-hAp which may have been altered by diagenesis. Reconstructions with different diffraction peaks are compared. Prospects for future EDD tomography are also discussed.
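In EDD the detector sits at a fixed angle and each lattice spacing diffracts at its own photon energy, via Bragg's law rewritten for energy. A quick sketch of that relation; the d-spacing (roughly the hAp (002) spacing) and detector angle are assumed example values, not the paper's instrument parameters:

```python
import math

HC_KEV_ANGSTROM = 12.398  # hc in keV*angstrom

def diffraction_energy_kev(d_angstrom, two_theta_deg):
    """Photon energy at which a plane of spacing d diffracts into a fixed
    detector angle 2*theta: E = hc / (2 d sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return HC_KEV_ANGSTROM / (2.0 * d_angstrom * math.sin(theta))

# assumed example: d = 3.44 angstrom, detector at 2*theta = 6 degrees
e_kev = diffraction_energy_kev(3.44, 6.0)  # roughly 34 keV
```

Scanning the energy spectrum at the fixed angle thus reads out many lattice spacings at once, which is what makes polychromatic radiation usable for diffraction tomography.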
An image filtering technique for SPIDER visible tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonnesu, N., E-mail: nicola.fonnesu@igi.cnr.it; Agostini, M.; Brombin, M.
2014-02-15
The tomographic diagnostic developed for the beam generated in the SPIDER facility (100 keV, 50 A prototype negative ion source of ITER neutral beam injector) will characterize the two-dimensional particle density distribution of the beam. The simulations described in the paper show that instrumental noise has a large influence on the maximum achievable resolution of the diagnostic. To reduce its impact on beam pattern reconstruction, a filtering technique has been adapted and implemented in the tomography code. This technique is applied to the simulated tomographic reconstruction of the SPIDER beam, and the main results are reported.
Three-dimensional study of the vector potential of magnetic structures.
Phatak, Charudatta; Petford-Long, Amanda K; De Graef, Marc
2010-06-25
The vector potential is central to a number of areas of condensed matter physics, such as superconductivity and magnetism. We have used a combination of electron wave phase reconstruction and electron tomographic reconstruction to experimentally measure and visualize the three-dimensional vector potential in and around a magnetic Permalloy structure. The method can probe the vector potential of the patterned structures with a resolution of about 13 nm. A transmission electron microscope operated in the Lorentz mode is used to record four tomographic tilt series. Measurements for a square Permalloy structure with an internal closure domain configuration are presented.
Yang, Pengfei; Niu, Kai; Wu, Yijing; Struffert, Tobias; Dorfler, Arnd; Schafer, Sebastian; Royalty, Kevin; Strother, Charles; Chen, Guang-Hong
2015-12-01
Multimodal imaging using cone beam C-arm computed tomography (CT) may shorten the delay from ictus to revascularization for acute ischemic stroke patients with a large vessel occlusion. Largely because of limited temporal resolution, reconstruction of time-resolved CT angiography (CTA) from these systems has not yielded satisfactory results. We evaluated the image quality and diagnostic value of time-resolved C-arm CTA reconstructed using novel image processing algorithms. Studies were done under an Institutional Review Board-approved protocol. Postprocessing of data from 21 C-arm CT dynamic perfusion acquisitions from 17 patients with acute ischemic stroke was done to derive time-resolved C-arm CTA images. Two observers independently evaluated image quality and diagnostic content for each case. Intraclass correlation and receiver-operating characteristic analyses were performed to evaluate interobserver agreement and the diagnostic value of this novel imaging modality. Time-resolved C-arm CTA images were successfully generated from 20 data sets (95.2%, 20/21). The two observers agreed well that image quality was good for large cerebral arteries but more limited for small cerebral arteries (distal to M1, A1, and P1). Receiver-operating characteristic curves demonstrated excellent diagnostic value for detecting large vessel occlusions (area under the curve = 0.987-1). Time-resolved CTAs derived from C-arm CT perfusion acquisitions provide high quality images that allowed accurate diagnosis of large vessel occlusions. Although image quality of smaller arteries in this study was not optimal, ongoing modifications of the postprocessing algorithm will likely remove this limitation. Adding time-resolved C-arm CTAs to the capabilities of the angiography suite further enhances its suitability as a one-stop shop for the care of patients with acute ischemic stroke. © 2015 American Heart Association, Inc.
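The area-under-the-curve figures quoted come from receiver-operating characteristic analysis; the AUC can be computed directly as a Mann-Whitney rank statistic, i.e. the probability that a randomly chosen positive case outscores a randomly chosen negative one. The reader scores below are invented for illustration, not study data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs the positive score wins,
    ties counted as half a win."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# hypothetical confidence scores: occluded vs. non-occluded vessels
a = auc([0.9, 0.8, 0.95, 0.7], [0.2, 0.4, 0.75])  # 11 of 12 pairs won
```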
Moosavi Tayebi, Rohollah; Wirza, Rahmita; Sulaiman, Puteri S B; Dimon, Mohd Zamrin; Khalid, Fatimah; Al-Surmi, Aqeel; Mazaheri, Samaneh
2015-04-22
Computerized tomographic angiography (3D data representing the coronary arteries) and X-ray angiography (2D X-ray image sequences providing information about coronary arteries and their stenosis) are standard and popular assessment tools utilized for medical diagnosis of coronary artery diseases. At present, the results of both modalities are individually analyzed by specialists and it is difficult for them to mentally connect the details of these two techniques. The aim of this work is to assist medical diagnosis by providing specialists with the relationship between computerized tomographic angiography and X-ray angiography. In this study, coronary arteries from two modalities are registered in order to create a 3D reconstruction of the stenosis position. The proposed method starts with coronary artery segmentation and labeling for both modalities. Then, stenosis and relevant labeled artery in X-ray angiography image are marked by a specialist. Proper control points for the marked artery in both modalities are automatically detected and normalized. Then, a geometrical transformation function is computed using these control points. Finally, this function is utilized to register the marked artery from the X-ray angiography image on the computerized tomographic angiography and get the 3D position of the stenosis lesion. The result is a 3D informative model consisting of stenosis and coronary arteries' information from the X-ray angiography and computerized tomographic angiography modalities. The results of the proposed method for coronary artery segmentation, labeling and 3D reconstruction are evaluated and validated on the dataset containing both modalities. 
The advantage of this method is that it aids specialists in establishing a visual relationship between the corresponding coronary arteries from the two modalities, and in connecting stenosis points from X-ray angiography with their 3D positions on the coronary arteries from computerized tomographic angiography. Moreover, another benefit of this work is that the medical acquisition standards remain unchanged, meaning that no calibration of the acquisition devices is required. It can be applied on most computerized tomographic angiography and angiography devices.
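The registration step above computes a geometrical transformation function from matched control points; a common way to do this is a least-squares fit. A minimal 2D affine version is sketched below; the control points are invented, and the actual transformation family used in the paper may differ:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine map taking control points src -> dst.
    Returns a 2x3 matrix M with [x', y']^T = M @ [x, y, 1]^T."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M = dst
    return M.T

# invented control points: dst is src rotated 90 degrees, then shifted (1, 2)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = np.array([[1, 2], [1, 3], [0, 2], [0, 3]], float)
M = fit_affine_2d(src, dst)
mapped = (M @ np.hstack([src, np.ones((4, 1))]).T).T
```

Once fitted, the same matrix maps the marked stenosis point from the 2D image into the other modality's frame.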
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soufi, M; Asl, A Kamali; Geramifar, P
2015-06-15
Purpose: The objective of this study was to find the best seed localization parameters for the application of the random walk algorithm to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable image noise robustness. Also, its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm were done with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages: at least 70%, 80%, 90%, or 100% SUVmax for foreground seeds and up to 20%-55% SUVmax (in 5% increments) for background seeds. Also, to investigate the algorithm's performance on clinical data, 19 patients with lung tumors were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study.
Conclusion: The accurate results of the random walk algorithm in PET image segmentation support its application for radiation treatment planning and diagnosis.
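The best seed settings reported above (foreground at >= 70% of SUVmax, background at <= 30% of SUVmax) amount to two simple thresholds on the image. A sketch on a toy PET slice; the image and values are invented for illustration:

```python
import numpy as np

def random_walk_seeds(suv, fg_frac=0.7, bg_frac=0.3):
    """Return (foreground, background) seed masks using the study's best
    settings: foreground pixels >= fg_frac * SUVmax, background pixels
    <= bg_frac * SUVmax. Pixels in between are left unlabeled."""
    suv_max = suv.max()
    return suv >= fg_frac * suv_max, suv <= bg_frac * suv_max

# toy PET slice: a hot 'lesion' (SUV 10) on a cold background (SUV 1)
img = np.full((8, 8), 1.0)
img[3:5, 3:5] = 10.0
fg, bg = random_walk_seeds(img)
```

The unlabeled in-between pixels are exactly the ones the random walker then assigns by computing, for each, the probability that a random walk first reaches a foreground seed.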
Two-dimensional tomographic terahertz imaging by homodyne self-mixing.
Mohr, Till; Breuer, Stefan; Giuliani, G; Elsäßer, Wolfgang
2015-10-19
We realize a compact two-dimensional tomographic terahertz imaging experiment involving only one photoconductive antenna (PCA) that simultaneously serves as transmitter and receiver of the terahertz radiation. A hollow-core Teflon cylinder filled with α-Lactose monohydrate powder is studied at two terahertz frequencies, far away from and at a specific absorption line of the powder. This sample is placed between the antenna and a chopper wheel, which serves as back reflector of the terahertz radiation into the PCA. Amplitude and phase information of the continuous-wave (CW) terahertz radiation are extracted from the measured homodyne self-mixing (HSM) signal after interaction with the cylinder. The influence of refraction is studied by modeling the set-up in ZEMAX and is discussed by means of the measured 1D projections. Tomographic reconstruction using the Simultaneous Algebraic Reconstruction Technique (SART) allows both the object geometry and the α-Lactose filling to be identified.
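SART, used above for the reconstruction, updates every pixel once per sweep using ray residuals normalized by the row and column sums of the system matrix. A minimal dense sketch on a toy 2×2 image with four rays; the geometry and values are invented, and a real implementation would use sparse projection matrices:

```python
import numpy as np

def sart(A, b, n_iter=200, relax=1.0):
    """Simultaneous Algebraic Reconstruction Technique:
    x <- x + relax * D_col^-1 A^T D_row^-1 (b - A x),
    where D_row, D_col hold the row and column sums of A."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    row = A.sum(axis=1); row[row == 0] = 1.0
    col = A.sum(axis=0); col[col == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * (A.T @ ((b - A @ x) / row)) / col
    return x

# toy 2x2 'image' (flattened) probed by two horizontal and two vertical rays
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x = sart(A, A @ x_true)   # reconstruct from exact ray sums
```

For this consistent system the residual converges to zero; with more rays than this toy case, the same sweep structure scales directly.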
NASA Astrophysics Data System (ADS)
Dudak, J.; Zemlicka, J.; Krejci, F.; Karch, J.; Patzelt, M.; Zach, P.; Sykora, V.; Mrzilkova, J.
2016-03-01
X-ray microradiography and microtomography are imaging techniques with increasing applicability in the field of biomedical and preclinical research. The hybrid pixel detector Timepix makes it possible to obtain very high contrast for low-attenuating materials such as soft biological tissue. However, X-ray imaging of ex-vivo soft tissue samples is a difficult task due to their structural instability. Ex-vivo biological tissue is prone to rapid drying-out, which causes undesired changes in sample size and shape and later produces artefacts in the tomographic reconstruction. In this work we present the optimization of our Timepix-equipped micro-CT system, aiming to keep soft tissue samples in stable condition. With the suggested approach, higher contrast of tomographic reconstructions can be achieved, and large samples that require detector scanning can also be measured easily.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers, and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospective to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images, the apparent length, CT number level, and gradient of the mobile target, to extract the actual length and CT number of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes, manufactured from homogeneous tissue-equivalent gel material and embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters, the length of the target, the CT number level, and the motion amplitude of a mobile target, retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT number level, and gradient for well-defined mobile targets obtained from CBCT images. The motion model agreed with measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number of a mobile target was dependent on the CT number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target depends on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time included several motion cycles.
Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract actual length, size, and CT numbers distorted by motion in CBCT imaging. The model provides further information about the motion of the target.
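One consequence of such a sinusoidal motion model is that, when the imaging time spans a full cycle, a target of length L sweeps out an apparent length of about L + 2A, where A is the motion amplitude. A quick numerical check; the specific lengths and amplitudes are illustrative, not the phantom's values:

```python
import numpy as np

def apparent_length(actual_length, amplitude, n_phases=2000):
    """Extent swept by a 1-D target of given length whose center moves as
    A*sin(wt), sampled over one full motion cycle (imaging time >> period)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_phases)
    centers = amplitude * np.sin(t)
    lo = (centers - actual_length / 2.0).min()
    hi = (centers + actual_length / 2.0).max()
    return hi - lo

L_app = apparent_length(actual_length=30.0, amplitude=10.0)  # mm; ~ 30 + 2*10
```

Inverting this relation (given the measured apparent length and one more observable) is the kind of step the retrospective algorithm performs.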
NASA Astrophysics Data System (ADS)
Li, Bin; Wang, Dayong; Rong, Lu; Zhai, Changchao; Wang, Yunxin; Zhao, Jie
2018-02-01
Terahertz (THz) radiation is able to penetrate many different types of nonpolar and nonmetallic materials without the damaging effects of x-rays. THz technology can be combined with computed tomography (CT) to form THz CT, an effective imaging method used to visualize the internal structure of a three-dimensional sample as cross-sectional images. Here, we report an application of THz radiation as the source in CT imaging, replacing the x-rays. In this method, the sample cross section is scanned in all translation and rotation directions. Then, the projection data are reconstructed using a tomographic reconstruction algorithm. Two-dimensional (2-D) cross-sectional images of a chicken ulna were obtained with a continuous-wave (CW) THz CT system. Owing to the differences in THz absorption between substances, the compact bone and spongy bone inside the chicken ulna are structurally distinguishable in the 2-D cross-sectional images. Using the filtered back projection algorithm, we reconstructed the projection data of the chicken ulna at different projection angle intervals and found that artifacts and noise in the images increase strikingly as the projection angle intervals become larger, reflected in the blurred boundary of the compact bone. The quality and fidelity of the 2-D cross-sectional images could be substantially improved by reducing the projection angle intervals. Our experimental data demonstrate a feasible application of the CW THz CT system in biological imaging.
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing the computational complexity of tomographic PIV. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical comparison of the computational performance of MENT, SMART and MART, followed by validation on synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
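For context, the classical MART baseline compared against above applies multiplicative, ray-by-ray corrections, which keep intensities strictly positive (the property that entropy-oriented reconstructions exploit). A toy consistent system with invented geometry, not the paper's tomographic setup:

```python
import numpy as np

def mart(A, b, n_sweeps=100, relax=1.0):
    """Multiplicative ART: for each ray i, multiply each pixel j by
    (b_i / (A_i . x)) ** (relax * A_ij). Starting from a positive x,
    all intensities stay positive throughout."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(len(b)):
            proj = A[i] @ x
            if proj > 0 and b[i] > 0:
                x *= (b[i] / proj) ** (relax * A[i])
    return x

# tiny consistent system with a unique positive solution
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], float)
x_true = np.array([2.0, 1.0, 3.0])
x = mart(A, A @ x_true)
```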
Feasibility study on low-dosage digital tomosynthesis (DTS) using a multislit collimation technique
NASA Astrophysics Data System (ADS)
Park, S. Y.; Kim, G. A.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kang, S. Y.; Kim, K. S.; Lim, H. W.; Lee, H. W.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Woo, T. H.
2018-04-01
In this study, we investigated an effective low-dose digital tomosynthesis (DTS) scheme in which a multislit collimator placed between the X-ray tube and the patient oscillates during projection data acquisition, partially blocking the X-ray beam to the patient and thereby reducing the radiation dose. We performed a simulation of the proposed DTS with two sets of multislit collimators, both having a 50% duty cycle, and investigated the image characteristics to demonstrate the feasibility of the approach. In the simulation, all projections were taken over a tomographic angle of θ = ±50° with an angle step of Δθ = 2°. We utilized an iterative algorithm based on a compressed-sensing (CS) scheme for more accurate DTS reconstruction. Using the proposed DTS, we successfully obtained CS-reconstructed DTS images with no bright-band artifacts around the multislit edges of the collimator, thus maintaining the image quality. Therefore, the use of multislit collimation in current real-world DTS systems can reduce the radiation dose to patients.
Limited-angle tomography for analyzer-based phase-contrast X-ray imaging
Majidi, Keivan; Wernick, Miles N; Li, Jun; Muehleman, Carol; Brankov, Jovan G
2014-01-01
Multiple-Image Radiography (MIR) is an analyzer-based phase-contrast X-ray imaging method (ABI), which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about scattering, refraction and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR. 
Both results show that there is an optimum range of angular view for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume. PMID:24898008
Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D
2017-08-01
The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomograph (PET)/x-ray computed tomography (CT) scanner, has various parameter options, such as the total level of regularization, subsets, and iterations. The acquisition time in PET also plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [18F]FDG- and Ga-68-filled image quality (IQ) phantoms. [18F]FDG imaging produced better images than Ga-68. The best compromise for the optimization of all image quality factors is achieved with at least a 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and a 30 min acquisition time were found to optimize most of the figures of merit investigated.
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
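The two ingredients described above, thresholding small Jacobian entries to obtain a sparse matrix and then solving the regularized system with conjugate gradients, can be sketched with SciPy. The matrix sizes, threshold, and Tikhonov weight are invented for illustration; the paper's block-wise parallel variant is not reproduced here:

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
J = rng.normal(size=(300, 200))       # stand-in for a dense EIT Jacobian
J[np.abs(J) < 1.0] *= 1e-6            # mimic many near-zero sensitivities

# 1) sparse-matrix reduction: threshold tiny entries to exact zeros
J_sp = csr_matrix(np.where(np.abs(J) < 1e-3, 0.0, J))
density = J_sp.nnz / (300 * 200)      # fraction of entries actually stored

# 2) regularized normal equations solved with conjugate gradients
b = rng.normal(size=300)              # measurement residual vector
H = J_sp.T @ J_sp + identity(200)     # Tikhonov weight 1.0 (illustrative)
rhs = J_sp.T @ b
x, info = cg(H, rhs)                  # info == 0 on convergence
```

The memory saving comes from `csr_matrix` storing only the nonzeros, and CG needs only matrix-vector products with `H`, which is what makes block-wise parallelization natural.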
Determination of the position of nucleus cochlear implant electrodes in the inner ear.
Skinner, M W; Ketten, D R; Vannier, M W; Gates, G A; Yoffie, R L; Kalender, W A
1994-09-01
Accurate determination of intracochlear electrode position in patients with cochlear implants could provide a basis for detecting migration of the implant and could aid in the selection of stimulation parameters for sound processor programming. New computer algorithms for submillimeter resolution and 3-D reconstruction from spiral computed tomographic (CT) scans now make it possible to accurately determine the position of implanted electrodes within the cochlear canal. The accuracy of these algorithms was tested using an electrode array placed in a phantom model. Measurements of electrode length and interelectrode distance from spiral CT scan reconstructions were in close agreement with those from stereo microscopy. Although apparent electrode width was increased on CT scans due to partial volume averaging, a correction factor was developed for measurements from conventional radiographs and an expanded CT absorption value scale added to detect the presence of platinum electrodes and wires. The length of the cochlear canal was calculated from preoperative spiral CT scans for one patient, and the length of insertion of the electrode array was calculated from her postoperative spiral CT scans. The cross-sectional position of electrodes in relation to the outer bony wall and modiolus was measured and plotted as a function of distance with the electrode width correction applied.
NASA Astrophysics Data System (ADS)
Chang, Jenghwa; Aronson, Raphael; Graber, Harry L.; Barbour, Randall L.
1995-05-01
We present results examining the dependence of image quality for imaging in dense scattering media as influenced by the choice of parameters pertaining to the physical measurement and by factors influencing the efficiency of the computation. The former include the density of the weight matrix as affected by the target volume, view angle, and source condition; the latter include the density of the weight matrix and the type of algorithm used. These were examined by solving a one-step linear perturbation equation derived from the transport equation using three different algorithms with constraints: POCS, CGD, and SART. The above were explored by evaluating four different 3D cylindrical phantom media: a homogeneous medium, a medium containing a single black rod on the axis, one containing a single black rod parallel to the axis, and one containing thirteen black rods arrayed in the shape of an 'X'. Solutions to the forward problem were computed using Monte Carlo methods for an impulse source, from which time-independent and time-harmonic detector responses were calculated. The influence of target volume on image quality and computational efficiency was studied by computing solutions for three types of reconstructions: 1) 3D reconstruction, which considered each voxel individually; 2) 2D reconstruction, which assumed that symmetry along the cylinder axis was known a priori; 3) 2D limited reconstruction, which assumed that only those voxels in the plane of the detectors contribute information to the detector readings. The effect of view angle was explored by comparing computed images obtained from a single source, whose position was varied, as well as for the type of tomographic measurement scheme used (i.e., radial scan versus transaxial scan). The former condition was also examined for the dependence of the above on the choice of source condition [i.e., cw (2D reconstructions) versus time-harmonic (2D limited reconstructions) source].
The efficiency of the computational effort was explored principally by conducting a weight matrix 'threshold titration' study: the ratio of each matrix element to the maximum element of its row was computed, and the element was set to zero if the ratio was less than a preselected threshold. Results showed that all three types of reconstructions provided good image quality, with the 3D reconstruction outperforming the other two; the time required for the 2D and 2D limited reconstructions was much less (<10%) than that for the 3D reconstruction. The 'threshold titration' study showed that artifacts were present when the threshold was 5% or higher, and no significant differences in image quality were observed for thresholds below 1%, in which case 38% (21,849 of 57,600) of the weight elements were set to zero. Restricting the view angle degraded image quality, but in all cases clearly recognizable images were obtained.
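The row-relative thresholding used in the titration study can be sketched directly (numpy, with a random stand-in weight matrix):

```python
import numpy as np

def threshold_titrate(W, threshold):
    """Zero each weight whose ratio to its row's maximum element is below `threshold`."""
    row_max = np.abs(W).max(axis=1, keepdims=True)
    return np.where(np.abs(W) >= threshold * row_max, W, 0.0)

rng = np.random.default_rng(1)
W = rng.random((60, 96))              # stand-in photon-weight matrix
for t in (0.0, 0.01, 0.05):
    zeroed = (threshold_titrate(W, t) == 0).sum()
    print(f"threshold {t:4.2f}: {zeroed} of {W.size} elements zeroed")
```

Raising the threshold trades memory and compute savings against the artifact level the study reports above 5%.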
Buzmakov, Alexey; Chukalina, Marina; Nikolaev, Dmitry; Schaefer, Gerald; Gulimova, Victoria; Saveliev, Sergey; Tereschenko, Elena; Seregin, Alexey; Senin, Roman; Prun, Victor; Zolotov, Denis; Asadchikov, Victor
2013-01-01
This paper presents the results of a comprehensive analysis of structural changes in the caudal vertebrae of Turner's thick-toed geckos by computer microtomography and X-ray fluorescence analysis. We present the algorithms used for reconstructing the tomographic images, which can cope with the high-noise projections typical for samples of this nature. Reptiles, owing to their hardiness, small size, amniote status, and a number of other valuable features, are an attractive model object for long-duration orbital experiments on unmanned spacecraft. Possible changes in their bone tissue under the influence of spaceflight are the subject of discussion among biologists from different laboratories around the world.
Analytical-Based Partial Volume Recovery in Mouse Heart Imaging
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; deKemp, Robert A.
2011-02-01
Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
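The profile-based recovery idea can be illustrated in 1D: blur a unit-activity wall of known thickness with the scanner point spread function and read off the peak recovery coefficient (a simplified sketch; the Gaussian PSF, FWHM, and thicknesses below are illustrative, not the measured values or the full 2D grid convolution described above):

```python
import numpy as np

def gaussian_psf(fwhm_mm, dx_mm, half_width_mm=10.0):
    """Discretized Gaussian point-spread function, normalized to unit sum."""
    sigma = fwhm_mm / 2.355
    x = np.arange(-half_width_mm, half_width_mm + dx_mm / 2, dx_mm)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def recovery_coefficient(wall_mm, fwhm_mm, dx_mm=0.05):
    """Peak recovery of a unit-activity wall of given thickness after PSF blur."""
    x = np.arange(-20.0, 20.0, dx_mm)
    wall = (np.abs(x) <= wall_mm / 2).astype(float)
    blurred = np.convolve(wall, gaussian_psf(fwhm_mm, dx_mm), mode="same")
    return blurred.max()               # true peak is 1.0

rc = recovery_coefficient(wall_mm=1.0, fwhm_mm=1.8)   # thin LV wall, ~mm-scale PSF
corrected_peak = 0.45 / rc                            # partial-volume-corrected value
```

Dividing a measured myocardial value by the regional recovery coefficient is what restores the underestimated activity in thin walls.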
Prol, Fabricio S; Camargo, Paulo O; Muella, Marcio T A H
2017-01-01
The incomplete geometrical coverage of the Global Navigation Satellite System (GNSS) makes the ionospheric tomographic system an ill-conditioned problem for ionospheric imaging. In order to detect the principal limitations of the ill-conditioned tomographic solutions, numerical simulations of the ionosphere are under constant investigation. In this paper, we investigate the accuracy of the Algebraic Reconstruction Technique (ART) and Multiplicative ART (MART) for tomographic reconstruction of Chapman profiles using a simulated optimum scenario of GNSS signals tracked by ground-based receivers. Chapman functions were used to represent the ionospheric morphology, and a set of analyses was conducted to assess ART and MART performance in estimating the Total Electron Content (TEC) and the parameters that describe the Chapman function. The results showed that MART performed better in reconstructing the electron density peak, while ART gave a better representation of TEC and the shape of the ionosphere. Since we used an optimum scenario of GNSS signals, the analyses indicate the intrinsic problems that may occur with ART and MART in recovering valuable information for many applications in Telecommunication, Spatial Geodesy and Space Weather.
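The classic per-ray update rules behind ART and MART can be sketched as follows (a toy consistent system standing in for the ray-path geometry, not the GNSS scenario; the MART exponent assumes weights in [0, 1]):

```python
import numpy as np

def art(A, b, x0, sweeps=200, relax=1.0):
    """Additive ART (Kaczmarz): project the estimate onto each ray equation in turn."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

def mart(A, b, x0, sweeps=200, relax=1.0):
    """Multiplicative ART: rescale the estimate by (measured / predicted) per ray."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            pred = a_i @ x
            if pred > 0:
                x = x * (b_i / pred) ** (relax * a_i)  # exponent in [0, 1] per voxel
    return x

rng = np.random.default_rng(2)
A = rng.random((30, 12))          # nonnegative path weights (rays x voxels)
x_true = rng.random(12) + 0.5     # positive stand-in electron density
b = A @ x_true                    # consistent slant "TEC" measurements
x_art = art(A, b, np.zeros(12))
x_mart = mart(A, b, np.ones(12))
```

MART's multiplicative update keeps the solution positive, which is one reason it tends to do better on the sharp density peak, while ART's additive projections distribute error more evenly over the profile.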
NASA Astrophysics Data System (ADS)
Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.
2012-03-01
C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Due to the limited acquisition speed for the complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts, in particular for patients who cannot comply with breathing commands (e.g. due to anesthesia). Intra-scan motion estimation and compensation is therefore required. Here, a scheme for projection-based local breathing motion estimation is combined with an anatomy-adapted interpolation strategy and subsequent motion-compensated filtered back projection. The breathing motion is measured as a displacement vector on the projections of a tomographic short-scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy-adapted volumetric motion vector field interpolation deliver a 3D motion vector per voxel. Motion-compensated filtered back projection incorporates this motion vector field in the image reconstruction process. The approach is applied in animal experiments on a flat-panel C-arm system, delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.
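In a simplified cone-beam model, scaling a diaphragm displacement measured on the detector back to the iso-center plane reduces to dividing by the geometric magnification; a sketch with illustrative (assumed) C-arm distances, not the paper's calibration:

```python
def detector_to_isocenter(shift_det_mm, sod_mm, sdd_mm):
    """Scale a displacement measured on the flat panel (e.g. the diaphragm edge)
    to the iso-center plane via the cone-beam magnification SDD / SOD."""
    magnification = sdd_mm / sod_mm
    return shift_det_mm / magnification

# Illustrative source-to-isocenter and source-to-detector distances:
shift_iso = detector_to_isocenter(3.0, sod_mm=785.0, sdd_mm=1195.0)
```

The resulting iso-center displacement is what the anatomy-adapted interpolation then spreads over the volume to give a per-voxel motion vector.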
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) estimating activity for a fixed signal size and location. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Cryo-Electron Tomography for Structural Characterization of Macromolecular Complexes
Cope, Julia; Heumann, John; Hoenger, Andreas
2011-01-01
Cryo-electron tomography (cryo-ET) is an emerging 3-D reconstruction technology that combines the principles of tomographic 3-D reconstruction with the unmatched structural preservation of biological material embedded in vitreous ice. Cryo-ET is particularly suited to investigating cell-biological samples and large macromolecular structures that are too polymorphic to be reconstructed by classical averaging-based 3-D reconstruction procedures. This unit aims to make cryo-ET accessible to newcomers and discusses the specialized equipment required, as well as the relevant advantages and hurdles associated with sample preparation by vitrification and cryo-ET. Protocols describe specimen preparation, data recording and 3-D data reconstruction for cryo-ET, with a special focus on macromolecular complexes. A step-by-step procedure for specimen vitrification by plunge freezing is provided, followed by the general practicalities of tilt-series acquisition for cryo-ET, including advice on how to select an area appropriate for acquiring a tilt series. A brief introduction to the underlying computational reconstruction principles applied in tomography is described, along with instructions for reconstructing a tomogram from cryo-tilt series data. Finally, a method is detailed for extracting small subvolumes containing identical macromolecular structures from tomograms for alignment and averaging as a means to increase the signal-to-noise ratio and eliminate missing wedge effects inherent in tomographic reconstructions. PMID:21842467
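The subvolume-averaging step described above can be illustrated with synthetic data (a sketch only: particle positions are assumed known and the subvolumes pre-aligned, whereas real workflows must first align them and handle the missing wedge):

```python
import numpy as np

rng = np.random.default_rng(3)
motif = rng.normal(size=(8, 8, 8))        # the repeating macromolecular structure

# Simulated tomogram content: many noisy, pre-aligned copies of the same particle.
n_copies, sigma = 64, 3.0
subvolumes = np.stack([motif + sigma * rng.normal(size=motif.shape)
                       for _ in range(n_copies)])

average = subvolumes.mean(axis=0)         # SNR grows roughly as sqrt(N)

noise_single = (subvolumes[0] - motif).std()
noise_avg = (average - motif).std()
```

Averaging the 64 copies reduces the residual noise by roughly a factor of eight, which is the signal-to-noise gain the protocol exploits.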
Low-dose x-ray tomography through a deep convolutional neural network
Yang, Xiaogang; De Andrade, Vincent; Scullin, William; ...
2018-02-07
Synchrotron-based X-ray tomography offers the potential of rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with the CNN show signal-to-noise ratios similar to long-exposure-time projections, and much lower noise and more structural information than low-dose fast acquisition without the CNN. We optimized this approach using simulated samples and further validated it on experimental nano-computed tomography data of radiation-sensitive mouse brains acquired with a transmission X-ray microscope. We demonstrate that automated algorithms can reliably trace brain structures in datasets collected with the low-dose CNN approach. As a result, this method can be applied to other tomographic or scanning-based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.
Evaluation of Airborne l- Band Multi-Baseline Pol-Insar for dem Extraction Beneath Forest Canopy
NASA Astrophysics Data System (ADS)
Li, W. M.; Chen, E. X.; Li, Z. Y.; Jiang, C.; Jia, Y.
2018-04-01
DEM beneath the forest canopy is difficult to extract with optical stereo pairs, InSAR, and Pol-InSAR techniques. Tomographic SAR (TomoSAR), exploiting differences in penetration and view angle, can resolve both the vertical structure of the canopy and the underlying ground. This paper aims at evaluating the possibility of TomoSAR for underlying DEM extraction. Airborne L-band repeat-pass Pol-InSAR data collected in the BioSAR 2008 campaign were applied to reconstruct the 3D structure of the forest. The sum-of-Kronecker-products and algebraic synthesis algorithms were used to extract the ground structure, and a phase-linking algorithm was applied to estimate the ground phase. The Goldstein branch-cut approach was then used to unwrap the phases and estimate the underlying DEM. The average difference between the extracted underlying DEM and the lidar DEM is about 3.39 m in our test site, indicating that underlying DEM estimation with the airborne L-band repeat-pass TomoSAR technique is feasible.
Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Zichao; Leyffer, Sven; Wild, Stefan M.
2016-01-01
Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virador, Patrick R.G.
The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully-3D tomographic reconstruction using a septa-less, stationary (i.e. no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points instead of rebinning the events into predefined crystal-face LORs, which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs).
In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins, (b) employ a semi-iterative procedure in order to estimate the unsampled data and (c) mash the in-plane projections, i.e. 2D data, with the projection data from the first oblique angles, which are then used to reconstruct the preliminary image in the 3D Reprojection (3DRP) algorithm. The author presents reconstructed images of point sources and extended sources in both 2D and 3D. The images show that the camera is anticipated to eliminate radial elongation and produce artifact-free and essentially spatially isotropic images throughout the entire FOV. It has a resolution of 1.50 ± 0.75 mm FWHM near the center, 2.25 ± 0.75 mm FWHM in the bulk of the FOV, and 3.00 ± 0.75 mm FWHM near the edge and corners of the FOV.
A generalized reconstruction framework for unconventional PET systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathews, Aswin John, E-mail: amathews@wustl.edu; Li, Ke; O’Sullivan, Joseph A.
2015-08-15
Purpose: Quantitative estimation of the radionuclide activity concentration in positron emission tomography (PET) requires precise modeling of PET physics. The authors are focused on designing unconventional PET geometries for specific applications. This work reports the creation of a generalized reconstruction framework, capable of reconstructing tomographic PET data for systems that use right cuboidal detector elements positioned at arbitrary geometry using a regular Cartesian grid of image voxels. Methods: The authors report on a variety of design choices and optimization for the creation of the generalized framework. The image reconstruction algorithm is maximum likelihood-expectation-maximization. System geometry can be specified using a simple script. Given the geometry, a symmetry seeking algorithm finds existing symmetry in the geometry with respect to the image grid to improve the memory usage/speed. Normalization is approached from a geometry independent perspective. The system matrix is computed using Siddon's algorithm and a subcrystal approach. The program is parallelized through open multiprocessing and message passing interface libraries. A wide variety of systems can be modeled using the framework. This is made possible by modeling the underlying physics and data correction, while generalizing the geometry dependent features. Results: Application of the framework for three novel PET systems, each designed for a specific application, is presented to demonstrate the robustness of the framework in modeling PET systems of unconventional geometry. Three PET systems of unconventional geometry are studied. (1) Virtual-pinhole half-ring insert integrated into Biograph-40: although the insert device improves image quality over the conventional whole-body scanner, the image quality varies depending on the position of the insert and the object.
(2) Virtual-pinhole flat-panel insert integrated into Biograph-40: preliminary results from an investigation into a modular flat-panel insert are presented. (3) Plant PET system: a reconfigurable PET system for imaging plants, with resolution of greater than 3.3 mm, is shown. Using the automated symmetry seeking algorithm, the authors achieved a compression of the storage and memory requirement by a factor of approximately 50 for the half-ring and flat-panel systems. For the plant PET system, the compression ratio is approximately five. The ratio depends on the level of symmetry that exists in different geometries. Conclusions: This work brings the field closer to arbitrary geometry reconstruction. A generalized reconstruction framework can be used to validate multiple hypotheses, and the effort required to investigate each system is reduced. Memory usage/speed can be improved with certain optimizations.
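The system-matrix elements in such a framework come from ray tracing through the voxel grid with Siddon's algorithm; a minimal 2D sketch of the idea (parametric plane crossings, midpoint cell lookup; not the authors' subcrystal implementation) is:

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny, dx=1.0, dy=1.0):
    """Siddon-style ray tracing (2D): intersection length of the segment
    p0 -> p1 with each cell of an nx-by-ny grid whose corner is at (0, 0)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    # Parametric positions where the ray crosses each grid plane.
    for planes, delta, origin in ((np.arange(nx + 1) * dx, d[0], p0[0]),
                                  (np.arange(ny + 1) * dy, d[1], p0[1])):
        if abs(delta) > 1e-12:              # skip planes parallel to the ray
            alphas.extend((planes - origin) / delta)
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    length = np.hypot(d[0], d[1])
    cells = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d       # interval midpoint identifies the cell
        ix, iy = int(mid[0] // dx), int(mid[1] // dy)
        if 0 <= ix < nx and 0 <= iy < ny:
            cells[(ix, iy)] = cells.get((ix, iy), 0.0) + (a1 - a0) * length
    return cells

# A horizontal LOR crossing a 2x1 grid deposits unit length in each cell:
cells = siddon_2d((-1.0, 0.5), (3.0, 0.5), nx=2, ny=1)
```

In 3D the same parametric construction runs over three plane families, and each nonzero length becomes one system-matrix entry for that LOR.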
GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy
NASA Astrophysics Data System (ADS)
Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.
2012-06-01
Phase microscopy techniques have regained interest as they allow the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy that permits 3D observations with a finer resolution than incoherent-light microscopes. Specimens are imaged through a series of 2D holograms: their accumulation progressively fills the frequency support of the specimen in Fourier space, and a 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, both acquisition and reconstruction must complete before an image is available, which precludes real-time monitoring of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed: no less than one minute per acquisition, after which a high-end PC reconstructs the 3D image in 20 seconds. We now aim for an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on the CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. We then present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for the FFT and of higher bandwidth for the filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to about 1-2 seconds depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of the GPU for 3D image interaction under our specific conditions.
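The producer-consumer coupling between acquisition and reconstruction can be sketched with Python threads (synthetic holograms and a 2D stand-in for the 3D Fourier-space accumulation; frame counts, sizes, and the queue depth are all illustrative):

```python
import queue
import threading
import numpy as np

def acquire(holo_queue, n_frames):
    """Producer: pushes 2D holograms (here synthetic) as they are acquired."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        holo_queue.put(rng.normal(size=(64, 64)))
    holo_queue.put(None)                       # sentinel: acquisition finished

def reconstruct(holo_queue, spectrum):
    """Consumer: accumulates each hologram's spectrum, progressively filling
    frequency space, then inverse-transforms for a preview image."""
    while True:
        holo = holo_queue.get()
        if holo is None:
            break
        spectrum += np.fft.fft2(holo)          # progressive frequency filling
    return np.fft.ifft2(spectrum).real

q = queue.Queue(maxsize=8)                     # bounded buffer shared in CPU memory
spectrum = np.zeros((64, 64), dtype=complex)
producer = threading.Thread(target=acquire, args=(q, 16))
producer.start()
preview = reconstruct(q, spectrum)
producer.join()
```

In the GPU variant described above, the FFT and filtering inside the consumer are the parts dispatched to the device, while the queue discipline stays the same.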
Computational adaptive optics for broadband optical interferometric tomography of biological tissue.
Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A
2012-05-08
Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach offers neither the flexibility of simultaneously correcting aberrations for all imaging depths nor the adaptability to correct sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in the tomogram rather than in the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single-scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.
Solar multi-conjugate adaptive optics performance improvement
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Zhang, Xiaofang; Song, Jie
2015-08-01
To overcome the effects of atmospheric anisoplanatism, multi-conjugate adaptive optics (MCAO) has been widely used to widen the field of view (FOV) of solar telescopes. MCAO corrects turbulence with several deformable mirrors (DMs) conjugated to different altitudes, overcoming the small corrected FOV achievable with conventional AO. With the assistance of the Multi-threaded Adaptive Optics Simulator (MAOS), we can perform a 3D reconstruction of the distorted wavefront; the correction is applied by one or more DMs. This technique benefits from information about atmospheric turbulence at different layers, which allows the wavefront to be reconstructed extremely well. In MAOS, the sensors are simulated either as idealized wavefront gradient sensors, as tip-tilt sensors based on the best Zernike fit, or as a WFS using physical optics and incorporating user-specified pixel characteristics and a matched-filter pixel processing algorithm. Considering only atmospheric anisoplanatism, we focus on how the performance of a solar MCAO system depends on the number of DMs and their conjugate heights, and we theoretically quantify the performance of the tomographic solar MCAO system. The results indicate that the tomographic AO system can improve the average Strehl ratio of a solar telescope by employing only one or two DMs conjugated to the optimum altitudes, and that the Strehl ratio increases significantly when more deformable mirrors are used. Furthermore, we discuss the effects of DM conjugate altitude on the correction achievable by the MCAO system and present the optimum DM conjugate altitudes.
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Zhang, Jialin; Zuo, Chao
2017-10-01
Optical diffraction tomography (ODT) is an effective label-free technique for quantitative refractive-index imaging, which enables long-term monitoring of the internal three-dimensional (3D) structures and molecular composition of biological cells with minimal perturbation. However, existing optical tomographic methods generally rely on an interferometric configuration for phase measurement and on sophisticated mechanical systems for sample rotation or beam scanning. The measurement is therefore susceptible to phase errors arising from coherent speckle, environmental vibrations, and mechanical error during the data acquisition process. To overcome these limitations, we present a new ODT technique based on non-interferometric phase retrieval and programmable illumination from a light-emitting diode (LED) array. The experimental system is built on a traditional bright-field microscope, with the light source replaced by a programmable LED array, which provides angle-variable quasi-monochromatic illumination with an angular coverage of +/-37 degrees in both x and y directions (corresponding to an illumination numerical aperture of ˜0.6). The transport of intensity equation (TIE) is used to recover the phase at different illumination angles, and the refractive index distribution is reconstructed within the ODT framework under the first Rytov approximation. The missing-cone problem in ODT is addressed by an iterative non-negativity constraint algorithm, and the misalignment of the LED array is further corrected numerically to improve the accuracy of refractive index quantification. Experiments on polystyrene beads and thick biological specimens show that the proposed approach allows accurate refractive index reconstruction while greatly reducing system complexity and environmental sensitivity compared to conventional interferometric ODT approaches.
Jang, Hansol; Lim, Gukbin; Hong, Keum-Shik; Cho, Jaedu; Gulsen, Gultekin; Kim, Chang-Seok
2017-11-28
Diffuse optical tomography (DOT) has been studied for use in the detection of breast cancer, cerebral oxygenation, and cognitive brain signals. As optical imaging studies have increased significantly, acquiring imaging data in real time has become increasingly important. We have developed frequency-division multiplexing (FDM) DOT systems to analyze their performance with respect to acquisition time and imaging quality, in comparison with the conventional time-division multiplexing (TDM) DOT. A large tomographic area of a cylindrical phantom 60 mm in diameter could be successfully reconstructed using both TDM DOT and FDM DOT systems. In our experiment with 6 source-detector (S-D) pairs, the TDM DOT and FDM DOT systems required 6.18 and 1 s, respectively, to obtain a single tomographic data set. While the absorption coefficient of the reconstruction image was underestimated in the case of the FDM DOT, we experimentally confirmed that the abnormal region can be clearly distinguished from the background phantom using both methods.
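The speed advantage of FDM over TDM comes from modulating each source at its own frequency and demodulating all source contributions from a single simultaneous record. A minimal lock-in demodulation sketch follows; the sample rate, modulation frequencies, and amplitudes are made-up values for illustration, not the authors' hardware parameters.

```python
import numpy as np

fs = 10_000.0                        # detector sample rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)      # 1 s simultaneous acquisition
f1, f2 = 200.0, 310.0                # per-source modulation frequencies, assumed
a1, a2 = 0.8, 0.3                    # optical amplitudes reaching the detector

# One detector records both modulated sources at the same time
detector = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)

def lock_in(sig, f, t):
    """Recover the amplitude at frequency f (I/Q demodulation, mean as low-pass)."""
    i = 2.0 * np.mean(sig * np.sin(2 * np.pi * f * t))
    q = 2.0 * np.mean(sig * np.cos(2 * np.pi * f * t))
    return np.hypot(i, q)

amp1, amp2 = lock_in(detector, f1, t), lock_in(detector, f2, t)
```

Each source's amplitude is recovered from the single shared record, which is why an FDM system needs only one acquisition window where a TDM system needs one per source.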
Optical tomograph optimized for tumor detection inside highly absorbent organs
NASA Astrophysics Data System (ADS)
Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Dinten, Jean-Marc; Josserand, Véronique; Coll, Jean-Luc
2011-05-01
This paper presents a tomograph for small-animal fluorescence imaging. The compact and cost-effective system described in this article was designed to address the problem of tumor detection inside highly absorbent heterogeneous organs, such as lungs. To validate the tomograph's ability to detect cancerous nodules inside lungs, in vivo tumor growth was studied in seven mice bearing murine mammary tumors labeled with Alexa Fluor 700. The mice were imaged successively 10, 12, and 14 days after the primary tumor implantation, and the fluorescence maps were compared over this period. As expected, the reconstructed fluorescence increases with the tumor growth stage.
Lamb wave tomographic imaging system for aircraft structural health assessment
NASA Astrophysics Data System (ADS)
Schwarz, Willi G.; Read, Michael E.; Kremer, Matthew J.; Hinders, Mark K.; Smith, Barry T.
1999-01-01
A tomographic imaging system using ultrasonic Lamb waves for the nondestructive inspection of aircraft components such as wings and fuselage is being developed. The computer-based system provides large-area inspection capability by electronically scanning an array of transducers that can be easily attached to flat and curved surfaces and involves no moving parts. Images of the inspected area are produced in near real time using a tomographic reconstruction method adapted from seismological applications. Changes in material properties caused by structural flaws such as disbonds, corrosion, and fatigue cracks can be effectively detected and characterized with this fast NDE technique.
A new ART code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code, based on the iterative-refinement method for least-squares solutions, is presented for tomographic reconstruction. The accuracy and convergence of the technique are evaluated through application to numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to the solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only; the convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
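A minimal sketch of the ART idea: classic Kaczmarz row-action updates on a toy 2x2 image probed by four rays. This illustrates the family of algorithms, not the iterative-refinement least-squares variant the abstract develops; the system matrix and image values are invented for the example.

```python
import numpy as np

def art_kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """ART as cyclic Kaczmarz updates: project the current estimate onto
    the hyperplane of one projection equation at a time."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2x2 "image" [[1, 2], [3, 4]] probed by 4 rays (two row sums, two column sums)
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true                     # noiseless ray integrals
x_rec = art_kaczmarz(A, b)
```

Starting from zero, the iterates stay in the row space of A, so for a consistent system Kaczmarz converges to the minimum-norm solution; for this toy system that solution coincides with the original image.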
NASA Astrophysics Data System (ADS)
Espinosa, Luis; Prieto, Flavio; Brancheriau, Loïc.
2017-03-01
Trees play a major ecological and sanitary role in modern cities. Nondestructive imaging methods allow the inner structure of trees to be analyzed without altering their condition. In this study, we evaluate the influence of wood anisotropy on ultrasonic tomographic image reconstruction, estimating time-of-flight (TOF) with a raytracing approach, a technique used notably in exploration seismology to simulate wave fronts in elastic media. Mechanical parameters for six wood species and one isotropic material were defined, and their wave fronts and corresponding TOF values were obtained with the proposed raytracing method. When the material was anisotropic, the ray paths between the emitter and the receivers were not straight; curved rays were therefore obtained for wood, and the TOF measurements were affected accordingly. Tomographic images were then computed from the TOF measurements with the filtered back-projection algorithm, a technique designed for straight-ray tomography but also commonly used in wood acoustic tomography. First, discs without inner defects were tested for the isotropic material and for wood (a spruce sample): the isotropic material yielded a flat, uniform image, whereas the wood sample yielded a gradient of velocities. Next, centric and eccentric defects were tested for both the isotropic and orthotropic cases. For wood, reconstruction with an algorithm intended for straight-ray tomography produced images with velocity variations from the border to the center that made it difficult to discriminate possible defects inside the samples, especially in the eccentric cases.
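Filtered back-projection, as used above, can be sketched in a few lines: ramp-filter each projection in the Fourier domain, then smear the filtered values back along straight rays. The toy below is an illustrative sketch with nearest-neighbour interpolation and an invented disc phantom, not the authors' code; it reconstructs a centred disc from its analytic parallel-beam sinogram.

```python
import numpy as np

def ramp_filter(sino):
    """Ram-Lak filtering of each projection row via FFT."""
    freqs = np.abs(np.fft.fftfreq(sino.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * freqs, axis=1))

def fbp(sino, angles):
    """Filtered back-projection onto a square grid the size of the detector."""
    n = sino.shape[1]
    c = (n - 1) / 2
    xx, yy = np.meshgrid(np.arange(n) - c, np.arange(n) - c)
    img = np.zeros((n, n))
    for proj, th in zip(ramp_filter(sino), angles):
        t = xx * np.cos(th) + yy * np.sin(th)              # detector coordinate per pixel
        idx = np.clip(np.round(t + c).astype(int), 0, n - 1)
        img += proj[idx]                                    # smear along the ray direction
    return img * np.pi / len(angles)

# Analytic sinogram of a centred disc: identical parallel projections at every angle
n, r = 64, 12
t = np.arange(n) - (n - 1) / 2
proj = 2 * np.sqrt(np.clip(r**2 - t**2, 0, None))
angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.tile(proj, (len(angles), 1))
img = fbp(sino, angles)
```

The back-projection step assumes straight rays, which is precisely the assumption the abstract shows to be violated in anisotropic wood: when the true ray paths curve, the measured TOFs no longer match this geometric model and artifacts appear.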
NASA Astrophysics Data System (ADS)
Kandel, Mikhail E.; Kouzehgarani, Ghazal N.; Ngyuen, Tan H.; Gillette, Martha U.; Popescu, Gabriel
2017-02-01
Although the contrast generated in transmitted light microscopy is due to the elastic scattering of light, multiple scattering scrambles the image and reduces overall visibility. To image both thin and thick samples, we turn to gradient light interference microscopy (GLIM) to simultaneously measure morphological parameters such as cell mass, volume, and surfaces as they change through time. Because GLIM combines multiple intensity images corresponding to controlled phase offsets between laterally sheared beams, incoherent contributions from multiple scattering are implicitly cancelled during the phase reconstruction procedure. As the interfering beams traverse nearly identical paths, they remain comparable in power and interfere with optimal contrast. This key property lets us obtain tomographic parameters from wide-field z-scans after simple numerical processing. Here we show our results on reconstructing tomograms of bovine embryos, characterizing the time-lapse growth of HeLa cells in 3D, and preliminary results on imaging much larger specimens such as brain slices.
MIMO nonlinear ultrasonic tomography by propagation and backpropagation method.
Dong, Chengdong; Jin, Yuanwei
2013-03-01
This paper develops a fast ultrasonic tomographic imaging method in a multiple-input multiple-output (MIMO) configuration using the propagation and backpropagation (PBP) method. In this method, ultrasonic excitation signals from multiple sources are transmitted simultaneously to probe the objects immersed in the medium, and the scattered signals are recorded by multiple receivers. Using the nonlinear ultrasonic wave propagation equation and the received time-domain scattered signals, the objects are reconstructed iteratively in three steps. First, the propagation step calculates the predicted acoustic potential data at the receivers from an initial guess. Second, the difference signal between the predicted values and the measured data is calculated. Third, the backpropagation step computes updated acoustic potential data by computationally backpropagating the difference signal through the same medium. Unlike the conventional PBP method for tomographic imaging, where each source takes its turn to excite the acoustic field until all the sources have been used, the developed MIMO-PBP method achieves faster image reconstruction by exciting multiple sources simultaneously. Furthermore, we develop an orthogonal waveform signaling method using a waveform delay scheme to reduce the impact of speckle patterns in the reconstructed images. Numerical experiments demonstrate that the proposed MIMO-PBP tomographic imaging method converges faster and achieves superior imaging quality.
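In the linearized (single-scattering) limit, the three-step PBP loop has the structure of a Landweber iteration: propagate the current estimate with the forward operator, form the data residual, and backpropagate it with the adjoint. The sketch below uses a random stand-in linear operator purely for illustration; the paper's actual method iterates the nonlinear wave equation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20)) / np.sqrt(40)    # stand-in forward (propagation) operator
x_true = rng.normal(size=20)                   # object (acoustic potential) to recover
y = A @ x_true                                 # "measured" receiver data

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size that guarantees convergence
for _ in range(2000):
    y_pred = A @ x                  # step 1: propagation predicts receiver data
    resid = y - y_pred              # step 2: difference with the measurement
    x += step * (A.T @ resid)       # step 3: backpropagation (adjoint) updates the object
```

The MIMO speedup in the abstract corresponds to stacking all simultaneous-excitation measurements into one residual per iteration instead of cycling through the sources one at a time.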
NASA Astrophysics Data System (ADS)
Blavier, Marie; Blanco, Leonardo; Glanc, Marie; Pouplard, Florence; Tick, Sarah; Maksimovic, Ivan; Mugnier, Laurent; Chènegros, Guillaume; Rousset, Gérard; Lacombe, François; Pâques, Michel; Le Gargasson, Jean-François; Sahel, José-Alain
2009-02-01
Retinal pathologies such as ARMD or glaucoma need to be detected early, which requires imaging instruments with cellular-scale resolution. However, in vivo studies of retinal cells and early diagnoses are severely limited by the lack of resolution in eye-fundus images from classical ophthalmologic instruments. We built a 2D retinal imager using adaptive optics to improve lateral resolution; this imager is currently used in a clinical environment. We are now developing a time-domain full-field optical coherence tomograph. The first step was to design the image reconstruction algorithms, which were validated on non-biological samples. Ex vivo retinas are currently being imaged. The final step will consist of coupling both setups to acquire high-resolution retinal cross-sections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu
We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP), together with the 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission line (CEL) intensity images, as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography-inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography-inverted result for most of the reconstruction domain except for several regions. In particular, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography-inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly in a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique and present ideas for future developments.
GPS Tomography: Water Vapour Monitoring for Germany
NASA Astrophysics Data System (ADS)
Bender, Michael; Dick, Galina; Wickert, Jens; Raabe, Armin
2010-05-01
Ground-based GPS atmosphere sounding provides numerous atmospheric quantities with high temporal resolution under all weather conditions. The spatial resolution of the GPS observations is mainly determined by the number of GNSS satellites and GPS ground stations. The latter has increased considerably in the last few years, leading to more reliable and better-resolved GPS products. New techniques such as GPS water vapour tomography gain significance as data from large, dense GPS networks become available. GPS tomography has the potential to operationally provide spatially resolved fields of different quantities, i.e., the humidity or wet refractivity required for meteorological applications, or the refractive index, which is important for several space-based observations and for precise positioning. The number of German GPS stations operationally processed by the GFZ in Potsdam was recently enlarged to more than 300. About 28000 IWV observations and more than 1.4 million slant total delay observations are now available per day, with temporal resolutions of 15 min and 2.5 min, respectively. The extended network leads not only to a higher spatial resolution of the tomographically reconstructed 3D fields but also to a much more stable inversion process, and with that to increased quality of the results. Under these improved conditions, GPS tomography can operate continuously over several days or weeks without applying overly tight constraints. Time series of tomographically reconstructed humidity fields will be shown, and different initialisation strategies will be discussed: initialisation with a simple exponential profile, with a 3D humidity field extrapolated from synoptic observations, and with the result of the preceding reconstruction. The results are compared to tomographic reconstructions initialised with COSMO-DE analyses and to the corresponding model fields.
The inversion can be further stabilised by making use of independent, adequately weighted observations, such as synoptic observations or IWV data. The impact of such observations on the quality of the tomographic reconstruction will be discussed, together with different alternatives for weighting the different types of observations.
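The stabilising role of an a priori exponential profile can be illustrated with a damped least-squares toy problem. Everything here is invented for the sketch (random stand-in geometry matrix, made-up refractivity scale heights and noise level); it shows the principle of pulling the solution toward a first-guess profile, not the GFZ processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_slant = 30, 60
z = np.linspace(0.0, 10.0, n_vox)                 # voxel mid-altitudes (km), assumed
x_true = 10.0 * np.exp(-z / 2.5)                  # "true" wet refractivity profile
A = rng.uniform(size=(n_slant, n_vox))            # stand-in slant-path geometry matrix
b = A @ x_true + rng.normal(scale=0.1, size=n_slant)   # noisy slant delay data

# Regularisation toward a simple exponential first guess (wrong scale height)
x0 = 10.0 * np.exp(-z / 2.0)
lam = 0.1
# Damped least squares: minimise ||A x - b||^2 + lam * ||x - x0||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox),
                        A.T @ b + lam * x0)
```

Even with a deliberately imperfect prior, the regularised solution improves on the first guess while keeping the normal equations well conditioned, which is the behaviour the abstract attributes to constrained inversion of the denser network data.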
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
Tomographic Validation of the AWSoM Model of the Inner Corona During Solar Minima
NASA Astrophysics Data System (ADS)
Manchester, W.; Vásquez, A. M.; Lloveras, D. G.; Mac Cormack, C.; Nuevo, F.; Lopez-Fuentes, M.; Frazin, R. A.; van der Holst, B.; Landi, E.; Gombosi, T. I.
2017-12-01
Continuous improvement of MHD three-dimensional (3D) models of the global solar corona, such as the Alfven Wave Solar Model (AWSoM) of the Space Weather Modeling Framework (SWMF), requires testing their ability to reproduce observational constraints at a global scale. To that end, solar rotational tomography based on EUV image time-series can be used to reconstruct the 3D distribution of the electron density and temperature in the inner solar corona (r < 1.25 Rsun). The tomographic results, combined with a global coronal magnetic model, can further provide constraints on the energy input flux required at the coronal base to maintain stable structures. In this work, tomographic reconstructions are used to validate steady-state 3D MHD simulations of the inner corona using the latest version of the AWSoM model. We perform the study for selected rotations representative of solar minimum conditions, when the global structure of the corona is more axisymmetric. We analyse in particular the ability of the MHD simulation to match the tomographic results across the boundary region between the equatorial streamer belt and the surrounding coronal holes. The region is of particular interest as the plasma flow from that zone is thought to be related to the origin of the slow component of the solar wind.
Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith
2016-02-14
Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rate of data sets have exploded, managing, analyzing, and presenting the data collected at synchrotrons has become an increasing challenge. The ALS has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources needed to overcome this challenge. This combination forms an efficient closed loop: data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources; results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam time. In this paper, we present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).
Optimal joule heating of the subsurface
Berryman, James G.; Daily, William D.
1994-01-01
A method for simultaneously heating the subsurface and imaging the effects of the heating. The method combines tomographic imaging (electrical resistance tomography, or ERT), which images the underground electrical resistivity distribution, with Joule heating by electrical currents injected into the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and the injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
NASA Astrophysics Data System (ADS)
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases rapidly with the number of projection rays up to about 180 for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstructions are more accurate than the water vapor concentrations obtained by the traditional concentration calculation method. We also propose a new way to reduce the error of the concentration reconstruction and thereby greatly improve reconstruction quality, and we evaluate its capability using appropriate assessment parameters. This new approach not only greatly improves the concentration reconstruction accuracy but also suggests a parallel-beam arrangement that combines high reconstruction accuracy with simple experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles and is expected to help resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
AOF LTAO mode: reconstruction strategy and first test results
NASA Astrophysics Data System (ADS)
Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin
2016-07-01
GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto one single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present here the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions; in smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those for which TV was designed, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we present results for an electron tomography data set as well as a phantom example, and we make comparisons with discrete tomography approaches.
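The difference between TV and HOTV lies in which finite-difference order the ℓ1 penalty is applied to. A minimal 1D sketch of smoothed-TV denoising by gradient descent follows; it is illustrative only (the paper solves the full regularized tomographic problem, and HOTV would replace the first difference below with a higher-order one). The signal, noise level, and all parameters are invented for the demonstration.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, step=0.05, n_iter=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum(sqrt(d^2 + eps)),
    a smoothed total-variation objective with d the first differences of x."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)      # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                 # each difference touches two samples
        tv_grad[1:] += g
        x -= step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(0)
clean = np.r_[np.zeros(50), np.ones(50)]          # one sharp boundary
noisy = clean + 0.1 * rng.normal(size=100)
denoised = tv_denoise_1d(noisy)
```

The penalty flattens the noise (whose total variation is large) while leaving the single jump largely intact, which is the "piecewise constant with sparse boundaries" behaviour described above; penalizing second differences instead would favour piecewise linear solutions, the HOTV idea.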
Web-based Tool Suite for Plasmasphere Information Discovery
NASA Astrophysics Data System (ADS)
Newman, T. S.; Wang, C.; Gallagher, D. L.
2005-12-01
A suite of tools that enable discovery of terrestrial plasmasphere characteristics from NASA IMAGE Extreme Ultra Violet (EUV) images is described. The tool suite is web-accessible, allowing easy remote access without the need for any software installation on the user's computer. The features supported by the tool suite include reconstruction of the plasmasphere plasma density distribution from a short sequence of EUV images, semi-automated selection of the plasmapause boundary in an EUV image, and mapping of the selected boundary to the geomagnetic equatorial plane. EUV image upload and result download are also supported. The tool suite's plasmapause mapping feature is achieved via the Roelof and Skinner (2000) Edge Algorithm. The plasma density reconstruction is achieved through a tomographic technique that exploits physical constraints to allow for a moderate resolution result. The tool suite's software architecture uses Java Server Pages (JSP) and Java Applets on the front side for user-software interaction and Java Servlets on the server side for task execution. The compute-intensive components of the tool suite are implemented in C++ and invoked by the server via the Java Native Interface (JNI).
Feasibility of RACT for 3D dose measurement and range verification in a water phantom.
Alsanea, Fahed; Moskvin, Vadim; Stantz, Keith M
2015-02-01
The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Starting from the thermoacoustic wave equation, the pressure signals generated by the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and ranges of 16, 20, and 27 cm in water were simulated using Monte Carlo methods. The resulting dosimetric images were reconstructed with a 3D filtered backprojection algorithm from pressure signals acquired by a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were evaluated, after which different noise levels were added to the detector signals (using a 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to investigate the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated with the dose within the Bragg peak. Based on the noise-dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error < 150 μm) for a 1 cGy Bragg peak dose, where the integral dose within the Bragg peak was measured to within 2%. For existing hydrophone detector sensitivities, a Bragg peak dose of 1.6 cGy is possible. This study demonstrates that a computed tomographic scanner based on ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centigray sensitivity.
Translating this technology to the clinic has the potential to significantly impact beam commissioning, treatment verification during particle beam therapy, and image-guided techniques.
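The 3D filtered backprojection used here follows the same two steps as the classic 2D Radon inversion: ramp-filter each projection, then smear it back across the grid. A hedged 2D parallel-beam sketch of the principle (not the authors' 3D implementation):

```python
import numpy as np

def ramp_filter(sino):
    """Apply the ramp (Ram-Lak) filter to each projection row in Fourier space."""
    n = sino.shape[1]
    f = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * f, axis=1))

def backproject(sino, angles):
    """Smear each filtered projection back across the image grid and sum."""
    n = sino.shape[1]
    grid = np.arange(n) - n // 2
    xx, yy = np.meshgrid(grid, grid)
    img = np.zeros((n, n))
    for proj, theta in zip(sino, angles):
        # detector coordinate of every pixel for this viewing angle
        t = xx * np.cos(theta) + yy * np.sin(theta) + n // 2
        img += np.interp(t, np.arange(n), proj)
    return img * np.pi / len(angles)
```

Feeding this a sinogram whose every view is a delta at the central detector bin reconstructs a point source at the image center, which is a handy sanity check for the geometry conventions.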
Image intensifier-based volume tomographic angiography imaging system: system evaluation
NASA Astrophysics Data System (ADS)
Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.
1995-05-01
An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. The system uses an image intensifier coupled to a TV camera as a two-dimensional detector, so that a set of two-dimensional projections can be acquired for direct three-dimensional (3D) reconstruction. The system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.
Tomographic determination of the power distribution in electron beams
Teruya, Alan T.; Elmer, John W.
1996-01-01
A tomographic technique determines the power distribution of an electron beam by using electron beam profile data, acquired from a modified Faraday cup, to create an image of the current density in high and low power beams. A refractory metal disk with a number of radially extending slits is placed above a Faraday cup. The beam is swept in a circular pattern so that its path crosses each slit in a perpendicular manner, thus acquiring all the data needed for a reconstruction in one circular sweep. Also, a single computer is used to generate the signals actuating the sweep, to acquire the data, and to do the reconstruction, thus reducing the time and equipment necessary to complete the process.
Tomographic determination of the power distribution in electron beams
Teruya, A.T.; Elmer, J.W.
1996-12-10
A tomographic technique for determining the power distribution of an electron beam, using electron beam profile data acquired from a modified Faraday cup to create an image of the current density in high and low power beams, is disclosed. A refractory metal disk with a number of radially extending slits is placed above a Faraday cup. The beam is swept in a circular pattern so that its path crosses each slit in a perpendicular manner, thus acquiring all the data needed for a reconstruction in one circular sweep. Also, a single computer is used to generate the signals actuating the sweep, to acquire the data, and to do the reconstruction, thus reducing the time and equipment necessary to complete the process.
Impact of Time-of-Flight on PET Tumor Detection
Kadrmas, Dan J.; Casey, Michael E.; Conti, Maurizio; Jakoby, Bjoern W.; Lois, Cristina; Townsend, David W.
2009-01-01
Time-of-flight (TOF) PET uses very fast detectors to improve localization of events along coincidence lines-of-response. This information is then utilized to improve the tomographic reconstruction. This work evaluates the effect of TOF upon an observer's performance for detecting and localizing focal warm lesions in noisy PET images. Methods: An advanced anthropomorphic lesion-detection phantom was scanned 12 times over 3 days on a prototype TOF PET/CT scanner (Siemens Medical Solutions). The phantom was devised to mimic whole-body oncologic 18F-FDG PET imaging, and a number of spheric lesions (diameters 6–16 mm) were distributed throughout the phantom. The data were reconstructed with the baseline line-of-response ordered-subsets expectation-maximization algorithm, with the baseline algorithm plus point spread function model (PSF), baseline plus TOF, and with both PSF+TOF. The lesion-detection performance of each reconstruction was compared and ranked using localization receiver operating characteristics (LROC) analysis with both human and numeric observers. The phantom results were then subjectively compared to 2 illustrative patient scans reconstructed with PSF and with PSF+TOF. Results: Inclusion of TOF information provides a significant improvement in the area under the LROC curve compared to the baseline algorithm without TOF data (P = 0.002), providing a degree of improvement similar to that obtained with the PSF model. Use of both PSF+TOF together provided a cumulative benefit in lesion-detection performance, significantly outperforming either PSF or TOF alone (P < 0.002). Example patient images reflected the same image characteristics that gave rise to improved performance in the phantom data. Conclusion: Time-of-flight PET provides a significant improvement in observer performance for detecting focal warm lesions in a noisy background.
These improvements in image quality can be expected to improve performance for the clinical tasks of detecting lesions and staging disease. Further study in a large clinical population is warranted to assess the benefit of TOF for various patient sizes and count levels, and to demonstrate effective performance in the clinical environment. PMID:19617317
A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.
van Dongen, Koen W A; Wright, William M D
2006-10-01
Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
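The conjugate gradient inversion referred to here iteratively minimizes the linearized (Born) data misfit. As a hedged illustration, the core CG loop for a symmetric positive-definite system (the paper's scattering operator and any preconditioning are not reproduced):

```python
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-12):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate update of the direction
        rs = rs_new
    return x
```

In an inversion setting, `A` would play the role of the Gauss-Newton normal operator built from the Born-linearized forward model, usually applied matrix-free rather than stored explicitly.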
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and quantification of reconstruction parameters in a physically meaningful way rather than by empirical trial and error. In addition, for large-scale optimization problems, first order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
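The POCS idea underlying these methods can be illustrated with two simple convex sets: the affine set of measurement-consistent images and the nonnegative orthant. A toy alternating-projection loop follows; it is not the paper's FS-POCS (which additionally projects onto a bounded-TV set), and the matrices are random stand-ins:

```python
import numpy as np

def project_affine(x, A, b, pinvA):
    """Exact Euclidean projection onto the affine set {x : A x = b}."""
    return x - pinvA @ (A @ x - b)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))   # stand-in measurement matrix
x_true = rng.random(10)            # nonnegative ground truth
b = A @ x_true
pinvA = np.linalg.pinv(A)

x = np.zeros(10)
for _ in range(2000):
    x = project_affine(x, A, b, pinvA)  # enforce data consistency
    x = np.maximum(x, 0.0)              # project onto the nonnegative orthant
```

Because both sets are convex and their intersection is nonempty, the alternating projections converge to a point satisfying both constraints, which is the feasibility viewpoint the abstract advocates over penalized objectives.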
Tomographic imaging of transparent biological samples using the pyramid phase microscope
Iglesias, Ignacio
2016-01-01
We show how a pyramid phase microscope can be used to obtain tomographic information of the spatial variation of refractive index in biological samples using the Radon transform. A method that uses the information provided by the phase microscope for axial and lateral repositioning of the sample when it rotates is also described. Its application to the reconstruction of mouse embryos in the blastocyst stage is demonstrated. PMID:27570696
Medical ultrasonic tomographic system
NASA Technical Reports Server (NTRS)
Heyser, R. C.; Lecroissette, D. H.; Nathan, R.; Wilson, R. L.
1977-01-01
An electro-mechanical scanning assembly was designed and fabricated for the purpose of generating an ultrasound tomogram. A low cost modality was demonstrated in which analog instrumentation methods formed a tomogram on photographic film. Successful tomogram reconstructions were obtained on in vitro test objects by using the attenuation of the first-path ultrasound signal as it passed through the test object. Tomographic methods used in X-ray analysis for nearly half a century were verified as being useful for ultrasound imaging.
A data acquisition and control system for high-speed gamma-ray tomography
NASA Astrophysics Data System (ADS)
Hjertaker, B. T.; Maad, R.; Schuster, E.; Almås, O. A.; Johansen, G. A.
2008-09-01
A data acquisition and control system (DACS) for high-speed gamma-ray tomography based on the USB (Universal Serial Bus) and Ethernet communication protocols has been designed and implemented. The high-speed gamma-ray tomograph comprises five 500 mCi 241Am gamma-ray sources, each with a principal energy of 59.5 keV, matched to five detector modules, each consisting of 17 CdZnTe detectors. The DACS design is based on Microchip's PIC18F4550 and PIC18F4620 microcontrollers, which provide a USB 2.0 interface and an Ethernet (IEEE 802.3) interface, respectively. By implementing the USB- and Ethernet-based DACS, a sufficiently high data acquisition rate is obtained and no dedicated hardware installation is required for the data acquisition computer, assuming that it is already equipped with a standard USB and/or Ethernet port. The API (Application Programming Interface) for the DACS is built on National Instruments' LabVIEW® graphical development tool, which provides a simple and robust foundation for further application software development for the tomograph. The data acquisition interval, i.e. the integration time, of the high-speed gamma-ray tomograph is user selectable and is a function of the statistical measurement accuracy required for the specific application. The bandwidth of the DACS is 85 kBytes s⁻¹ for the USB communication protocol and 28 kBytes s⁻¹ for the Ethernet protocol. When using the iterative least squares technique reconstruction algorithm with a 1 ms integration time, the USB-based DACS provides an online image update rate of 38 Hz, i.e. 38 frames per second, and the Ethernet-based DACS 31 Hz. The off-line image update rate (storage to disk) for the USB-based DACS is 278 Hz using a 1 ms integration time. Initial characterization of the high-speed gamma-ray tomograph using the DACS on polypropylene phantoms is presented in the paper.
Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity
NASA Astrophysics Data System (ADS)
Iraji, D.; Furno, I.; Fasoli, A.; Theiler, C.
2010-12-01
In the TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], a simple magnetized plasma device, low frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and to study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast framing Photron-APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparison of the statistical, spectral, and spatial properties of the visible light radiation with those of the electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary as (Te)^α(ne)^β, with α = 0.25-0.7 and β = 0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to fast camera movies acquired at 50 kframes/s with 2 μs exposure time to obtain the temporal evolution of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure the sizes of structures associated with the interchange mode. Fourier analysis in ω and in two-dimensional k-space of the reconstructed emissivity fluctuations shows the same interchange mode that is detected in the ω and k spectra of the ion saturation current fluctuations measured by probes.
Small scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity movies with the spatial resolution down to 2 cm, well beyond the spatial resolution of the probe array.
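The pixel-method reconstruction solves an overdetermined linear system m = G e, where row i of G holds the chord length of line of sight i through each pixel, e is the unknown emissivity vector, and m the line-integrated camera measurements. A hedged sketch of the SVD solve (G below is a random stand-in for the real geometry matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
n_chords, n_pixels = 40, 16
G = rng.random((n_chords, n_pixels))    # stand-in geometry (chord-length) matrix
e_true = rng.random(n_pixels)           # stand-in emissivity on a 4x4 grid
m = G @ e_true                          # line-integrated measurements

# Truncated SVD solve: discard tiny singular values that would amplify noise.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 1e-10 * s[0]
e_rec = Vt[keep].T @ ((U[:, keep].T @ m) / s[keep])
```

For noisy camera data the truncation threshold (or an equivalent regularization) is what controls the trade-off between spatial resolution and noise amplification.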
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Lijun, E-mail: lijunxu@buaa.edu.cn; Liu, Chang; Jing, Wenyang
2016-01-15
To monitor two-dimensional (2D) distributions of temperature and H₂O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H₂O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are 7444.36 cm⁻¹ (1343.3 nm) and 7185.6 cm⁻¹ (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H₂O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalence ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H₂O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
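The Landweber algorithm reconstructs the unknown field x from integrated absorbances b = A x by repeated steepest-descent steps. The authors' specific modification is not given in the abstract, so the sketch below shows the textbook iteration with a nonnegativity projection, a common modification for absorption tomography; A is a random stand-in for the ray-path matrix:

```python
import numpy as np

def landweber(A, b, iters=10000, relax=None):
    """Projected Landweber iteration: x <- max(0, x + w * A^T (b - A x))."""
    if relax is None:
        # any 0 < w < 2 / sigma_max(A)^2 guarantees convergence
        relax = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(x + relax * (A.T @ (b - A @ x)), 0.0)
    return x
```

With only 60 ray measurements across five views the system is poorly conditioned, which is exactly the regime where stopping the Landweber iteration early acts as regularization.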
Ghost hunting—an assessment of ghost particle detection and removal methods for tomographic-PIV
NASA Astrophysics Data System (ADS)
Elsinga, G. E.; Tokgoz, S.
2014-08-01
This paper discusses and compares several methods, which aim to remove spurious peaks, i.e. ghost particles, from the volume intensity reconstruction in tomographic-PIV. The assessment is based on numerical simulations of time-resolved tomographic-PIV experiments in linear shear flows. Within the reconstructed volumes, intensity peaks are detected and tracked over time. These peaks are associated with particles (either ghosts or actual particles) and are characterized by their peak intensity, size and track length. Peak intensity and track length are found to be effective in discriminating between most ghosts and the actual particles, although not all ghosts can be detected using only a single threshold. The size of the reconstructed particles does not reveal an important difference between ghosts and actual particles. The joint distribution of peak intensity and track length however does, under certain conditions, allow a complete separation of ghosts and actual particles. The ghosts can have either a high intensity or a long track length, but not both combined, like all the actual particles. Removing the detected ghosts from the reconstructed volume and performing additional MART iterations can decrease the particle position error at low to moderate seeding densities, but increases the position error, velocity error and tracking errors at higher densities. The observed trends in the joint distribution of peak intensity and track length are confirmed by results from a real experiment in laminar Taylor-Couette flow. This diagnostic plot allows an estimate of the number of ghosts that are indistinguishable from the actual particles.
SPECTRE (www.noveltis.fr/spectre): a web Service for Ionospheric Products
NASA Astrophysics Data System (ADS)
Jeansou, E.; Crespon, F.; Garcia, R.; Helbert, J.; Moreaux, G.; Lognonne, P.
2005-12-01
The dense GPS networks developed for geodetic applications turn out to be very efficient ionospheric sensors because of the interaction between the plasma and electromagnetic waves. Indeed, dual frequency receivers provide data from which the Slant Total Electron Content (STEC) can be easily extracted to compute Vertical Total Electron Content (VTEC) maps. The SPECTRE project, Service and Products for ionospheric Electron Content and Tropospheric Refractivity over Europe, is currently a pre-operational service providing VTEC maps with high time and space resolution after a 3-day delay (http://www.noveltis.fr/spectre and http://ganymede.ipgp.jussieu.fr/spectre). This project is part of SWENET, the SpaceWeather European Network, initiated by the European Space Agency. The SPECTRE data products are useful for many applications. We will present these applications in terms of interest for the scientific community, with a special focus on space weather and transient ionospheric perturbations related to earthquakes. Moreover, the pre-operational extensions of SPECTRE to the Californian (SCIGN/BARD) and Japanese (GEONET) dense GPS networks will be presented. Then the method of 3D tomography of the electron density from GPS data will be presented and its resolution discussed. The expected improvements of the 3D tomographic images by new tomographic reconstruction algorithms and by the advent of the Galileo system will conclude the presentation.
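The STEC-to-VTEC conversion mentioned here is commonly done with a thin-shell (single-layer) mapping function; the shell height below is an assumed typical value, not a SPECTRE parameter:

```python
import numpy as np

R_E = 6371.0   # Earth radius, km
H_ION = 350.0  # assumed thin-shell ionosphere height, km (illustrative)

def vtec_from_stec(stec, zenith_deg):
    """Thin-shell mapping: VTEC = STEC * cos(z'), where z' is the zenith
    angle at the ionospheric pierce point (standard single-layer model)."""
    z = np.radians(zenith_deg)
    sin_zp = R_E / (R_E + H_ION) * np.sin(z)
    return stec * np.sqrt(1.0 - sin_zp ** 2)

print(vtec_from_stec(10.0, 0.0))  # → 10.0 (vertical ray: VTEC equals STEC)
```

Full 3D tomography, as proposed at the end of the abstract, replaces this single-shell assumption with a voxel model of electron density constrained by many slant paths.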
Optimal joule heating of the subsurface
Berryman, J.G.; Daily, W.D.
1994-07-05
A method for simultaneously heating the subsurface and imaging the effects of the heating is disclosed. This method combines the use of tomographic imaging (electrical resistance tomography, or ERT) to image the electrical resistivity distribution underground with joule heating by electrical currents injected into the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
Wang, Dengjiang; Zhang, Weifang; Wang, Xiangyu; Sun, Bo
2016-01-01
This study presents a novel monitoring method for hole-edge corrosion damage in plate structures based on Lamb wave tomographic imaging techniques. An experimental procedure with a cross-hole layout using 16 piezoelectric transducers (PZTs) was designed. The A0 mode of the Lamb wave was selected, which is sensitive to thickness-loss damage. The iterative algebraic reconstruction technique (ART) method was used to locate and quantify the corrosion damage at the edge of the hole. Hydrofluoric acid with a concentration of 20% was used to corrode the specimen artificially. To estimate the effectiveness of the proposed method, the real corrosion damage was compared with the predicted corrosion damage based on the tomographic method. The results show that the Lamb-wave-based tomographic method can be used to monitor the hole-edge corrosion damage accurately. PMID:28774041
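The iterative ART used for localization is, at its core, the Kaczmarz method: each ray measurement b_i = a_i · x defines a hyperplane, and the estimate is projected onto each hyperplane in turn. A minimal hedged sketch (the ray-path matrix below is a random stand-in, not the Lamb-wave cross-hole geometry):

```python
import numpy as np

def art_sweep(x, A, b, relax=1.0):
    """One ART (Kaczmarz) sweep: project x onto each ray equation in turn."""
    for a_i, b_i in zip(A, b):
        x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 9))   # stand-in ray-path matrix: 30 rays, 3x3 grid
x_true = rng.random(9)             # stand-in thickness-loss / damage map
b = A @ x_true
x = np.zeros(9)
for _ in range(500):               # cyclic sweeps until convergence
    x = art_sweep(x, A, b)
```

With a consistent, full-rank system the sweeps converge to the unique image; with noisy travel-time or amplitude data, the relaxation factor and the number of sweeps control how strongly the data are fitted.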
Fluorescence and diffusive wave diffraction tomographic probes in turbid media
NASA Astrophysics Data System (ADS)
Li, Xingde
1998-10-01
Light transport over long distances in tissue-like highly scattering media is well approximated as a diffusive process. Diffusing photons can be used to detect, localize and characterize non-invasively optical inhomogeneities such as tumors and hematomas embedded in thick biological tissue. Most of the contrast relies on the endogenous optical property differences between the inhomogeneities and the surrounding media. Recently, exogenous fluorescent contrast agents have been considered as a means to enhance the sensitivity and specificity of tumor detection. In the first part of the thesis (Chapters 2 and 3), a theoretical basis is established for modeling the transport of fluorescent photons in highly scattering media. Fluorescent Diffuse Photon Density Waves (FDPDW) are used to describe the transport of fluorescent photons. A detailed analysis based upon a practical signal-to-noise model was used to assess the utility of the fluorescent method. The analysis reveals that a small heterogeneity, embedded in deep tissue-like turbid media with biologically relevant parameters, and with a practically achievable 5-fold fluorophore concentration contrast, can be detected and localized when its radius is greater than 0.2 cm, and can be characterized when its radius is greater than 0.7 cm. In vivo and preliminary clinical studies demonstrate the feasibility of using FDPDWs for tumor diagnosis. Optical imaging with diffusing photons is challenging. Many of the imaging algorithms developed so far are either fundamentally incorrect, as in the case of the back-projection approach, or require a huge amount of computational resources and CPU time. In the second part of the thesis (Chapter 4), a fast, K-space diffraction tomographic imaging algorithm based upon spatial angular spectrum analysis is derived and applied.
Absolute optical properties of thin inhomogeneities and relative optical properties of spatially extended inhomogeneities are reconstructed within a sub-second time scale. Phantom experiments have demonstrated the power of the K-space algorithm and preliminary clinical investigations have exhibited its potential for real time optical diagnosis and imaging of breast cancer.
Optimal reconstruction of the states in qutrit systems
NASA Astrophysics Data System (ADS)
Yan, Fei; Yang, Ming; Cao, Zhuo-Liang
2010-10-01
Based on mutually unbiased measurements, an optimal tomographic scheme for multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as optimal. By optimal we mean that the number of required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how the different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all these transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.
NASA Astrophysics Data System (ADS)
Bykov, A. V.; Kirillin, M. Yu; Priezzhev, A. V.
2005-11-01
Model signals from one and two plane flows of a particle suspension are obtained for an optical coherence Doppler tomograph (OCDT) by the Monte-Carlo method. The optical properties of particles mimic the properties of non-aggregating erythrocytes. The flows are considered in a stationary scattering medium with optical properties close to those of the skin. It is shown that, as the flow position depth increases, the flow velocity determined from the OCDT signal becomes smaller than the specified velocity and the reconstructed profile extends in the direction of the distant boundary, which is accompanied by the shift of its maximum. In the case of two flows, an increase in the velocity of the near-surface flow leads to the overestimated values of velocity of the reconstructed profile of the second flow. Numerical simulations were performed by using a multiprocessor parallel-architecture computer.
NASA Astrophysics Data System (ADS)
Kunitsyn, V.; Nesterov, I.; Andreeva, E.; Zelenyi, L.; Veselov, M.; Galperin, Y.; Buchner, J.
A satellite radiotomography method for electron density distributions was recently proposed for a closely-spaced multi-spacecraft group of high-altitude satellites to study the physics of the reconnection process. The original idea of the ROY project is to use a constellation of spacecraft (one main satellite and several subsatellites) in order to carry out closely-spaced multipoint measurements and 2D tomographic reconstruction of electron density in the space between the main satellite and the subsatellites. The distances between the satellites were chosen to vary from dozens to a few hundred kilometers. The easiest data interpretation is achieved when the subsatellites are placed along the plasma streamline. Then, whenever a plasma density irregularity moves between the main satellite and the subsatellites, it will be scanned in different directions and we can get a 2D distribution of plasma using these projections. However, in general the subsatellites are not placed exactly along the plasma streamline. The method of plasma velocity determination relative to multi-spacecraft systems is considered. Possibilities of 3D tomographic imaging using multi-spacecraft systems are analyzed. The modeling has shown that an efficient scheme for 3D tomographic imaging would be to place spacecraft in different planes so that the angle between the planes would be no more than ten degrees. Work is supported by INTAS project 2000-465.
Longitudinal phase space tomography using a booster cavity at PITZ
NASA Astrophysics Data System (ADS)
Malyutin, D.; Gross, M.; Isaev, I.; Khojoyan, M.; Kourkafas, G.; Krasilnikov, M.; Marchetti, B.; Otevrel, M.; Stephan, F.; Vashchenko, G.
2017-11-01
The knowledge of the longitudinal phase space (LPS) of electron beams is of great importance for optimizing the performance of high brightness photo injectors. To get the longitudinal phase space of an electron bunch in a linear accelerator a tomographic technique can be used. The method is based on measurements of the bunch momentum spectra while varying the bunch energy chirp. The energy chirp can be varied by one of the RF accelerating structures in the accelerator and the resulting momentum distribution can be measured with a dipole spectrometer further downstream. As a result, the longitudinal phase space can be reconstructed. Application of the tomographic technique for reconstruction of the longitudinal phase space is introduced in detail in this paper. Measurement results from the PITZ facility are shown and analyzed.
Portable imaging system method and apparatus
Freifeld, Barry M.; Kneafsley, Timothy J.; Pruess, Jacob; Tomutsa, Liviu; Reiter, Paul A.; deCastro, Ted M.
2006-07-25
An operator shielded X-ray imaging system has sufficiently low mass (less than 300 kg) and is compact enough to enable portability by reducing operator shielding requirements to a minimum shielded volume. The resultant shielded volume may require a relatively small mass of shielding in addition to the already integrally shielded X-ray source, intensifier, and detector. The system is suitable for portable imaging of well cores at remotely located well drilling sites. The system accommodates either small samples, or small cross-sectioned objects of unlimited length. By rotating samples relative to the imaging device, the information required for computer aided tomographic reconstruction may be obtained. By further translating the samples relative to the imaging system, fully three dimensional (3D) tomographic reconstructions may be obtained of samples having arbitrary length.
2012-03-28
Comberiate, Joseph M.
A tomographic reconstruction technique was modified and applied to SSUSI data to reconstruct three-dimensional cubes of ionospheric electron density, supporting studies of scintillation and bubble climatology.
Hartmann, C E A; Branford, O A; Malhotra, A; Chana, J S
2013-07-01
The latissimus dorsi flap, first performed by Tansini in 1892, was popularised by Olivari in 1976. Successful transfer of a latissimus dorsi flap during breast reconstruction has previously been thought to depend on an intact thoracodorsal pedicle to ensure flap survival. It is well documented that the flap may also survive on the serratus branch after thoracodorsal pedicle division. We report the case of a 52-year-old female patient who underwent successful delayed breast reconstruction with a latissimus dorsi flap following previous mastectomy and axillary node clearance. Intraoperatively, the thoracodorsal pedicle and serratus branch were found to have been previously divided. On postoperative computed tomographic angiography the thoracodorsal pedicle was shown to be divided together with the serratus branch, and the flap was seen to be supplied by the lateral thoracic artery. To our knowledge, survival of a pedicled latissimus dorsi flap in breast reconstruction supplied by this vessel following thoracodorsal pedicle division has not previously been described. Previous thoracodorsal pedicle and serratus branch division may not be an absolute contraindication to use of the latissimus dorsi flap in breast reconstruction, depending on the results of preoperative Doppler or computed tomographic angiography studies. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
PSF reconstruction validated using on-sky CANARY data in MOAO mode
NASA Astrophysics Data System (ADS)
Martin, O. A.; Correia, C. M.; Gendron, E.; Rousset, G.; Gratadour, D.; Vidal, F.; Morris, T. J.; Basden, A. G.; Myers, R. M.; Neichel, B.; Fusco, T.
2016-07-01
CANARY is an open-loop tomographic adaptive optics (AO) demonstrator designed for use at the 4.2 m William Herschel Telescope (WHT) in La Palma. Gearing up for extensive statistical studies of high-redshift galaxies surveyed with Multi-Object Spectrographs (MOS), CANARY was designed to tackle technical challenges related to open-loop AO control with mixed Natural Guide Star (NGS) and Laser Guide Star (LGS) tomography. We have developed a Point Spread Function (PSF) reconstruction algorithm dedicated to MOAO systems that uses system telemetry to estimate the PSF potentially anywhere in the observed field, a prerequisite for deconvolving AO-corrected science observations in Integral Field Spectroscopy (IFS). Moreover, the ability to accurately reconstruct the PSF reflects a broad and fine-grained understanding of the residual error contributors, both atmospheric and opto-mechanical. In this paper we compare the classical PSF-reconstruction approach of Véran (1), taken as the on-axis reference using the truth-sensor telemetry, to an approach tailored to atmospheric tomography that handles the off-axis data only. We have post-processed over 450 on-sky CANARY data sets, for which we observe 92% and 88% correlation between the reconstructed and on-sky values of the Strehl Ratio (SR) and Full Width at Half Maximum (FWHM), respectively. The reference method achieves 95% and 92.5% by directly exploiting the measurements of the residual phase from the CANARY Truth Sensor (TS).
Advanced prior modeling for 3D bright field electron tomography
NASA Astrophysics Data System (ADS)
Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.
2015-03-01
Many important imaging problems in materials science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms such as non-local means and BM3D, which are known to successfully capture non-local redundancies in images. But because these denoising operations are not explicitly formulated as cost functions, it is unclear how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed and that edges are preserved. We also report lower RMSE values than the conventional MBIR reconstruction using qGGMRF as the prior model.
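The plug-and-play decoupling can be sketched in one dimension. In this toy, a moving-average filter stands in for the 3D non-local means denoiser, the forward model is the identity (pure denoising), and the signal and parameter values are assumptions; the point is only that the denoiser enters ADMM as a black box:

```python
import numpy as np

def box_denoise(z, w=5):
    """Stand-in plug-and-play denoiser: a simple moving average.
    (The paper uses 3D non-local means; any denoiser slots in here.)"""
    return np.convolve(z, np.ones(w) / w, mode="same")

def pnp_admm(y, sigma2, rho, n_iter=15):
    """Plug-and-play ADMM for the toy model y = x + noise: the data term
    has a closed-form proximal step, and the prior is replaced by the
    black-box denoiser applied to x + u."""
    x = y.copy(); v = y.copy(); u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y / sigma2 + rho * (v - u)) / (1.0 / sigma2 + rho)  # data-term prox
        v = box_denoise(x + u)                                   # prior via denoiser
        u = u + x - v                                            # dual update
    return v

rng = np.random.default_rng(0)
truth = np.zeros(200); truth[60:140] = 1.0          # piecewise-constant "edge" signal
y = truth + 0.3 * rng.standard_normal(200)          # noisy measurement
est = pnp_admm(y, sigma2=0.3**2, rho=1.0 / 0.3**2)  # rho balanced against data weight
print(np.mean((y - truth) ** 2), np.mean((est - truth) ** 2))
```

The data-term update has a closed form; swapping `box_denoise` for any other denoiser changes nothing else, which is the appeal of the framework described in the abstract.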
NASA Astrophysics Data System (ADS)
Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.
2014-09-01
The purpose of this study is to derive optimized parameters for a detector module employing an off-the-shelf X-ray camera and a pinhole array collimator applicable to a range of different SPECT systems. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were performed to estimate the performance of the pinhole array collimators, which was compared to that of a low energy high resolution (LEHR) parallel-hole collimator in a four-head SPECT system. A detector module was simulated with a 48 mm by 48 mm active area and 1 mm, 1.6 mm and 2 mm pinhole aperture sizes at 0.48 mm pitch on a tungsten plate. Perpendicular lead septa were employed to compare overlapping and non-overlapping projections against a proper acceptance angle without lead septa. A uniform cylindrical water phantom was used to evaluate the performance of the proposed four-head SPECT system with the pinhole array detector module. For each head, 100 pinhole configurations were evaluated based on sensitivity and detection efficiency for 140 keV γ-rays, and compared to the LEHR parallel-hole collimator. SPECT images were reconstructed with a filtered back projection (FBP) algorithm; neither scatter nor attenuation corrections were performed. Development of a reconstruction algorithm better suited to this specific system is in progress; nevertheless, the activity distribution was well visualized using the backprojection algorithm. In this study, we have carried out several quantitative and comparative analyses of a pinhole array imaging system that provides high detection efficiency and better system sensitivity over a large FOV compared to the conventional four-head SPECT system. The proposed detector module is expected to provide improved performance in various SPECT imaging applications.
NASA Astrophysics Data System (ADS)
Alinaghi, Alireza; Koulakov, Ivan; Thybo, Hans
2007-06-01
The inverse tomography method has been used to study the P- and S-wave velocity structure of the crust and upper mantle underneath Iran. The method, based on the principle of source-receiver reciprocity, allows tomographic studies of regions with a sparse distribution of seismic stations provided the region has sufficient seismicity. The arrival times of body waves from earthquakes in the study area, as reported in the ISC catalogue (1964-1996) at all available epicentral distances, are used to calculate residual arrival times. Prior to inversion we relocated hypocentres based on a 1-D spherical Earth model, taking into account variable crustal thickness and surface topography. During the inversion, seismic sources are further relocated simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the power of the algorithm and the data to reconstruct introduced anomalies using the ray paths of the real data set, taking into account measurement errors and outliers. The velocity anomalies show that the crust and upper mantle beneath the Iranian Plateau comprise a low-velocity domain between the Arabian Plate and the Caspian Block. This agrees with global tomographic models, and also with tectonic models in which the active Iranian Plateau is trapped between the stable Turan Plate in the north and the Arabian Shield in the south. Our results show clear evidence of mainly aseismic subduction of the oceanic crust of the Oman Sea underneath the Iranian Plateau. However, along the Zagros suture zone the subduction pattern is more complex than at Makran, where the collision of the two plates is highly seismic.
Scanning transmission electron microscopy through-focal tilt-series on biological specimens.
Trepout, Sylvain; Messaoudi, Cédric; Perrot, Sylvie; Bastin, Philippe; Marco, Sergio
2015-10-01
Since scanning transmission electron microscopy can produce high signal-to-noise ratio bright-field images of thick (≥500 nm) specimens, this tool is emerging as the method of choice for studying thick biological samples via tomographic approaches. However, in a convergent-beam configuration, the depth of field is limited because only a thin portion of the specimen (from a few nanometres to tens of nanometres, depending on the convergence angle) can be imaged in focus. A method known as through-focal imaging enables recovery of the full depth of information by combining images acquired at different levels of focus. In this work, we compare tomographic reconstruction using the through-focal tilt-series approach (a multifocal series of images per tilt angle) with reconstruction using the classic tilt-series acquisition scheme (a single-focus image per tilt angle). We visualised the base of the flagellum in the protist Trypanosoma brucei via an acquisition and image-processing method tailored to obtain quantitative and qualitative descriptors of the reconstruction volumes. Reconstructions using through-focal imaging contained more contrast and more detail for thick (≥500 nm) biological samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fang, Jing-Jing; Liu, Jia-Kuang; Wu, Tzu-Chieh; Lee, Jing-Wei; Kuo, Tai-Hong
2013-05-01
Computer-aided design has gained increasing popularity in clinical practice, and the advent of rapid prototyping technology has further enhanced the quality and predictability of surgical outcomes by providing target guides for complex bony reconstruction during surgery, allowing surgeons to restore fractures efficiently and precisely. Based on three-dimensional models generated from a computed tomographic scan, precise preoperative planning and simulation on a computer are possible. Combining the interdisciplinary knowledge of surgeons and engineers, this study proposes a novel surgical guidance method that incorporates a built-in occlusal wafer serving as the positioning reference. Two patients with complex facial deformity and severe facial asymmetry were recruited. In vitro facial reconstruction was first rehearsed on physical models, on which a customized surgical guide incorporating a built-in occlusal stent as the positioning reference was designed to implement the surgical plan. This study presents the authors' preliminary experience with a complex facial reconstruction procedure. It suggests that in less well-equipped settings, where intraoperative computed tomographic scans or navigation systems are not available, this approach could be an effective, expedient, and straightforward aid to enhance surgical outcome in complex facial repair.
Tomographic wavefront retrieval by combined use of geometric and plenoptic sensors
NASA Astrophysics Data System (ADS)
Trujillo-Sevilla, J. M.; Rodríguez-Ramos, L. F.; Fernández-Valdivia, Juan J.; Marichal-Hernández, José G.; Rodríguez-Ramos, J. M.
2014-05-01
Modern astronomical telescopes take advantage of multi-conjugate adaptive optics, in which wavefront sensors play a key role. A single sensor capable of measuring wavefront phases at any angle of observation would help improve atmospheric tomographic reconstruction. A new sensor combining both geometric and plenoptic arrangements is proposed, and a simulation demonstrating its working principle is shown. Results show that this sensor is feasible and that single extended objects can be used to perform tomography of atmospheric turbulence.
NASA Astrophysics Data System (ADS)
Lavalle, M.; Hensley, S.; Lou, Y.; Saatchi, S. S.; Pinto, N.; Simard, M.; Fatoyinbo, T. E.; Duncanson, L.; Dubayah, R.; Hofton, M. A.; Blair, J. B.; Armston, J.
2016-12-01
In this paper we explore the derivation of canopy height and vertical structure from polarimetric-interferometric SAR (PolInSAR) data collected during the 2016 AfriSAR campaign in Gabon. AfriSAR is a joint effort between NASA and ESA to acquire multi-baseline L- and P-band radar data, lidar data and field data over tropical forest and savannah sites to support calibration, validation and algorithm development in preparation for the NISAR, GEDI and BIOMASS missions. Here we focus on the L-band UAVSAR dataset acquired over the Lope National Park in Central Gabon to demonstrate mapping of canopy height and vertical structure using PolInSAR and tomographic techniques. The Lope site features a natural gradient of forest biomass from the forest-savanna boundary (< 100 Mg/ha) to dense undisturbed humid tropical forests (> 400 Mg/ha). Our dataset includes 9 long-baseline, full-polarimetric UAVSAR acquisitions along with field and lidar data from the Laser Vegetation Ice Sensor (LVIS). We first present a brief theoretical background of the PolInSAR and tomographic techniques. We then show the results of our PolInSAR algorithms, which create maps of canopy height via inversion of the random-volume-over-ground (RVoG) and random-motion-over-ground (RMoG) models. In our approach multiple interferometric baselines are merged incoherently to maximize the interferometric sensitivity over a broad range of tree heights. Finally we show how traditional tomographic algorithms are used to retrieve the full vertical canopy profile. We compare the results from the different PolInSAR/tomographic algorithms to validation data derived from lidar and field data.
A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations
NASA Astrophysics Data System (ADS)
Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.
2017-04-01
Multi-dimensional reconstruction techniques for Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being only weakly dependent on planetary field models, and it does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or any point sources from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet, and how it can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and to assist in evaluating the background noise from the radiation environment in the atmospheric measurements.
Tan, A C; Richards, R
1989-01-01
Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
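The Z-buffer depth-shading idea described above, deriving a shaded view from stored depth values and a movable light source, can be sketched with a small Lambertian model. The hemisphere depth map and the light vector are illustrative assumptions, not the authors' renderer:

```python
import numpy as np

def shade_zbuffer(z, light=(0.0, 0.0, 1.0)):
    """Lambertian shading of a Z-buffer: estimate surface normals from
    depth gradients and dot them with a unit light direction."""
    gy, gx = np.gradient(z.astype(float))
    # the surface normal of z = f(x, y) is (-df/dx, -df/dy, 1), normalised
    n = np.dstack((-gx, -gy, np.ones_like(z, dtype=float)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)   # shaded intensity in [0, 1]

# hemisphere depth map standing in for a voxel-object surface
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
depth = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
img = shade_zbuffer(depth, light=(0.5, 0.5, 1.0))
print(img.shape, float(img.max()))
```

Changing the `light` tuple re-shades the same Z-buffer from another direction, which is the single-movable-light-source behaviour the abstract describes.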
Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems
NASA Astrophysics Data System (ADS)
van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.
2008-12-01
In the late 1990s most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency-dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions (EGFs) may produce (unknown) bias in the tomographic models. The degrading effect of the directionality of the noise distribution on EGFs poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can (probably) not be prevented, but it may be possible to reduce the problem and - at least - understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model.
We will present examples of Rayleigh wave inversions and discuss strategies to mitigate effects of incomplete Green's function reconstruction on tomographic images.
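The core interferometric identity, that cross-correlating noise recorded at two stations yields a peak at the inter-station travel time (an estimate of the Green's function), can be sketched with an idealized toy. A single circularly shifted noise trace stands in for an evenly illuminated diffuse field; the 25-sample delay is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
noise = rng.standard_normal(n)   # diffuse ambient-noise wavefield
delay = 25                       # inter-station travel time, in samples

# two stations record the same wavefield, one delayed by the travel time
rec_a = noise
rec_b = np.roll(noise, delay)    # circular shift keeps the toy exact

# circular cross-correlation of B against A via FFT: the peak lag
# recovers the travel time encoded in the empirical Green's function
xcorr = np.fft.ifft(np.fft.fft(rec_b) * np.conj(np.fft.fft(rec_a))).real
lag = int(np.argmax(xcorr))
print(lag)
```

In this idealized case the noise is perfectly shared, so the peak lands exactly at the true delay; the directional, time-dependent noise fields discussed in the abstract are precisely what blurs and biases this peak in practice.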
Resch, K J; Walther, P; Zeilinger, A
2005-02-25
We have performed the first experimental tomographic reconstruction of a three-photon polarization state. Quantum state tomography is a powerful tool for fully describing the density matrix of a quantum system. We measured 64 three-photon polarization correlations and used a "maximum-likelihood" reconstruction method to reconstruct the Greenberger-Horne-Zeilinger state. The entanglement class has been characterized using an entanglement witness operator and the maximum predicted values for the Mermin inequality were extracted.
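The tomographic principle behind the three-photon measurement, reconstructing a density matrix from a complete set of polarization correlations, can be illustrated in its simplest instance: a single qubit reconstructed by linear inversion from Pauli expectation values. The paper itself uses a maximum-likelihood fit over 64 three-photon correlations; the |+> test state here is an assumption:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def linear_inversion(ex, ey, ez):
    """Single-qubit state tomography by linear inversion:
    rho = (I + <sx> sx + <sy> sy + <sz> sz) / 2."""
    return (I2 + ex * sx + ey * sy + ez * sz) / 2

# "measured" expectation values for the |+> state: <sx>=1, <sy>=<sz>=0
rho = linear_inversion(1.0, 0.0, 0.0)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
fidelity = np.real(plus.conj() @ rho @ plus)
print(fidelity)
```

With noisy counts, plain inversion can yield a non-physical (negative-eigenvalue) matrix, which is why the experiment uses a maximum-likelihood reconstruction instead.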
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio
2011-04-01
To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a tenfold increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements, and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduce the effective spatial resolution. A thinner volume is shown to provide higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.
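The MART reconstruction named above can be illustrated on a toy "two-particle" grid. This is a generic multiplicative-ART sketch with line-sum projections, not the authors' Tomo-PIV implementation; the grid size and particle positions are assumptions:

```python
import numpy as np

def line_sums(n):
    """Binary system matrix whose rows are row, column, diagonal and
    anti-diagonal sums of an n x n pixel grid."""
    idx = np.arange(n * n).reshape(n, n)
    lines = (*idx, *idx.T,
             *(np.diag(idx, k) for k in range(-n + 1, n)),
             *(np.diag(np.fliplr(idx), k) for k in range(-n + 1, n)))
    A = np.zeros((len(lines), n * n))
    for i, line in enumerate(lines):
        A[i, np.ravel(line)] = 1.0
    return A

def mart(A, p, n_iter=100, relax=1.0):
    """Multiplicative ART: each ray update rescales the voxels it crosses
    by the ratio of measured to projected intensity, keeping the field
    non-negative (unlike additive ART updates)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for a, pi in zip(A, p):
            if pi == 0.0:
                x[a > 0] = 0.0          # a ray that saw nothing zeroes its voxels
                continue
            x *= (pi / (a @ x)) ** (relax * a)
    return x

n = 3
field = np.zeros((n, n)); field[0, 0] = 1.0; field[2, 1] = 1.0  # two "particles"
A = line_sums(n)
p = A @ field.ravel()                                            # line-of-sight sums
recon = mart(A, p).reshape(n, n)
print(np.round(recon, 3))
```

With only two particles the line sums pin the field down exactly; at higher seeding densities the same system of ray equations becomes ambiguous, which is how the ghost particles discussed in the abstract arise.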
High-Latitude Ionospheric Imaging using Canadian High Arctic Ionospheric Network (CHAIN)
NASA Astrophysics Data System (ADS)
Meziane, K.; Jayachandran, P. T.; Hamza, A. M.; MacDougall, J. W.
2013-12-01
Understanding polar cap dynamics is a fundamental problem in solar-terrestrial physics; any breakthrough must take into account the interactions at the interfaces between the solar wind and the magnetosphere and between the magnetosphere and the ionosphere. Over the past decade a significant number of ground-based GPS receivers and digital ionosondes have been deployed in the polar cap and auroral regions. This deployment has allowed the harvest of much-needed data, otherwise unavailable, which in turn helps in understanding the dynamics of the polar ionospheric regions. A technique used consistently by researchers in the field consists of inverting the Total Electron Content (TEC) along ray paths obtained from a system of GPS receivers. In the present study, a combination of tomography and ionosonde data from the CHAIN network is used to examine the dynamics of polar cap patches. First, the TEC derived from GPS receivers through tomographic reconstruction is directly compared with ionosonde data; the comparison includes periods of quiet and disturbed geomagnetic activity. We then use the vertical density profiles derived from the CHAIN ionosondes as initial seeds for the reconstruction of tomographic images of the polar cap regions. Electron density peaks obtained through the tomographic reconstruction fall within a range consistent with direct CHAIN measurements when certain conditions are met. An assessment of the performance of the resulting combination of GPS and ionosonde data is performed, and conclusions are presented.
NASA Astrophysics Data System (ADS)
Sasaki, Yoshiaki; Emori, Ryota; Inage, Hiroki; Goto, Masaki; Takahashi, Ryo; Yuasa, Tetsuya; Taniguchi, Hiroshi; Devaraj, Balasigamani; Akatsuka, Takao
2004-05-01
The heterodyne detection technique, on which the coherent detection imaging (CDI) method is founded, can discriminate and select the very weak, highly directional, forward-scattered, coherence-retaining photons that emerge from scattering media despite their complex and highly scattering nature. That property enables us to reconstruct tomographic images using the same reconstruction technique as X-ray CT, i.e., the filtered backprojection method. Our group has previously developed a transillumination laser CT imaging method based on the CDI method in the visible and near-infrared regions and on reconstruction from projections, and has reported a variety of tomographic images of biological objects, both in vitro and in vivo, to demonstrate its effectiveness for biomedical use. Since the previous system was not optimized, it took several hours to obtain a single image. For practical use, we developed a prototype CDI-based imaging system using a parallel fiber array and optical switches to reduce the measurement time significantly. Here, we describe a prototype transillumination laser CT imaging system based on fiber optics and optical heterodyne detection for early diagnosis of rheumatoid arthritis (RA), demonstrating the tomographic imaging of an acrylic phantom as well as the fundamental imaging properties. We expect that further refinements of the fiber-optic-based laser CT imaging system could lead to a novel and practical diagnostic tool for rheumatoid arthritis and other joint- and bone-related diseases in the human finger.
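The filtered backprojection method named above can be sketched generically: ramp-filter each parallel-beam projection in the Fourier domain, then smear it back across the image grid. The nearest-neighbour sampling, disk phantom and 90 viewing angles are illustrative assumptions, not the authors' CDI system code:

```python
import numpy as np

def radon(img, thetas):
    """Parallel-beam projections: sample the image along rotated
    coordinates (nearest neighbour) and sum along each ray."""
    n = img.shape[0]
    c = (n - 1) / 2
    uu, ss = np.mgrid[0:n, 0:n] - c          # u: along-ray, s: detector axis
    sino = np.zeros((len(thetas), n))
    for k, t in enumerate(thetas):
        x = np.cos(t) * ss - np.sin(t) * uu
        y = np.sin(t) * ss + np.cos(t) * uu
        xi = np.clip(np.rint(x + c).astype(int), 0, n - 1)
        yi = np.clip(np.rint(y + c).astype(int), 0, n - 1)
        sino[k] = img[yi, xi].sum(axis=0)
    return sino

def fbp(sino, thetas):
    """Filtered backprojection: ramp-filter each projection in the
    Fourier domain, then backproject over all view angles."""
    n = sino.shape[1]
    c = (n - 1) / 2
    ramp = np.abs(np.fft.fftfreq(n))
    ys, xs = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for t, proj in zip(thetas, sino):
        q = np.fft.ifft(np.fft.fft(proj) * ramp).real   # ramp-filtered projection
        s = np.cos(t) * xs + np.sin(t) * ys             # detector coordinate per pixel
        si = np.clip(np.rint(s + c).astype(int), 0, n - 1)
        recon += q[si]
    return recon * np.pi / len(thetas)

n = 64
yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2
phantom = ((xx - 8) ** 2 + (yy + 4) ** 2 < 100).astype(float)   # off-centre disk
thetas = np.linspace(0, np.pi, 90, endpoint=False)
recon = fbp(radon(phantom, thetas), thetas)
print(float(np.corrcoef(recon.ravel(), phantom.ravel())[0, 1]))
```

The ramp filter is what distinguishes FBP from plain backprojection: without it the reconstruction is blurred by the 1/r point response of the backprojection sum.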
NASA Astrophysics Data System (ADS)
Yatsishina, E. B.; Kovalchuk, M. V.; Loshak, M. D.; Vasilyev, S. V.; Vasilieva, O. A.; Dyuzheva, O. P.; Pojidaev, V. M.; Ushakov, V. L.
2018-05-01
Nine ancient Egyptian mummies (dated preliminarily to the period from the 1st millennium BCE to the first centuries CE) from the collection of the State Pushkin Museum of Fine Arts have been studied at the National Research Centre "Kurchatov Institute" (NRC KI) using its complex of NBICS technologies. Tomographic scanning was performed using a 3 T magnetic resonance tomograph and a hybrid positron emission tomography/computed tomography (PET/CT) scanner. Three-dimensional reconstructions of the mummies and their anthropological measurements were carried out, and some medical conclusions are drawn based on the tomographic data. In addition, the embalming composition and tissue of one of the mummies were preliminarily analyzed.
Lumbar artery perforators: an anatomical study based on computed tomographic angiography imaging.
Sommeling, Casper Emile; Colebunders, Britt; Pardon, Heleen E; Stillaert, Filip B; Blondeel, Phillip N; van Landuyt, Koenraad
2017-08-01
The free lumbar artery perforator flap has recently been introduced as a potentially valuable option for autologous breast reconstruction in a subset of patients. To date, few anatomical studies exploring the lumbar region as a donor site for perforator-based flaps have been conducted. An anatomical study of the position of the dominant lumbar artery perforator was performed using the preoperative computed tomographic angiography images of 24 autologous breast reconstruction patients. In total, 61 dominant perforators were identified, 28 on the left and 33 on the right side. A radiologist defined the position of each perforator as coordinates in an xy-grid. Dominant perforators were shown to originate from the lumbar arteries at the level of lumbar vertebrae three or four. Remarkably, approximately 85% of these lumbar artery perforators enter the skin 7-10 cm lateral to the midline (mean 8.6 cm on the left, 8.2 cm on the right). This study demonstrates a rather constant position of the dominant perforator. Therefore, preoperative computed tomographic angiography is not always essential to find this perforator, and Doppler ultrasound could be considered as an alternative, carefully weighing the advantages and disadvantages inherent to either imaging method.
3D homogeneity study in PMMA layers using a Fourier domain OCT system
NASA Astrophysics Data System (ADS)
Briones-R., Manuel de J.; Torre-Ibarra, Manuel H. De La; Tavera, Cesar G.; Luna H., Juan M.; Mendoza-Santoyo, Fernando
2016-11-01
Micro-metallic particles embedded in polymers are now widely used in several industrial applications in order to modify the mechanical properties of the bulk. A uniform distribution of these particles inside the polymer is highly desired, for instance when biological backscattering is simulated or a bio-framework is designed. A 3D Fourier domain optical coherence tomography system to detect the polymer's internal homogeneity is proposed. This optical system has a 2D camera sensor array that records a fringe pattern used to reconstruct the tomographic image of the sample with a single shot. The system gathers the full 3D tomographic and optical phase information during a controlled deformation applied by means of a linear motion stage. This stage avoids the use of expensive tilting stages, which in addition are commonly controlled by piezo drivers. As proof of principle, a series of different deformations was applied to detect uniform or non-uniform internal deposition of copper micro-particles. The results are presented as images from the 3D tomographic micro-reconstruction of the samples, together with the 3D optical phase information that identifies the inhomogeneity regions within the poly(methyl methacrylate) (PMMA) volume.
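The single-shot Fourier-domain reconstruction mentioned above rests on a standard relation: a reflector at a given depth imprints a cosine fringe on the recorded spectrum, and an inverse Fourier transform localizes it in depth. A minimal sketch, where the sample count and reflector position are illustrative assumptions rather than values from the paper:

```python
import numpy as np

# Fourier-domain OCT toy model: a single reflector at depth z0 (in FFT
# bin units) produces a cosine fringe over wavenumber; an inverse FFT
# of the fringe recovers the depth profile (A-scan).
N = 1024                                        # spectral samples
n = np.arange(N)
z0 = 100                                        # reflector position (bins)
fringe = 1.0 + np.cos(2 * np.pi * z0 * n / N)   # spectral interferogram
ascan = np.abs(np.fft.ifft(fringe))             # depth profile
ascan[0] = 0.0                                  # suppress the DC term
peak = int(np.argmax(ascan[:N // 2]))           # positive-depth half only
print(peak)                                     # reflector recovered at bin 100
```

In a full system each camera row yields one such A-scan, and stacking them over the transverse scan gives the 3D tomogram.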
Hydrodynamic Simulations and Tomographic Reconstructions of the Intergalactic Medium
NASA Astrophysics Data System (ADS)
Stark, Casey William
The Intergalactic Medium (IGM) is the dominant reservoir of matter in the Universe from which the cosmic web and galaxies form. The structure and physical state of the IGM provides insight into the cosmological model of the Universe, the origin and timeline of the reionization of the Universe, as well as being an essential ingredient in our understanding of galaxy formation and evolution. Our primary handle on this information is a signal known as the Lyman-alpha forest (or Ly-alpha forest) -- the collection of absorption features in high-redshift sources due to intervening neutral hydrogen, which scatters HI Ly-alpha photons out of the line of sight. The Ly-alpha forest flux traces density fluctuations at high redshift and at moderate overdensities, making it an excellent tool for mapping large-scale structure and constraining cosmological parameters. Although the computational methodology for simulating the Ly-alpha forest has existed for over a decade, we are just now approaching the scale of computing power required to simultaneously capture large cosmological scales and the scales of the smallest absorption systems. My thesis focuses on using simulations at the edge of modern computing to produce precise predictions of the statistics of the Ly-alpha forest and to better understand the structure of the IGM. In the first part of my thesis, I review the state of hydrodynamic simulations of the IGM, including pitfalls of the existing under-resolved simulations. Our group developed a new cosmological hydrodynamics code to tackle the computational challenge, and I developed a distributed analysis framework to compute flux statistics from our simulations. I present flux statistics derived from a suite of our large hydrodynamic simulations and demonstrate convergence to the per cent level. I also compare flux statistics derived from simulations using different discretizations and hydrodynamic schemes (Eulerian finite volume vs. 
smoothed particle hydrodynamics) and discuss differences in their convergence behavior, their overall agreement, and the implications for cosmological constraints. In the second part of my thesis, I present a tomographic reconstruction method that allows us to make 3D maps of the IGM with Mpc resolution. In order to make reconstructions of large surveys computationally feasible, I developed a new Wiener Filter application with an algorithm specialized to our problem, which significantly reduces the space and time complexity compared to previous implementations. I explore two scientific applications of the maps: finding protoclusters by searching the maps for large, contiguous regions of low flux and finding cosmic voids by searching the maps for regions of high flux. Using a large N-body simulation, I identify and characterize both protoclusters and voids at z = 2.5, in the middle of the redshift range being mapped by ongoing surveys. I provide simple methods for identifying protocluster and void candidates in the tomographic flux maps, and then test them on mock surveys and reconstructions. I present forecasts for sample purity and completeness and other scientific applications of these large, high-redshift objects.
A concept for non-invasive temperature measurement during injection moulding processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Christian; Spekowius, Marcel, E-mail: spekowius@ikv.rwth-aachen.de; Wipperfürth, Jens
2016-03-09
Current models of the injection moulding process insufficiently consider the thermal interactions between melt, solidified material and the mould. A detailed description requires a deep understanding of the underlying processes and a precise observation of the temperature. Because today's measurement concepts do not allow a non-invasive analysis, it is necessary to find new measurement techniques for temperature measurements during the manufacturing process. In this work we present the idea of a setup for a tomographic ultrasound measurement of the temperature field inside a plastics melt. The goal is to identify a concept that can be installed on a specialized mould for the injection moulding process. The challenges are discussed and the design of a prototype is shown. Special attention is given to the spatial arrangement of the sensors. Besides the design of a measurement setup, a reconstruction strategy for the ultrasound signals is required. We present an approach in which an image processing algorithm can be used to calculate a temperature distribution from the ultrasound scans. We discuss a reconstruction strategy in which the ultrasound signals are converted into a spatial temperature distribution by using pvT curves obtained by dilatometer measurements.
A Hybrid Approach to Data Assimilation for Reconstructing the Evolution of Mantle Dynamics
NASA Astrophysics Data System (ADS)
Zhou, Quan; Liu, Lijun
2017-11-01
Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to incorrect initial conditions and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation approach that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.
Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.
2016-04-01
We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
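The Bayesian inversion at the core of such a method has a closed-form posterior mean once the prior mean, prior covariance, and noise covariance are fixed. A minimal sketch under toy assumptions; the 5-pixel model, ray geometry, and all numeric values are illustrative, not from the paper:

```python
import numpy as np

# Bayesian linear inversion toy model: noisy line integrals y = A x + e
# of an unknown electron-density field x, combined with a Gaussian prior
# N(m, C).  The posterior mean is m + C A^T (A C A^T + R)^{-1} (y - A m).
rng = np.random.default_rng(0)
n = 5                                    # unknown density pixels
A = rng.uniform(0, 1, size=(3, n))       # 3 toy ray-geometry rows
x_true = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
y = A @ x_true                           # noiseless measurements for clarity
m = np.full(n, 1.8)                      # prior mean (e.g. ionosonde-informed)
C = np.eye(n)                            # prior covariance
R = 1e-6 * np.eye(3)                     # measurement-noise covariance
gain = C @ A.T @ np.linalg.inv(A @ C @ A.T + R)
x_post = m + gain @ (y - A @ m)          # posterior mean estimate
print(np.round(A @ x_post, 3))           # posterior reproduces the data
```

The prior mean and covariance play exactly the role the abstract describes: with too few rays to determine all pixels, the reconstruction falls back on the prior wherever the data do not constrain it.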
On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro
2013-07-01
This work deals with the critical aspects related to cost reduction of a Tomo-PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems, each composed of three (or more) low-speed single-frame cameras, which can be up to ten times cheaper than double-shutter cameras of the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from standard tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution depends on camera orientation. For this reason, the proposed solution promises more accurate results, without the bias effect of coherent ghost-particle motion. Guidelines for the implementation and application of the method are proposed, and its performance is assessed with a parametric study on synthetic experiments. The proposed low-cost system produces much lower modulation than an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the standard implementation of tomographic PIV.
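The interrogation step named above, cross-correlating the two reconstructed volumes to extract the displacement, can be sketched as follows; the volume size and imposed shift are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Tomo-PIV interrogation toy model: the second-exposure volume is a
# shifted copy of the first; FFT-based circular cross-correlation finds
# the displacement as the location of the correlation peak.
rng = np.random.default_rng(1)
vol1 = rng.random((32, 32, 32))              # first-exposure intensity volume
shift = (2, -3, 1)                           # true displacement (voxels)
vol2 = np.roll(vol1, shift, axis=(0, 1, 2))  # second exposure
corr = np.fft.ifftn(np.fft.fftn(vol1).conj() * np.fft.fftn(vol2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Map circular peak indices back to signed displacements
disp = tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
print(disp)                                  # recovers (2, -3, 1)
```

In practice the correlation is done per interrogation window with sub-voxel peak fitting; this sketch only shows the integer-displacement core of the operation.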
NASA Astrophysics Data System (ADS)
Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.
2016-03-01
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically-based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.
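Statistically-based maximum-likelihood reconstruction of the kind referred to above is commonly realized with the multiplicative MLEM update, in which the estimate is repeatedly rescaled by backprojected data-to-model ratios. A minimal sketch under toy assumptions; the 4-pixel object and 4-ray system matrix are illustrative, not the BCT geometry or the paper's regularized method:

```python
import numpy as np

# MLEM toy model: y = A x with Poisson statistics.  Each iteration
# multiplies the current estimate by the backprojection of the ratio
# between measured and forward-projected data, preserving positivity.
A = np.array([[1.0, 1.0, 0.0, 0.0],      # each row: one measured ray
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
x_true = np.array([4.0, 1.0, 2.0, 3.0])
y = A @ x_true                           # noiseless projection data
x = np.ones(4)                           # positive initial estimate
sens = A.T @ np.ones(4)                  # sensitivity image (column sums)
for _ in range(2000):                    # multiplicative MLEM updates
    x *= (A.T @ (y / (A @ x))) / sens
print(np.round(A @ x, 3))                # forward projections match the data
```

Regularized variants (as investigated in the paper) add a penalty term to this likelihood objective, with a free parameter controlling the noise-resolution trade-off.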
A Web simulation of medical image reconstruction and processing as an educational tool.
Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos
2015-02-01
Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee in the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered as effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2015-10-01
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and too nonuniformly distributed to yield reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially during geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height, obtained by comparing International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model-generated STEC with GPS-STEC. The suggested tomography algorithm is applied successfully to the reconstruction of electron density profiles over Turkey, during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.
Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo
2016-01-01
To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader, and inter-reconstruction-algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction-algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p<0.05) among reconstruction algorithms. As for variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV≤5%). Inter-reader variability was larger than intra-reader or inter-reconstruction-algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction-algorithm variability was significantly greater than inter-reader variability (p<0.013). Most of the radiomic features were significantly affected by the reconstruction algorithms. Inter-reconstruction-algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
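The variability metric used above is the coefficient of variation, the standard deviation of repeated measurements of one feature divided by their mean. A minimal sketch; the feature values are made up for illustration, not patient data:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation of repeated feature measurements, in per cent."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical entropy values of one tumor under FBP and iterative
# reconstruction levels 3 and 5 (illustrative numbers only).
entropy = [4.10, 4.12, 4.08]
robust = cv_percent(entropy) <= 5.0      # the paper's robustness criterion
print(round(cv_percent(entropy), 3), robust)   # prints: 0.488 True
```

Comparing such CVs computed across readers versus across reconstruction algorithms is what lets the study rank the two sources of variability feature by feature.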
Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Furno, I.; Fasoli, A.
In TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], a simple magnetized plasma device, low-frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and to study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast framing Photron-APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparing the statistical, spectral, and spatial properties of visible light radiation with those of the electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary as (T_e)^α (n_e)^β, with α = 0.25-0.7 and β = 0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to fast camera movies acquired at 50 kframes/s with 2 μs exposure time to obtain the temporal evolution of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure the sizes of structures associated with the interchange mode. The ω-time and two-dimensional k-space Fourier analyses of the reconstructed emissivity fluctuations show the same interchange mode that is detected in the ω and k spectra of the ion saturation current fluctuations measured by probes. Small-scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity movies with spatial resolution down to 2 cm, well beyond the spatial resolution of the probe array.
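The pixel-method inversion described above amounts to a linear system: each camera line of sight contributes one row of a geometry matrix G, and the overdetermined system G e = s (with s the line-integrated signals) is solved for the pixel emissivities e by singular value decomposition. A minimal sketch; the 6-chord, 4-pixel geometry and emissivity values are toy assumptions, not TORPEX data:

```python
import numpy as np

# Pixel-method tomography toy model: solve the overdetermined system
# G e = s for pixel emissivities e via SVD (pseudo-inverse solution).
rng = np.random.default_rng(2)
G = rng.uniform(0, 1, size=(6, 4))       # 6 chords viewing 4 pixels
e_true = np.array([0.5, 2.0, 1.5, 1.0])  # "true" emissivity per pixel
s = G @ e_true                           # line-integrated camera signals
U, w, Vt = np.linalg.svd(G, full_matrices=False)
e_rec = Vt.T @ ((U.T @ s) / w)           # pseudo-inverse (least-squares) solution
print(np.round(e_rec, 3))                # recovers the pixel emissivities
```

In a real reconstruction small singular values are typically truncated or damped to regularize against noise; here the consistent toy data need no truncation.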
In vivo fluorescence lifetime tomography of a FRET probe expressed in mouse
McGinty, James; Stuckey, Daniel W.; Soloviev, Vadim Y.; Laine, Romain; Wylezinska-Arridge, Marzena; Wells, Dominic J.; Arridge, Simon R.; French, Paul M. W.; Hajnal, Joseph V.; Sardini, Alessandro
2011-01-01
Förster resonance energy transfer (FRET) is a powerful biological tool for reading out cell signaling processes. In vivo use of FRET is challenging because of the scattering properties of bulk tissue. By combining diffuse fluorescence tomography with fluorescence lifetime imaging (FLIM), implemented using wide-field time-gated detection of fluorescence excited by ultrashort laser pulses in a tomographic imaging system and applying inverse scattering algorithms, we can reconstruct the three dimensional spatial localization of fluorescence quantum efficiency and lifetime. We demonstrate in vivo spatial mapping of FRET between genetically expressed fluorescent proteins in live mice read out using FLIM. Following transfection by electroporation, mouse hind leg muscles were imaged in vivo and the emission of free donor (eGFP) in the presence of free acceptor (mCherry) could be clearly distinguished from the fluorescence of the donor when directly linked to the acceptor in a tandem (eGFP-mCherry) FRET construct. PMID:21750768
Atomic electron tomography: 3D structures without crystals
Miao, Jianwei; Ercius, Peter; Billinge, S. J. L.
2016-09-23
Crystallography has been fundamental to the development of many fields of science over the last century. However, much of our modern science and technology relies on materials with defects and disorders, and their three-dimensional (3D) atomic structures are not accessible to crystallography. One method capable of addressing this major challenge is atomic electron tomography. By combining advanced electron microscopes and detectors with powerful data analysis and tomographic reconstruction algorithms, it is now possible to determine the 3D atomic structure of crystal defects such as grain boundaries, stacking faults, dislocations, and point defects, as well as to precisely localize the 3D coordinates of individual atoms in materials without assuming crystallinity. In this work, we review the recent advances and the interdisciplinary science enabled by this methodology. We also outline further research needed for atomic electron tomography to address long-standing unresolved problems in the physical sciences.
X-ray cargo container inspection system with few-view projection imaging
NASA Astrophysics Data System (ADS)
Duan, Xinhui; Cheng, Jianping; Zhang, Li; Xing, Yuxiang; Chen, Zhiqiang; Zhao, Ziran
2009-01-01
An X-ray cargo inspection system with few-view projection imaging has been developed for detecting contraband in air containers. This paper describes the system, including its configuration and the inspection process using three imaging modalities: digital radiography (DR), few-view imaging, and computed tomography (CT). Few-view imaging can provide 3D images with much faster scanning speed than CT and greatly helps to quickly locate suspicious cargo in a container. An algorithm to reconstruct tomographic images from the severely sparse projection data of few-view imaging is discussed. A cooperative working arrangement of the three modalities is presented to make inspection more convenient and effective. Numerous performance tests and modality comparisons were performed on our system for inspecting air containers. The results demonstrate the effectiveness of our methods and the practicality of implementing few-view imaging in inspection systems.
NASA Astrophysics Data System (ADS)
Aleksanyan, Grayr; Shcherbakov, Ivan; Kucher, Artem; Sulyz, Andrew
2018-04-01
Continuous monitoring of the patient's breathing by the method of multi-angle electrical impedance tomography yields images of conductivity change in the chest cavity over the monitoring period. Direct analysis of these images is difficult due to the large amount of information and the low resolution of the images obtained by multi-angle electrical impedance tomography. This work presents a method for obtaining a graph of the respiratory activity of the lungs from the results of continuous lung monitoring using multi-angle electrical impedance tomography. The method makes it possible to obtain graphs of the respiratory activity of the left and right lungs separately, as well as a summary graph to which methods for processing spirography results can be applied.
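The reduction from image sequence to respiratory curves can be sketched as summing each frame over a left-lung and a right-lung region of interest, then adding the two for the summary graph. The frame size, ROI masks, and synthetic breathing signal below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# EIT respiratory-curve toy model: per-frame sums over left/right lung
# ROIs of a conductivity-change image sequence give per-lung activity
# curves; their sum is the combined respiration graph.
T, H, W = 50, 16, 16
t = np.arange(T)
frames = np.zeros((T, H, W))
frames[:, :, : W // 2] = np.sin(2 * np.pi * t / 25)[:, None, None]        # left lung
frames[:, :, W // 2:] = 0.8 * np.sin(2 * np.pi * t / 25)[:, None, None]   # right lung
left_mask = np.zeros((H, W), bool)
left_mask[:, : W // 2] = True
right_mask = ~left_mask
left_curve = frames[:, left_mask].sum(axis=1)    # left-lung activity
right_curve = frames[:, right_mask].sum(axis=1)  # right-lung activity
summary = left_curve + right_curve               # combined respiration graph
print(summary.shape)                             # one sample per frame
```

Real lung masks would of course be anatomical rather than image halves; the point is only that each curve is a scalar time series suitable for spirography-style processing.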
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, whereas the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and thus achieves a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
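The measure-then-pseudo-invert scheme described above can be sketched in one dimension. The complex DFT matrix, toy object, and measurement count below are illustrative simplifications (physical cosine light fields correspond to the real part of such patterns), not the paper's setup:

```python
import numpy as np

# Ghost-imaging toy model: measure a 1-D object against rows of a DFT
# measurement matrix, then reconstruct with the pseudo-inverse.  With
# fewer measurements than pixels, the reconstruction is the projection
# of the object onto the span of the kept patterns.
N = 16
obj = np.zeros(N)
obj[4:8] = 1.0                            # 1-D "object"
F = np.fft.fft(np.eye(N))                 # DFT measurement matrix, rows = patterns
M = 10                                    # keep only the first M rows (undersampled)
A = F[:M]
y = A @ obj                               # bucket-detector measurements
rec = np.linalg.pinv(A) @ y               # pseudo-inverse reconstruction
err = np.linalg.norm(rec.real - obj) / np.linalg.norm(obj)
print(round(err, 3))                      # residual error from undersampling
```

Because the kept DFT rows are orthogonal, the pseudo-inverse fits the measurements exactly while the missing high-frequency content accounts for the residual error.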
Intravenous volume tomographic pulmonary angiography imaging
NASA Astrophysics Data System (ADS)
Ning, Ruola; Strang, John G.; Chen, Biao; Conover, David L.; Yu, Rongfeng
1999-05-01
This study presents a new intravenous (IV) tomographic angiography imaging technique, called intravenous volume tomographic digital angiography (VTDA) for cross sectional pulmonary angiography. While the advantages of IV-VTDA over spiral CT in terms of volume scanning time and resolution have been validated and reported in our previous papers for head and neck vascular imaging, the superiority of IV-VTDA over spiral CT for cross sectional pulmonary angiography has not been explored yet. The purpose of this study is to demonstrate the advantage of isotropic resolution of IV-VTDA in the x, y and z directions through phantom and animal studies, and to explore its clinical application for detecting clots in pulmonary angiography. A prototype image intensifier-based VTDA imaging system has been designed and constructed by modifying a GE 8800 CT scanner. This system was used for a series of phantom and dog studies. A pulmonary vascular phantom was designed and constructed. The phantom was scanned using the prototype VTDA system for direct 3D reconstruction. Then the same phantom was scanned using a GE CT/i spiral CT scanner using the routine pulmonary CT angiography protocols. IV contrast injection and volume scanning protocols were developed during the dog studies. Both VTDA reconstructed images and spiral CT images of the specially designed phantom were analyzed and compared. The detectability of simulated vessels and clots was assessed as the function of iodine concentration levels, oriented angles, and diameters of the vessels and clots. A set of 3D VTDA reconstruction images of dog pulmonary arteries was obtained with different IV injection rates and isotropic resolution in the x, y and z directions. The results of clot detection studies in dog pulmonary arteries have also been shown. This study presents a new tomographic IV angiography imaging technique for cross sectional pulmonary angiography. 
The results of phantom and animal studies indicate that IV-VTDA is superior to spiral CT for cross sectional pulmonary angiography.
Interior tomographic imaging for x-ray coherent scattering (Conference Presentation)
NASA Astrophysics Data System (ADS)
Pang, Sean; Zhu, Zheyuan
2017-05-01
Conventional computed tomography reconstructs attenuation-only high-dimensional images. Coherent scatter computed tomography, which reconstructs the angle-dependent scattering profiles of 3D objects, can provide molecular signatures that improve the accuracy of material identification and classification. Coherent scatter tomograms are traditionally acquired with setups similar to an x-ray powder diffraction machine: a collimated source in combination with 2D or 1D detector collimation to localize the scattering point. In addition, the coherent scatter cross section is often 3 orders of magnitude lower than the absorption cross section of the same material. Coded aperture and structured illumination approaches have been shown to greatly improve the collection efficiency. In many applications, especially in security imaging and medical diagnosis, fast and accurate identification of the material composition of a small volume within the whole object would lead to an accelerated imaging procedure and reduced radiation dose. Here, we report an imaging method to reconstruct the material coherent scatter profile within a small volume. The reconstruction along one radial direction yields a scalar coherent scattering tomographic image. Our method takes advantage of the finite support of the scattering profile in the small-angle regime. Our system uses a pencil-beam setup without any detector-side collimation. The coherent scatter profile of a 10 mm scattering sample embedded in a 30 mm diameter phantom was reconstructed. The setup has a small form factor and is suitable for various portable non-destructive detection applications.
Tomographic PIV: particles versus blobs
NASA Astrophysics Data System (ADS)
Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien
2014-08-01
We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single-voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields a novel linear forward model whose weight matrix is built from specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, and therefore favors much sparser reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images, a broad exploration of the generating conditions, and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular for seeding densities greater than 0.06 particles per pixel and PSFs with a standard deviation larger than 0.8 pixels.
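The MART inversion into which the abstract's forward model is embedded follows a standard multiplicative update. The sketch below is a generic illustration, not the authors' implementation: the weight matrix `W` (which in their model would be built from PSF samples) and pixel intensities `p` are assumed given as dense arrays.

```python
import numpy as np

def mart(W, p, n_iters=20, mu=1.0, eps=1e-12):
    """Multiplicative ART: iteratively rescale voxel intensities so that
    the modelled projections W @ x match the measured pixel intensities p."""
    n_pix, n_vox = W.shape
    x = np.ones(n_vox)                  # strictly positive initial guess
    for _ in range(n_iters):
        for i in range(n_pix):
            wi = W[i]
            proj = wi @ x
            if proj > eps:
                # voxels seen by pixel i are scaled by the measured/modelled
                # ratio, exponentiated by mu * w_ij (the classical MART step)
                x *= (p[i] / proj) ** (mu * wi)
    return x
```

Because the update is multiplicative, voxels initialized to zero stay zero and all intensities remain nonnegative, which is what makes MART-type methods attractive for sparse particle fields.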
Long-term efficacy of biomodeled polymethyl methacrylate implants for orbitofacial defects.
Groth, Michael J; Bhatnagar, Aparna; Clearihue, William J; Goldberg, Robert A; Douglas, Raymond S
2006-01-01
To report the long-term efficacy of custom polymethyl methacrylate implants using high-resolution computed tomographic modeling in the reconstruction of complex orbitofacial defects secondary to trauma. Nine patients with complex orbitofacial bone defects after trauma were evaluated for this retrospective, nonrandomized, noncomparative study. All the patients underwent reconstruction using custom, heat-cured polymethyl methacrylate implants. Patients were followed up postoperatively and evaluated for complications. Nine consecutive patients (5 men and 4 women) aged 28 to 63 years who underwent surgical reconstruction using prefabricated, heat-cured polymethyl methacrylate implants were included in the study. The interval between injury and presentation ranged from 1 month to 40 years. There were no significant complications, including infection, extrusion, or displacement of the implant. In all of the patients, wound healing was uneventful, with antibiotic drugs administered perioperatively. Mean follow-up was 4.3 years from the first visit (range, 6 months to 10 years). Computed tomographic biomodeled, prefabricated, heat-cured polymethyl methacrylate implants are well tolerated in the long term. Their advantages include customized design, long-term biocompatibility, and excellent aesthetic results.
NASA Astrophysics Data System (ADS)
Thampi, S. V.; Devasia, C. V.; Ravindran, S.; Pant, T. K.; Sridharan, R.
To investigate equatorial ionospheric processes like the Equatorial Ionization Anomaly (EIA) and Equatorial Spread F and their interrelationships, a network of five stations receiving the 150 and 400 MHz transmissions from Low Earth Orbiting Satellites (LEOs), covering the region from Trivandrum (8.5°N, Dip ∼0.3°N) to New Delhi (28°N, Dip ∼20°N), is set up along the 77-78°E longitude. The receivers measure the relative phase of 150 MHz with respect to 400 MHz, which is proportional to the slant relative Total Electron Content (TEC) along the line of sight. These simultaneous TEC measurements are inverted to obtain the tomographic image of the latitudinal distribution of electron densities in the meridional plane. The inversion is done using the Algebraic Reconstruction Technique (ART). In this paper, tomographic images of the equatorial ionosphere along the 77-78°E meridian are presented. The images indicate the movement of the anomaly crest, as well as the strength of the EIA at various local times, which in turn controls the overall electrodynamics of the evening-time ionosphere, favoring the occurrence of Equatorial Spread F (ESF) irregularities. These features are discussed in detail under varying geophysical conditions. The results of a sensitivity analysis of the inversion algorithm using model ionospheres are also presented.
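The ART inversion used here is, at its core, the Kaczmarz row-action method: the current density estimate is projected onto the hyperplane defined by each ray equation in turn. A minimal sketch, assuming a precomputed geometry matrix `A` (ray path lengths through latitude-altitude pixels) and a slant TEC vector `y` (names illustrative):

```python
import numpy as np

def art(A, y, n_sweeps=50, relax=0.5):
    """Kaczmarz-style ART: for each ray i, project the current estimate x
    onto the hyperplane A[i] @ x = y[i], with under-relaxation."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            norm2 = ai @ ai
            if norm2 > 0.0:
                x += relax * (y[i] - ai @ x) / norm2 * ai
    return np.clip(x, 0.0, None)   # electron densities are nonnegative
```

The relaxation parameter trades convergence speed against noise amplification, which matters for the limited-angle geometry of ground-based receiver chains.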
Planning surgical reconstruction in Treacher-Collins syndrome using virtual simulation.
Nikkhah, Dariush; Ponniah, Allan; Ruff, Cliff; Dunaway, David
2013-11-01
Treacher-Collins syndrome is a rare autosomal dominant condition of varying phenotypic expression. The surgical correction in this syndrome is difficult, and the approach varies between craniofacial departments worldwide. The authors aimed to design standardized tools for planning orbitozygomatic and mandibular reconstruction in Treacher-Collins syndrome using geometric morphometrics. The Great Ormond Street Hospital database was retrospectively identified for patients with Treacher-Collins syndrome. Thirteen children (aged 2 to 15 years) who had suitable preoperative three-dimensional computed tomographic head scans were included. Six Treacher-Collins syndrome three-dimensional computed tomographic head scans were quantitatively compared using a template of 96 anatomically defined landmarks to 26 age-matched normal dry skulls. Thin-plate spline videos illustrated the characteristic deformities of retromicrognathia and maxillary and orbitozygomatic hypoplasia in the Treacher-Collins syndrome population. Geometric morphometrics was used in the virtual reconstruction of the orbitozygomatic and mandibular region in Treacher-Collins syndrome patients. Intrarater and interrater reliability of the landmarks was acceptable and within a standard deviation of less than 1 mm on 97 percent and 100 percent of 10 repeated scans, respectively. Virtual normalization of the Treacher-Collins syndrome skull effectively describes characteristic skeletal deformities and provides a useful guide to surgical reconstruction. Size-matched stereolithographic templates derived from thin-plate spline warps can provide effective intraoperative templates for zygomatic and mandibular reconstruction in the Treacher-Collins syndrome patient. Diagnostic, V.
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. A tetrahedral mesh built on a point cloud is a convenient image representation: it is intrinsically three-dimensional and has a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two-stage process. First, a noisy image is reconstructed on a finely spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study.
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
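The ML-EM reconstruction of node intensities mentioned above uses the standard multiplicative expectation-maximization update. A minimal dense-matrix sketch (assuming the system matrix `A`, mapping node intensities to projection bins, has already been generated for the optimized mesh; not the paper's actual code):

```python
import numpy as np

def mlem(A, counts, n_iters=100):
    """ML-EM for emission tomography: a multiplicative update that matches
    modelled projections A @ x to measured counts while keeping x >= 0."""
    sens = A.sum(axis=0)                     # node sensitivity: back-projection of ones
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        proj = A @ x
        ratio = np.where(proj > 0, counts / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The same update applies whether the unknowns live on a voxel grid or on mesh nodes; only the construction of `A` changes, which is precisely where the tetrahedral representation pays off in reduced problem size.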
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, P., E-mail: peter.andersson@physics.uu.se; Andersson-Sunden, E.; Sjöstrand, H.
2014-08-01
In nuclear boiling water reactor cores, the distribution of water and steam (void) is essential for both safety and efficiency reasons. In order to enhance predictive capabilities, void distribution assessment is performed in two-phase test-loops under reactor-relevant conditions. This article proposes the novel technique of fast-neutron tomography using a portable deuterium-tritium neutron generator to determine the time-averaged void distribution in these loops. Fast neutrons have the advantage of high transmission through the metallic structures and pipes typically concealing a thermal-hydraulic test loop, while still being fairly sensitive to the water/void content. However, commercially available fast-neutron generators also have the disadvantage of a relatively low yield, and fast-neutron detection also suffers from relatively low detection efficiency. Fortunately, some loops are axially symmetric, a property which can be exploited to reduce the amount of data needed for tomographic measurement, thus limiting the interrogation time needed. In this article, three axially symmetric test objects depicting a thermal-hydraulic test loop have been examined: steel pipes with outer diameter 24 mm, thickness 1.5 mm, and with three different distributions of the plastic material POM inside the pipes. Data recorded with the FANTOM fast-neutron tomography instrument have been used to perform tomographic reconstructions to assess their radial material distribution. Here, a dedicated tomographic algorithm that exploits the symmetry of these objects has been applied, which is described in the paper. Results are demonstrated in 20-rixel (radial pixel) reconstructions of the interior constitution, and 2D visualization of the pipe interior is demonstrated. The local POM attenuation coefficients in the rixels were measured with errors (RMS) of 0.025, 0.020, and 0.022 cm⁻¹, to be compared with the attenuation coefficient of solid POM.
The accuracy and precision are high enough to provide a useful indication of the flow mode, and a visualization of the radial material distribution can be obtained. A benefit of this system is its potential to be mounted at any axial height of a two-phase test section without requirements for pre-fabricated entrances or windows. This could mean a significant increase in the flexibility of the void distribution assessment capability at many existing two-phase test loops.
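The paper's dedicated symmetry-exploiting algorithm is not reproduced here, but a classical way to recover radial "rixel" values from parallel transmission data through an axially symmetric object is onion peeling: the chord lengths of laterally offset rays through concentric rings form a triangular system that can be solved from the outermost ring inward. A sketch under that assumption (ring radii and function names are illustrative):

```python
import numpy as np

def chord_lengths(radii):
    """Path length of each offset ray through each concentric ring (rixel).
    Ray k passes at lateral offset radii[k]; ring j spans radii[j]..radii[j+1]."""
    n = len(radii) - 1
    L = np.zeros((n, n))
    for k in range(n):          # ray at offset radii[k]
        for j in range(k, n):   # only rings at or outside the offset are crossed
            outer = np.sqrt(radii[j + 1] ** 2 - radii[k] ** 2)
            inner = np.sqrt(max(radii[j] ** 2 - radii[k] ** 2, 0.0))
            L[k, j] = 2.0 * (outer - inner)
    return L

def onion_peel(radii, line_integrals):
    """Solve the triangular system L @ mu = g for ring attenuation values."""
    return np.linalg.solve(chord_lengths(radii), line_integrals)
```

Because the system is triangular, each ring's attenuation is determined once all rings outside it are known, which is why so few projections suffice for axially symmetric objects.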
NASA Astrophysics Data System (ADS)
Huh, C.; Bolch, W. E.
2003-10-01
Two classes of anatomic models currently exist for use in both radiation protection and radiation dose reconstruction: stylized mathematical models and tomographic voxel models. The former utilize 3D surface equations to represent internal organ structure and external body shape, while the latter are based on segmented CT or MR images of a single individual. While tomographic models are clearly more anthropomorphic than stylized models, a given model's characterization as being anthropometric is dependent upon the reference human to which the model is compared. In the present study, data on total body mass, standing/sitting heights and body mass index are collected and reviewed for the US population covering the time interval from 1971 to 2000. These same anthropometric parameters are then assembled for the ORNL series of stylized models, the GSF series of tomographic models (Golem, Helga, Donna, etc), the adult male Zubal tomographic model and the UF newborn tomographic model. The stylized ORNL models of the adult male and female are found to be fairly representative of present-day average US males and females, respectively, in terms of both standing and sitting heights for ages between 20 and 60-80 years. While the ORNL adult male model provides a reasonably close match to the total body mass of the average US 21-year-old male (within ~5%), present-day 40-year-old males have an average total body mass that is ~16% higher. For radiation protection purposes, the use of the larger 73.7 kg adult ORNL stylized hermaphrodite model provides a much closer representation of average present-day US females at ages ranging from 20 to 70 years. In terms of the adult tomographic models from the GSF series, only Donna (40-year-old F) closely matches her age-matched US counterpart in terms of average body mass. Regarding standing heights, the better matches to US age-correlated averages belong to Irene (32-year-old F) for the females and Golem (38-year-old M) for the males. 
Both Helga (27-year-old F) and Donna, however, provide good matches to average US sitting heights for adult females, while Golem and Otoko (male of unknown age) yield sitting heights that are slightly below US adult male averages. Finally, Helga is seen as the only GSF tomographic female model that yields a body mass index in line with her average US female counterpart at age 26. In terms of dose reconstruction activities, however, all current tomographic voxel models are valuable assets in attempting to cover the broad distribution of individual anthropometric parameters representative of the current US population. It is highly recommended that similar attempts to create a broad library of tomographic models be initiated in the United States and elsewhere to complement and extend the limited number of tomographic models presently available for these efforts.
Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.
Frey, Othmar; Morsdorf, Felix; Meier, Erich
2008-09-24
In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are presented. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
Analysis of computed tomography images in the presence of metals
NASA Astrophysics Data System (ADS)
Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor
2018-04-01
Artifacts caused by intensely absorbing inclusions are encountered in computed tomography with polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction in the presence of high-Z inclusions, we previously proposed, and tested with synthetic data, an iterative technique with a soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up of the Institute of Crystallography FSRC "Crystallography and Photonics" RAS, in which tomographic scans were successfully made of a temporary tooth without an inclusion and with a Pb inclusion.
Coe, Ryan L; Seibel, Eric J
2013-09-01
We present theoretical and experimental results of axial displacement of objects relative to a fixed condenser focal plane (FP) in optical projection tomographic microscopy (OPTM). OPTM produces three-dimensional, reconstructed images of single cells from two-dimensional projections. The cell rotates in a microcapillary to acquire projections from different perspectives where the objective FP is scanned through the cell while the condenser FP remains fixed at the center of the microcapillary. This work uses a combination of experimental and theoretical methods to improve the OPTM instrument design.
Nose and Nasal Planum Neoplasia, Reconstruction.
Worley, Deanna R
2016-07-01
Most intranasal lesions are best treated with radiation therapy. Computed tomographic imaging with intravenous contrast is critical for treatment planning. Computed tomographic images of the nose will best assess the integrity of the cribriform plate for central nervous system invasion by a nasal tumor. Because of an owner's emotional response to an altered appearance of their dog's face, discussions need to include the entire family before proceeding with nasal planectomy or radical planectomy. With careful case selection, nasal planectomy and radical planectomy surgeries can be locally curative.
Feasibility of hydrogen density estimation from tomographic sensing of Lyman alpha emission
NASA Astrophysics Data System (ADS)
Waldrop, L.; Kamalabadi, F.; Ren, D.
2015-12-01
In this work, we describe the scientific motivation, basic principles, and feasibility of a new approach to the estimation of neutral hydrogen (H) density in the terrestrial exosphere based on the 3-D tomographic sensing of optically thin H emission at 121.6 nm (Lyman alpha). In contrast to existing techniques, Lyman alpha tomography allows for model-independent reconstruction of the underlying H distribution in support of investigations regarding the origin and time-dependent evolution of exospheric structure. We quantitatively describe the trade-off space between the measurement sampling rate, viewing geometry, and the spatial and temporal resolution of the reconstruction that is supported by the data. We demonstrate that this approach is feasible from either earth-orbiting satellites such as the stereoscopic NASA TWINS mission or from a CubeSat platform along a trans-exosphere trajectory such as that enabled by the upcoming Exploration Mission 1 launch.
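Optically thin emission such as Lyman alpha makes the tomographic forward model linear: each measurement is a path integral of the volume emission rate. A toy discretization of such line-of-sight integrals on a 2-D unit grid (illustrative only; the actual mission geometry is 3-D, and the sampling-based ray tracing here is an assumption, not the authors' method):

```python
import numpy as np

def emission_matrix(nx, ny, rays, n_samples=200):
    """Discretize optically thin line-of-sight integrals: entry (i, j) is the
    path length of ray i inside cell j of an nx-by-ny unit grid, approximated
    by uniform sampling along each ray segment."""
    A = np.zeros((len(rays), nx * ny))
    for i, (p0, p1) in enumerate(rays):
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        seg = p1 - p0
        length = np.hypot(*seg)
        ts = (np.arange(n_samples) + 0.5) / n_samples
        for t in ts:
            x, y = p0 + t * seg
            ix, iy = int(x * nx), int(y * ny)
            if 0 <= ix < nx and 0 <= iy < ny:
                A[i, ix * ny + iy] += length / n_samples
    return A
```

With such a matrix in hand, the reconstruction problem reduces to inverting `A @ x = measurements`, which is where the trade-offs between sampling rate, viewing geometry, and resolution discussed above enter.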
Using artificial neural networks (ANN) for open-loop tomography
NASA Astrophysics Data System (ADS)
Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus
2011-09-01
The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares-type matrix multiplication method (MVM) in simulation and find that its tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
Linear Optimization and Image Reconstruction
1994-06-01
final example is again a novel one. We formulate the problem of computer-assisted tomographic (CAT) image reconstruction as a linear optimization... possibility that a patient, Fred, suffers from a brain tumor. Further, the physician opts to make use of the CAT (Computer Aided Tomography) scan device... and examine the inside of Fred's head without exploratory surgery. The CAT scan machine works by projecting a finite number of X-rays of known
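The linear-optimization formulation of CAT reconstruction sketched in this excerpt can be illustrated concretely: minimize the L1 mismatch between modelled and measured projections subject to nonnegative pixel values. This is a generic sketch, not the report's actual formulation, and it assumes SciPy's `linprog` is available.

```python
import numpy as np
from scipy.optimize import linprog

def lp_reconstruct(A, b):
    """Recover nonnegative pixel values x by minimizing ||A @ x - b||_1,
    posed as a linear program with slack variables t >= |A x - b|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])         # minimize sum of slacks t
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])  # A x - b <= t and b - A x <= t
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
    return res.x[:n]
```

Posing the residual in the L1 norm (rather than least squares) is what keeps the problem linear, at the cost of doubling the constraint count.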
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demands on detector resolution can also be greatly relaxed. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information better. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, thereby verifying its stability, and we compare and analyze typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image even at low measurement rates.
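A minimal illustration of TV-regularized recovery follows: a smoothed 1-D analogue of the total variation model above, solved here by plain gradient descent rather than the augmented Lagrangian/alternating-direction scheme the authors use, so it shows the objective rather than their solver.

```python
import numpy as np

def tv_reconstruct(A, b, lam=0.05, step=0.01, n_iters=2000, eps=1e-4):
    """Gradient descent on 0.5*||A x - b||^2 + lam * sum sqrt((D x)^2 + eps),
    where D is the forward-difference operator (a smoothed 1-D total variation)."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # forward differences
    x = np.zeros(n)
    for _ in range(n_iters):
        r = A @ x - b                            # data-fidelity residual
        d = D @ x
        grad = A.T @ r + lam * (D.T @ (d / np.sqrt(d * d + eps)))
        x -= step * grad
    return x
```

The `eps` smoothing makes the TV term differentiable; ADMM-type solvers avoid that approximation and converge faster, which is the motivation for the augmented Lagrangian variant studied in the paper.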
Linear single-step image reconstruction in the presence of nonscattering regions.
Dehghani, H; Delpy, D T
2002-06-01
There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.
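A "linear single-step" reconstruction of the kind used above typically amounts to one regularized pseudo-inversion of the Jacobian of the forward (diffusion) model. A generic Tikhonov sketch, assuming the Jacobian `J` and boundary-data change `dy` have already been computed from the forward model (names illustrative, not the authors' code):

```python
import numpy as np

def linear_single_step(J, dy, lam=1e-2):
    """Single-step linear reconstruction: recover the optical-property update
    from boundary-data changes dy via Tikhonov-regularized least squares,
    solving (J^T J + lam I) dx = J^T dy."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)
```

Whether `J` is computed from the radiosity-diffusion model or the diffusion-only model is exactly the modelling choice whose consequences the abstract examines.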
NASA Astrophysics Data System (ADS)
Hei, Matthew A.; Budzien, Scott A.; Dymond, Kenneth F.; Nicholas, Andrew C.; Paxton, Larry J.; Schaefer, Robert K.; Groves, Keith M.
2017-07-01
We present the Volume Emission Rate Tomography (VERT) technique for inverting satellite-based, multisensor limb and nadir measurements of atmospheric ultraviolet emission to create whole-orbit reconstructions of atmospheric volume emission rate. The VERT approach is more general than previous ionospheric tomography methods because it can reconstruct the volume emission rate field irrespective of the particular excitation mechanisms (e.g., radiative recombination, photoelectron impact excitation, and energetic particle precipitation in auroras); physical models are then applied to interpret the airglow. The technique was developed and tested using data from the Special Sensor Ultraviolet Limb Imager and Special Sensor Ultraviolet Spectrographic Imager instruments aboard the Defense Meteorological Satellite Program F-18 spacecraft and planned for use with upcoming remote sensing missions. The technique incorporates several features to optimize the tomographic solutions, such as the use of a nonnegative algorithm (Richardson-Lucy, RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, capability to include extinction effects due to resonant scattering and absorption of the photons from the lines of sight, a pseudodiffusion-based regularization scheme implemented between iterations of the RL code to produce smoother solutions, and the capability to estimate error bars on the solutions. Tests using simulated atmospheric emissions verify that the technique performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. Lastly, we consider ionospheric nightglow and validate reconstructions of the nighttime electron density against Advanced Research Project Agency (ARPA) Long-range Tracking and Identification Radar (ALTAIR) incoherent scatter radar data.
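The nonnegative Richardson-Lucy iteration described above can be sketched in a few lines. This is a generic RL update for a linear Poisson-count model, not the VERT implementation; the system matrix and emission field below are illustrative stand-ins.

```python
import numpy as np

# Sketch of a Richardson-Lucy (RL) update for a linear Poisson-count
# model. A and x_true are illustrative, not the VERT geometry.
rng = np.random.default_rng(0)
A = rng.random((40, 10))           # hypothetical line-of-sight weights
x_true = rng.random(10) + 0.5      # hypothetical volume emission rates
y = A @ x_true                     # noiseless measurements for the demo

x = np.ones(10)                    # strictly positive starting estimate
sens = A.T @ np.ones(40)           # sensitivity (column sums of A)
for _ in range(500):
    ratio = y / (A @ x + 1e-12)    # measured / predicted counts
    x *= (A.T @ ratio) / sens      # multiplicative update: stays nonnegative
```

Because the update is multiplicative, a nonnegative starting estimate can never become negative, which is the property the abstract highlights for optical count data.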
Plenoptic PIV: Towards simple, robust 3D flow measurements
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Tim
2013-11-01
In this work, we report on the recent development of plenoptic PIV for the measurement of 3D flow fields. Plenoptic PIV uses a plenoptic camera to record the 4D light field generated by a volume of particles seeded into a flow field. Plenoptic cameras are primarily known for their ability to computationally refocus or change the perspective of an image after it has been acquired. In this work, we use tomographic algorithms to reconstruct a 3D volume of the particle field and apply a cross-correlation algorithm to a pair of particle volumes to determine the 3D/3C velocity field. The primary advantage of plenoptic PIV over multi-camera techniques is that it uses only a single camera, which greatly reduces the cost and simplifies a typical experimental arrangement. In addition, plenoptic PIV is capable of making measurements over dimensions on the order of 100 mm × 100 mm × 100 mm. The spatial resolution and accuracy of the technique are presented along with examples of 3D velocity data acquired in turbulent boundary layers and supersonic jets. This work was primarily supported through an AFOSR grant.
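The cross-correlation step can be illustrated with synthetic data. The sketch below recovers a known integer displacement between two 3D intensity volumes via FFT-based correlation; real plenoptic PIV would first reconstruct the particle volumes tomographically and refine the peak to subvoxel accuracy.

```python
import numpy as np

# Sketch of 3D cross-correlation for PIV: correlate two particle-
# intensity volumes and take the correlation peak as the displacement.
# The volumes are synthetic, with a known shift applied.
rng = np.random.default_rng(1)
vol_a = rng.random((32, 32, 32))
shift = (3, -2, 5)                        # known displacement for the demo
vol_b = np.roll(vol_a, shift, axis=(0, 1, 2))

# FFT-based circular cross-correlation
corr = np.fft.ifftn(np.fft.fftn(vol_a).conj() * np.fft.fftn(vol_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Map peak indices back to signed displacements
disp = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```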
Remote Sensing of Clouds for Solar Forecasting Applications
NASA Astrophysics Data System (ADS)
Mejia, Felipe
A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images, the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described, and the functions for radiance, Iλ(τc, SZA, SPA, PZA), and red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλmeas(SPA, PZA), in addition to RBRmeas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min [MH96a] method for overcast skies. τc values ranged from 0 to 80, with values over 80 capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients.
Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients (k), and liquid water path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the k error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing the distance up to 3 km (the domain length) improved the 3-D reconstruction of surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and the iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world test cases in the University of California San Diego (UCSD) solar testbed. Five UCSD sky imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increase in error with obstruction of the horizon in the field of view; as more of the horizon is obstructed, the error increases. If at least a 70° field of view is available to the camera, the accuracy is within 2% of that of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full-image values, while errors in k increased by 1%.
Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera. The image of the fifth camera (excluded from the reconstruction) was then simulated and found to have a 22.9% error compared to the ground truth.
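The ART named above is, at its core, Kaczmarz's row-action method: each ray's measured optical depth defines one linear equation, and the estimate is repeatedly projected onto each equation in turn. A minimal sketch with an illustrative ray-weight matrix (not an actual sky-imager geometry):

```python
import numpy as np

# Sketch of the Algebraic Reconstruction Technique (ART), i.e. cyclic
# Kaczmarz updates, solving A k = tau for extinction coefficients k
# from optical-depth measurements tau. A is illustrative.
rng = np.random.default_rng(2)
A = rng.random((60, 20))           # hypothetical ray/voxel path lengths
k_true = rng.random(20)            # hypothetical extinction coefficients
tau = A @ k_true                   # consistent optical depths

k = np.zeros(20)
for sweep in range(50):            # cycle repeatedly over the rays
    for i in range(A.shape[0]):
        a = A[i]
        k += (tau[i] - a @ k) / (a @ a) * a   # project onto i-th equation
```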
Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent
2013-05-01
This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt.45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
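Both algorithms owe their N log(N) cost to the fact that a spatially invariant operator is diagonalized by the discrete Fourier transform, so applying or inverting it reduces to FFTs plus pointwise operations. A minimal 1D sketch of that idea, using an illustrative circulant blur operator rather than the actual AO tomography operator or the FDPCG/DKF controllers themselves:

```python
import numpy as np

# Sketch of O(N log N) Fourier-domain inversion: a circulant
# (spatially invariant) operator is diagonal in the Fourier basis.
# The 1D blur kernel is an illustrative stand-in.
rng = np.random.default_rng(6)
n = 256
h = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 2.0)     # blur kernel
h /= h.sum()
x_true = rng.random(n)
y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)).real  # circular conv

# Inversion is pointwise division in the Fourier basis: O(n log n).
x_rec = np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)).real
```

In practice (noisy data, near-zero spectral values) the division is replaced by a regularized or preconditioned iterative solve, which is exactly where FDPCG-style preconditioning enters.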
Parallel Computing for the Computed-Tomography Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2008-01-01
This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.
Huang, Chih-Hao; Brunsvold, Michael A
2006-01-01
Maxillary sinusitis may develop from the extension of periodontal disease. In this case, reconstructed three-dimensional images from multidetector spiral computed tomographs were helpful in evaluating periodontal bony defects and their relationship with the maxillary sinus. A 42-year-old woman in good general health presented with a chronic deep periodontal pocket on the palatal and interproximal aspects of tooth #14. Probing depths of the tooth ranged from 2 to 9 mm, and it exhibited a Class 1 mobility. Radiographs revealed a close relationship between the root apex and the maxillary sinus. The patient's periodontal diagnosis was localized severe chronic periodontitis. Treatment of the tooth consisted of cause-related therapy, surgical exploration, and bone grafting. A very deep circumferential bony defect at the palatal root of tooth #14 was noted during surgery. After the operation, the wound healed without incident, but 10 days later, a maxillary sinusitis and periapical abscess developed. To control the infection, an evaluation of the sinus and alveolus using computed tomographs was performed, systemic antibiotics were prescribed, and endodontic treatment was initiated. Two weeks after surgical treatment, the infection was relieved with the help of antibiotics and endodontic treatment. Bilateral bony communications between the maxillary sinus and periodontal bony defect of maxillary first molars were shown on three-dimensional computed tomographs. The digitally reconstructed images added valuable information for evaluating the periodontal defects. Three-dimensional images from spiral computed tomographs (CT) aided in evaluating and treating the close relationship between maxillary sinus disease and adjacent periodontal defects.
GATE: a simulation toolkit for PET and SPECT.
Jan, S; Santin, G; Strul, D; Staelens, S; Assié, K; Autret, D; Avner, S; Barbier, R; Bardiès, M; Bloomfield, P M; Brasse, D; Breton, V; Bruyndonckx, P; Buvat, I; Chatziioannou, A F; Choi, Y; Chung, Y H; Comtat, C; Donnarieix, D; Ferrer, L; Glick, S J; Groiselle, C J; Guez, D; Honore, P F; Kerhoas-Cavata, S; Kirov, A S; Kohli, V; Koole, M; Krieguer, M; van der Laan, D J; Lamare, F; Largeron, G; Lartizien, C; Lazaro, D; Maas, M C; Maigne, L; Mayet, F; Melot, F; Merheb, C; Pennacchio, E; Perez, J; Pietrzyk, U; Rannou, F R; Rey, M; Schaart, D R; Schmidtlein, C R; Simon, L; Song, T Y; Vieira, J M; Visvikis, D; Van de Walle, R; Wieërs, E; Morel, C
2004-10-07
Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement, and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. This paper gives a detailed description of the design and development of GATE by the OpenGATE collaboration, whose continuing objective is to improve, document and validate GATE by simulating commercially available imaging systems for PET and SPECT. A large effort is also invested in the ability and the flexibility to model novel detection systems or systems still under design. A public release of GATE licensed under the GNU Lesser General Public License can be downloaded at http://www-lphe.epfl.ch/GATE/. Two benchmarks developed for PET and SPECT to test the installation of GATE and to serve as a tutorial for the users are presented. Extensive validation of the GATE simulation platform has been started, comparing simulations and measurements on commercially available acquisition systems. References to those results are listed. The future prospects towards the gridification of GATE and its extension to other domains such as dosimetry are also discussed.
NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems
NASA Astrophysics Data System (ADS)
Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek
2015-03-01
The most accurate technique to model the X- and gamma-radiation path through a numerically defined object is Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are ready for massively parallel implementation, e.g. on Graphics Processing Units (GPUs), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma-ray tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or nurbs-based objects, and the recording of any required quantities, such as path integrals, interaction sites, and deposited energies. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine, the computation time of a Monte Carlo simulation can be reduced from days to minutes.
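The path-integral style of ray tracing described above can be sketched with uniform sampling along a straight ray through a voxel grid. This is a simplification of exact voxel traversal (e.g. Siddon-style stepping), and the attenuation map is illustrative:

```python
import numpy as np

# Sketch of a ray-tracing projector: integrate attenuation along a
# straight line through a voxel grid by uniform midpoint sampling,
# then convert the path integral to a transmission probability.
mu = np.zeros((64, 64, 64))
mu[20:40, 20:40, 20:40] = 0.05      # hypothetical attenuation per voxel unit

def path_integral(mu, start, end, n_samples=512):
    start, end = np.asarray(start, float), np.asarray(end, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples     # midpoint samples
    pts = start + ts[:, None] * (end - start)
    idx = pts.astype(int)                             # containing voxel
    vals = mu[idx[:, 0], idx[:, 1], idx[:, 2]]
    seg = np.linalg.norm(end - start) / n_samples     # length per sample
    return vals.sum() * seg

line_int = path_integral(mu, (5, 30, 30), (60, 30, 30))
transmission = np.exp(-line_int)    # Beer-Lambert survival probability
```

The ray crosses 20 voxel units of attenuation 0.05, so the path integral is close to 1.0 and the transmitted fraction close to exp(-1).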
Effects of small variations of speed of sound in optoacoustic tomographic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel, E-mail: dr@tum.de
2014-07-15
Purpose: Speed of sound differences between the imaged object and the surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight-acoustic-rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis shows that the errors in the time-of-flight of the signals predicted by the straight-acoustic-rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of the effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound improves optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media.
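The straight-acoustic-rays model amounts to integrating the slowness 1/c along the source-detector line, which predicts the arrival-time shift relative to a uniform-medium assumption. A minimal sketch with an illustrative layered speed-of-sound map (values loosely water- and tissue-like, in mm/µs):

```python
import numpy as np

# Sketch of straight-ray time-of-flight: the arrival time of an
# optoacoustic signal is the integral of 1/c(x) along the straight
# line from source to detector. The layered c map is illustrative.
def time_of_flight(c_map, start, end, n_samples=2000):
    start, end = np.asarray(start, float), np.asarray(end, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = start + ts[:, None] * (end - start)
    idx = pts.astype(int)
    c_vals = c_map[idx[:, 0], idx[:, 1]]            # mm/us along the ray
    seg = np.linalg.norm(end - start) / n_samples   # mm per sample
    return np.sum(seg / c_vals)                     # microseconds

c = np.full((100, 100), 1.50)    # water-like coupling medium, mm/us
c[40:60, :] = 1.58               # hypothetical faster tissue layer

t_hetero = time_of_flight(c, (0, 50), (99, 50))
t_uniform = 99.0 / 1.50          # arrival time assuming a uniform medium
shift = t_uniform - t_hetero     # positive: the signal arrives earlier
```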
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, just as other iterative algorithms, is the long computation times typically associated with reconstruction. In order for an algorithm to gain clinical acceptance reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce the reconstruction time of the PICCS algorithm. The Compute Unified Device Architecture (CUDA) was used in this implementation.
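The PICCS idea can be sketched in 1D: the reconstruction is encouraged to have a sparse gradient both on its own and relative to the prior image, while fitting the measured data. Here a smoothed l1 norm and plain gradient descent stand in for the published constrained formulation (and for any GPU concerns); all sizes and weights are illustrative:

```python
import numpy as np

# 1D sketch of a PICCS-style objective: data fidelity plus sparsity of
# the gradient of the image and of its difference from a prior image.
# |.| is smoothed so plain gradient descent applies.
rng = np.random.default_rng(3)
n = 64
x_prior = np.zeros(n); x_prior[20:40] = 1.0   # prior (e.g. FBP) image
x_true = x_prior.copy(); x_true[30:34] = 1.5  # true image differs slightly
A = rng.random((48, n)) / n                   # underdetermined measurements
y = A @ x_true

alpha, lam, eps = 0.5, 0.01, 1e-2
sabs = lambda v: np.sum(np.sqrt(np.diff(v) ** 2 + eps))   # smoothed TV

def objective(x):
    return (0.5 * np.sum((A @ x - y) ** 2)
            + lam * (alpha * sabs(x - x_prior) + (1 - alpha) * sabs(x)))

def tv_grad(v):
    d = np.diff(v); s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(v); g[:-1] -= s; g[1:] += s
    return g

x = x_prior.copy()
start_obj = objective(x)
for _ in range(500):
    g = A.T @ (A @ x - y)
    g += lam * (alpha * tv_grad(x - x_prior) + (1 - alpha) * tv_grad(x))
    x -= 0.5 * g
```

The weight alpha trades similarity to the prior against plain gradient sparsity, which is the core PICCS design choice.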
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, termed the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
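The lp(2d) regularizer can be sketched directly: penalizing a pseudo-norm (p < 1) of the second-order difference favors smooth, temporally correlated signals. The demo below recovers a sinusoid (standing in for an ECG segment) from random projections; plain gradient descent on a smoothed pseudo-norm replaces the paper's sequential conjugate-gradient solver, and all sizes and weights are illustrative:

```python
import numpy as np

# Sketch of lp(2d)-regularized recovery: least-squares data term plus
# a smoothed lp pseudo-norm (p < 1) of the second-order difference.
rng = np.random.default_rng(4)
n, m, p, eps, lam = 128, 48, 0.5, 1e-2, 1e-3
x_true = np.sin(2 * np.pi * np.arange(n) / 32)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurements
y = Phi @ x_true

def lp2d(x):                        # smoothed lp(2d) pseudo-norm
    d = np.diff(x, 2)
    return np.sum((d * d + eps) ** (p / 2))

def lp2d_grad(x):
    d = np.diff(x, 2)
    w = p * d * (d * d + eps) ** (p / 2 - 1)
    g = np.zeros_like(x)
    g[:-2] += w; g[1:-1] -= 2 * w; g[2:] += w    # adjoint of diff(x, 2)
    return g

obj = lambda x: 0.5 * np.sum((Phi @ x - y) ** 2) + lam * lp2d(x)
x = np.zeros(n)
start = obj(x)
for _ in range(300):
    x -= 0.15 * (Phi.T @ (Phi @ x - y) + lam * lp2d_grad(x))
```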
Fang, Chi-hua; Kong, Deshuai; Wang, Xiaojun; Wang, Huaizhi; Xiang, Nan; Fan, Yingfang; Yang, Jian; Zhong, Shi Zheng
2014-04-01
This study aimed to investigate the clinical significance of 3-dimensional (3D) reconstruction of peripancreatic vessels for patients with suspected pancreatic cancer (PC). A total of 89 patients with PC were included; 60 patients randomly underwent computed tomographic angiography. Based on the findings of 3D reconstruction of peripancreatic vessels, the appropriate method for individualized tumor resection was determined. These patients were compared with 29 conventionally treated patients with PC. The rate of visualization was 100% for the great vessels around the pancreas. The detection rates for the anterior superior pancreaticoduodenal artery, posterior superior pancreaticoduodenal artery, anterior inferior pancreaticoduodenal artery, posterior inferior pancreaticoduodenal artery, dorsal pancreatic artery, superior marginal arterial branch of the pancreatic head, anterior superior pancreaticoduodenal vein, posterior superior pancreaticoduodenal vein, anterior inferior pancreaticoduodenal vein, and posterior inferior pancreaticoduodenal vein were 86.6%, 85.0%, 76.6%, 71.6%, 91.6%, 53.3%, 61.6%, 55.0%, 43.3%, and 51.6%, respectively. Forty-three patients who had undergone 3D reconstruction underwent surgery. Of the 29 conventionally treated patients, 19 underwent surgery. The operative time, blood loss, length of hospital stay, and complication incidence of the 43 patients were superior to those of the 19 conventionally treated patients. A peripancreatic vascular reconstruction can reveal the vascular anatomy, peripancreatic vascular variations, and tumor-induced vascular changes; the application of the simulation surgery platform could reduce surgical trauma and decrease operative time.
NASA Astrophysics Data System (ADS)
Chen, Y. W.; Wu, J.; Suppe, J.
2017-12-01
Global seismic tomography has provided new and increasingly higher resolution constraints on subducted lithospheric remnants in terms of their position, depth, and volume. In this study we aim to link tomographic slab anomalies in the mantle under South America to Andean geology using methods to unfold (i.e. structurally restore) slabs back to the Earth's surface and input them into globally consistent plate reconstructions (Wu et al., 2016). The Andean margin of South America has long been interpreted as a classic example of a continuous subduction system since the early Jurassic or later. However, significant gaps in Andean plate tectonic reconstructions exist due to missing or incomplete geology from extensive Nazca-South America plate convergence (i.e. >5000 km since 80 Ma). We mapped and unfolded the Nazca slab from global seismic tomography to produce a quantitative plate reconstruction of the Andes back to the late Cretaceous at 80 Ma. Our plate model predicts that the latest phase of Nazca subduction began in the late Cretaceous after a 100 to 80 Ma plate reorganization, which is supported by Andean geology that indicates a margin-wide compressional event in the mid-late Cretaceous (Tunik et al., 2010). Our Andean plate tectonic reconstructions predict that the Andean margin experienced periods of strike-slip/transtensional and even divergent plate tectonics between 80 and 55 Ma. This prediction is roughly consistent with the arc magmatism of northern Chile between 20 and 36°S, which resumed at 80 Ma after a magmatic gap. Our model indicates the Andean margin only became fully convergent after 55 Ma. We provide additional constraints on pre-subduction Nazca plate paleogeography by extracting P-wave velocity perturbations within our mapped slab surfaces following Wu et al. (2016).
We identified localized slow anomalies within our mapped Nazca slab that apparently show the size and position of the subducted Nazca ridge, Carnegie ridge, and the hypothesized Inca plateau within the Nazca slab. These intra-slab velocity anomalies provide the most complete tomographic evidence to date in support of the classic, but still controversial, hypothesis of subducted, relatively buoyant oceanic lithosphere features along the Andean margin.
Longitudinal Differences of Ionospheric Vertical Density Distribution and Equatorial Electrodynamics
NASA Technical Reports Server (NTRS)
Yizengaw, E.; Zesta, E.; Moldwin, M. B.; Damtie, B.; Mebrahtu, A.; Valledares, C.E.; Pfaff, R. F.
2012-01-01
Accurate estimation of the global vertical distribution of ionospheric and plasmaspheric density as a function of local time, season, and magnetic activity is required to improve the operation of space-based navigation and communication systems. The vertical density distribution, especially at low and equatorial latitudes, is governed by the equatorial electrodynamics that produces a vertical driving force. The vertical structure of the equatorial density distribution can be observed by using tomographic reconstruction techniques on ground-based global positioning system (GPS) total electron content (TEC). Similarly, the vertical drift, which is one of the driving mechanisms that govern equatorial electrodynamics and strongly affect the structure and dynamics of the ionosphere in the low/midlatitude region, can be estimated using ground magnetometer observations. We present tomographically reconstructed density distributions and the corresponding vertical drifts at two different longitudes: the east African and west South American sectors. Chains of GPS stations in the east African and west South American longitudinal sectors, covering the equatorial anomaly region at meridians of approximately 37 deg and 290 deg E, respectively, are used to reconstruct the vertical density distribution. Similarly, magnetometer sites of the African Meridian B-field Education and Research (AMBER) network and INTERMAGNET for the east African sector, and of the South American Meridional B-field Array (SAMBA) and Low Latitude Ionospheric Sensor Network (LISN) for the west South American sector, are used to estimate the vertical drift velocity at the two longitudes. The comparison between the reconstructed density profiles and those measured by the Jicamarca Incoherent Scatter Radar (ISR) shows excellent agreement, demonstrating the usefulness of the tomographic reconstruction technique in providing the vertical density distribution at different longitudes.
Similarly, the comparison between the magnetometer-estimated vertical drift and other independent drift observations, such as those from the VEFI instrument onboard the Communication/Navigation Outage Forecasting System (C/NOFS) satellite and the JULIA radar, is equally promising. The observations at different longitudes suggest that the vertical drift velocities and the vertical density distribution have significant longitudinal differences; in particular, the equatorial anomaly peaks expand to higher latitudes in the American sector more than in the African sector, indicating that the vertical drift in the American sector is stronger than in the African sector.
NASA Astrophysics Data System (ADS)
Huang, Shi-Hao; Wang, Shiang-Jiu; Tseng, Snow H.
2015-03-01
Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of the internal microstructure of biological tissue. We use the Finite-Difference Time-Domain (FDTD) method to analyze the data acquired by OCT, which can help us reconstruct the refractive index of the biological tissue. We calculate the refractive-index tomography and try to match the simulation with the data acquired by OCT. Specifically, we try to reconstruct the structure of melanin, which has complex refractive indices and is the key component of the human pigment system. The results indicate that better reconstruction can be achieved for a homogeneous sample, whereas the reconstruction is degraded for samples with fine structure or complex interfaces. The simulated reconstruction shows structures of melanin that may be useful for biomedical optics applications.
CMT for biomedical and other applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spanne, P.
This session includes two presentations describing applications for x-ray tomography using synchrotron radiation for biomedical uses and fluid flow modeling, and outlines advantages for using monoenergetic x-rays. Contrast mechanisms are briefly described and several graphs of absorbed doses and scattering of x-rays are included. Also presented are schematic diagrams of computerized tomographic instrumentation with camera head. A brief description of goals for a real time tomographic system and expected improvements to the system are described. Color photomicrographs of the Berea Sandstone and human bone are provided, as well as a 3-D microtomographic reconstruction of a human vertebra sample.
Low-contrast lesion detection in tomosynthetic breast imaging using a realistic breast phantom
NASA Astrophysics Data System (ADS)
Zhou, Lili; Oldan, Jorge; Fisher, Paul; Gindi, Gene
2006-03-01
Tomosynthesis mammography is a potentially valuable technique for detection of breast cancer. In this simulation study, we investigate the efficacy of three different tomographic reconstruction methods, EM, SART and Backprojection, in the context of an especially difficult mammographic detection task. The task is the detection of a very low-contrast mass embedded in very dense fibro-glandular tissue - a clinically useful task for which tomosynthesis may be well suited. The project uses an anatomically realistic 3D digital breast phantom whose normal anatomic variability limits lesion conspicuity. In order to capture anatomical object variability, we generate an ensemble of phantoms, each of which comprises random instances of various breast structures. We construct medium-sized 3D breast phantoms which model random instances of ductal structures, fibrous connective tissue, Cooper's ligaments and power-law structural noise for small-scale object variability. Random instances of 7-8 mm irregular masses are generated by a 3D random walk algorithm and placed in very dense fibro-glandular tissue. Several other components of the breast phantom are held fixed, i.e. not randomly generated. These include the fixed breast shape and size, nipple structure, fixed lesion location, and a pectoralis muscle. We collect low-dose data using an isocentric tomosynthetic geometry at 11 angles over 50 degrees and add Poisson noise. The data is reconstructed using the three algorithms. Reconstructed slices through the center of the lesion are presented to human observers in a 2AFC (two-alternative forced-choice) test that measures detectability by computing AUC (area under the ROC curve). The data collected in each simulation includes two sources of variability: that due to the anatomical variability of the phantom and that due to the Poisson data noise. We found that for this difficult task the AUC value for EM (0.89) was greater than that for SART (0.83) and Backprojection (0.66).
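The 2AFC/AUC link used above is the Mann-Whitney identity: the proportion of correct responses in a two-alternative forced-choice test estimates P(signal score > noise score), which equals the area under the ROC curve. A sketch with synthetic observer scores:

```python
import numpy as np

# Sketch of empirical AUC via the Mann-Whitney statistic. Observer
# scores are synthetic unit-variance normals with separation d' = 1.
rng = np.random.default_rng(5)
signal = rng.normal(1.0, 1.0, 2000)   # scores for lesion-present images
noise = rng.normal(0.0, 1.0, 2000)    # scores for lesion-absent images

diff = signal[:, None] - noise[None, :]
auc = np.mean((diff > 0) + 0.5 * (diff == 0))   # ties counted as 1/2
# For two unit-variance normals separated by d', AUC = Phi(d'/sqrt(2)).
```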
Kaemmerer, Nadine; Brand, Michael; Hammon, Matthias; May, Matthias; Wuest, Wolfgang; Krauss, Bernhard; Uder, Michael; Lell, Michael M
2016-10-01
Dual-energy computed tomographic angiography (DE-CTA) has been demonstrated to improve the visualization of the head and neck vessels. The aim of this study was to test the potential of split-filter single-source dual-energy CT to automatically remove bone from the final CTA data set. Dual-energy CTA was performed in 50 consecutive patients to evaluate the supra-aortic arteries, either to grade carotid artery stenosis or to rule out traumatic dissections. Dual-energy CTA was performed on a 128-slice single-source CT system equipped with a special filter array to separate the 120-kV spectrum into a high- and a low-energy spectrum for DE-based automated bone removal. Image quality of fully automated bone suppression and subsequent manual optimization was evaluated by 2 radiologists on maximum intensity projections using a 4-grade scoring system. The effect of image reconstruction with an iterative metal artifact reduction algorithm on DE postprocessing was tested using a 3-grade scoring system, and the time demand for each postprocessing step was measured. Two patients were excluded due to insufficient arterial contrast enhancement; in the remaining 48 patients, automated bone removal could be performed successfully. The addition of the iterative metal artifact reduction algorithm improved image quality in 58.3% of the cases. After manual optimization, DE-CTA image quality was rated excellent in 7, good in 29, and moderate in 10 patients. Interobserver agreement was high (κ = 0.85). Stenosis grading was not influenced by using DE-CTA with bone removal as compared with the original CTA. The time demand for DE image reconstruction was significantly higher than for single-energy reconstruction (42.1 vs 20.9 seconds). Our results suggest that bone removal in DE-CTA of the head and neck vessels with a single-source CT is feasible and can be performed within an acceptable time and with moderate user interaction.
Multislice spiral CT simulator for dynamic cardiopulmonary studies
NASA Astrophysics Data System (ADS)
De Francesco, Silvia; Ferreira da Silva, Augusto M.
2002-04-01
We've developed a multi-slice spiral CT simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardio-pulmonary studies (with single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows for both conventional and spiral scanning modes and includes a model of noise in the acquisition process. In the case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, for both single- and multi-slice acquisitions); the section is then reconstructed through FBP. The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The simulating environment allows us to characterize the nature of this distortion qualitatively and quantitatively (using, for example, Herman's measures). Much of our work focuses on the determination of adequate temporal sampling and sinogram regularization techniques. At present, the simulator is limited to multi-slice tomographs; extension to cone-beam or area detectors is planned as the next step of development.
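The 360LI mode named above reduces, per projection angle, to linear interpolation between the two measurements taken one rotation apart that straddle the reconstruction plane. The function below is a generic sketch of that idea, not the simulator's actual code; argument names are assumptions.

```python
import numpy as np

def interp_360li(proj_values, z_positions, z_recon):
    """360LI sketch: for one projection angle, linearly interpolate between
    the two measurements (one detector rotation apart) whose longitudinal
    positions straddle the reconstruction plane z_recon."""
    z = np.asarray(z_positions, float)   # z position of each rotation's sample
    v = np.asarray(proj_values, float)   # projection value at each z
    i = np.searchsorted(z, z_recon) - 1  # index of the sample just below z_recon
    i = np.clip(i, 0, len(z) - 2)
    w = (z_recon - z[i]) / (z[i + 1] - z[i])
    return (1.0 - w) * v[i] + w * v[i + 1]
```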
Coe, Ryan L; Seibel, Eric J
2012-12-01
We present a method for modeling image formation in optical projection tomographic microscopy (OPTM) using high numerical aperture (NA) condensers and objectives. Similar to techniques used in computed tomography, OPTM produces three-dimensional, reconstructed images of single cells from two-dimensional projections. The model is capable of simulating axial scanning of a microscope objective to produce projections, which are reconstructed using filtered backprojection. Simulation of optical scattering in transmission optical microscopy is designed to analyze all aspects of OPTM image formation, such as degree of specimen staining, refractive-index matching, and objective scanning. In this preliminary work, a set of simulations is performed to examine the effect of changing the condenser NA, objective scan range, and complex refractive index on the final reconstruction of a microshell with an outer radius of 1.5 μm and an inner radius of 0.9 μm. The model lays the groundwork for optimizing OPTM imaging parameters and triaging efforts to further improve the overall system design. As the model is expanded in the future, it will be used to simulate a more realistic cell, which could lead to even greater impact.
Ettinger, Kyle S; Alexander, Amy E; Arce, Kevin
2018-04-10
Virtual surgical planning (VSP), computer-aided design and computer-aided modeling, and 3-dimensional printing are 3 distinct technologies that have become increasingly used in head and neck oncology and microvascular reconstruction. Although each of these technologies has long been used for treatment planning in other surgical disciplines, such as craniofacial surgery, trauma surgery, temporomandibular joint surgery, and orthognathic surgery, its widespread use in head and neck reconstructive surgery remains a much more recent event. In response to the growing trend of VSP being used for the planning of fibular free flaps in head and neck reconstruction, some surgeons have questioned the technology's implementation based on its inadequacy in addressing other reconstructive considerations beyond hard tissue anatomy. Detractors of VSP for head and neck reconstruction highlight its lack of capability in accounting for multiple reconstructive factors, such as recipient vessel selection, vascular pedicle reach, need for dead space obliteration, and skin paddle perforator location. It is with this premise in mind that the authors report on a straightforward technique for anatomically localizing peroneal artery perforators during VSP for osteocutaneous fibular free flaps in which bone and a soft tissue skin paddle are required for ablative reconstruction. The technique allows for anatomic perforator localization during the VSP session based solely on data existent at preoperative computed tomographic angiography (CTA); it does not require any modifications to preoperative clinical workflows. It is the authors' presumption that many surgeons in the field are unaware of this planning capability within the context of modern VSP for head and neck reconstruction. 
The primary purpose of this report is to introduce and further familiarize surgeons with the technique of CTA perforator localization as a method of improving intraoperative fidelity for VSP of osteocutaneous fibular free flaps. Copyright © 2018. Published by Elsevier Inc.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that are validated for images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.
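One of the compared indices, the global inhomogeneity (GI) index, has a simple closed form: the summed absolute deviation of each lung pixel's tidal impedance change from the lung-region median, normalized by the total tidal impedance change. A minimal sketch:

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI index: sum(|TV_i - median(TV_lung)|) / sum(TV_i) over the
    identified lung region; 0 means perfectly homogeneous ventilation."""
    tv = np.asarray(tidal_image, float)[np.asarray(lung_mask, bool)]
    return np.sum(np.abs(tv - np.median(tv))) / np.sum(tv)
```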
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove the out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
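The two stages above can be caricatured in a few lines: shift-and-add refocusing of sub-aperture images followed by an intensity threshold that discards blurred, out-of-focus particle images. This is a minimal sketch under strong assumptions (integer-pixel shifts, a simple global threshold), not the authors' implementation; `offsets` and `alpha` are assumed names.

```python
import numpy as np

def refocus(subaperture_images, offsets, alpha):
    """Shift-and-add computational refocusing: each sub-aperture image is
    shifted in proportion to its aperture offset and the focal parameter
    alpha, then averaged; in-focus particles add coherently."""
    acc = np.zeros_like(subaperture_images[0], dtype=float)
    for img, (du, dv) in zip(subaperture_images, offsets):
        acc += np.roll(img, (int(round(alpha * du)), int(round(alpha * dv))),
                       axis=(0, 1))
    return acc / len(subaperture_images)

def filter_out_of_focus(volume_slice, thresh_frac=0.5):
    """Post-reconstruction filter: zero intensities below a fraction of the
    slice maximum, removing smeared out-of-focus contributions."""
    out = volume_slice.copy()
    out[out < thresh_frac * out.max()] = 0.0
    return out
```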
Rapidly converging multigrid reconstruction of cone-beam tomographic data
NASA Astrophysics Data System (ADS)
Myers, Glenn R.; Kingston, Andrew M.; Latham, Shane J.; Recur, Benoit; Li, Thomas; Turner, Michael L.; Beeching, Levi; Sheppard, Adrian P.
2016-10-01
In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the "space-filling" source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
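The PLS variant with a quadratic smoothness penalty admits a compact sketch: gradient descent on a discretized cost with a finite-difference roughness operator. The 1D operator and step-size rule below are illustrative; the authors' method additionally models the transducer impulse responses, which is omitted here.

```python
import numpy as np

def pls_quadratic(H, y, beta, n_iter=200, step=None):
    """Penalized least-squares sketch: minimize
    0.5*||H x - y||^2 + 0.5*beta*||D x||^2 by gradient descent,
    where D is a first-order finite-difference (roughness) operator."""
    n = H.shape[1]
    D = np.eye(n) - np.eye(n, k=1)   # simple 1D roughness penalty
    D[-1, :] = 0.0                   # no wrap-around difference
    A = H.T @ H + beta * (D.T @ D)
    if step is None:
        step = 1.0 / np.linalg.eigvalsh(A).max()  # guarantees convergence
    x = np.zeros(n)
    b = H.T @ y
    for _ in range(n_iter):
        x -= step * (A @ x - b)      # gradient of the penalized cost
    return x
```

With beta = 0 this reduces to ordinary least squares; increasing beta trades data fidelity for smoothness.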
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for noise performance and spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm.
MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
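The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement criterion; a point passes if the minimum combined metric is at most 1. A one-dimensional sketch of the global 2%/2 mm criterion (a brute-force search over the evaluated profile, not a clinical implementation):

```python
import numpy as np

def gamma_1d(ref, evaluated, dx, dose_tol_frac=0.02, dist_tol=2.0):
    """1D gamma analysis sketch: for each reference point, find the minimum
    over the evaluated profile of
    sqrt((dose diff / dose tol)^2 + (distance / distance tol)^2).
    The dose tolerance is global (a fraction of the reference maximum)."""
    ref = np.asarray(ref, float)
    ev = np.asarray(evaluated, float)
    dd = dose_tol_frac * ref.max()          # global dose criterion, e.g. 2%
    x = np.arange(len(ev)) * dx             # evaluated sample positions (mm)
    gammas = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(np.arange(len(ref)) * dx, ref)):
        g2 = ((ev - di) / dd) ** 2 + ((x - xi) / dist_tol) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the quoted pass rates)."""
    return np.mean(gammas <= 1.0)
```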
Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan
2013-10-01
To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for clinical routine applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated by using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementation of the Chirp transform algorithm-DrFT on the graphics processing unit can thus efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
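The chirp transform evaluates DFT-like sums at equally spaced output points via Bluestein's identity nk = (n² + k² − (k−n)²)/2, which turns the transform into a convolution computable with three FFTs. A generic NumPy sketch (not the authors' GPU implementation; recent SciPy versions also provide a chirp z-transform in `scipy.signal`):

```python
import numpy as np

def czt(x, m, w, a=1.0):
    """Bluestein chirp z-transform: evaluate X_k = sum_n x_n a^{-n} w^{nk}
    for k = 0..m-1 via FFT-based convolution, so samples on lines with
    arbitrary equal spacing can be transformed without interpolation."""
    x = np.asarray(x, complex)
    n = len(x)
    k = np.arange(max(n, m))
    chirp = w ** (k ** 2 / 2.0)                       # w^{k^2/2}
    nfft = 1 << int(np.ceil(np.log2(n + m - 1)))
    xa = x * a ** (-np.arange(n)) * chirp[:n]         # pre-multiply by chirp
    # kernel w^{-q^2/2} for q = -(n-1) .. m-1
    v = 1.0 / np.r_[chirp[n - 1:0:-1], chirp[:m]]
    conv = np.fft.ifft(np.fft.fft(xa, nfft) * np.fft.fft(v, nfft))
    return conv[n - 1:n - 1 + m] * chirp[:m]          # post-multiply by chirp
```

With w = exp(−2πi/N) and a = 1 this reproduces the ordinary DFT, which makes a convenient correctness check.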
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
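The kernel spatial basis can be illustrated compactly: MR-derived feature vectors define a Gaussian affinity between pixels, sparsified by keeping only nearest neighbors and row-normalized, so the PET image is modelled as the kernel matrix applied to a coefficient image. This follows the general kernel-method recipe; the feature choice, sigma, and neighbor count below are illustrative assumptions.

```python
import numpy as np

def kernel_matrix(features, sigma=1.0, n_neighbors=5):
    """Kernel spatial basis sketch: K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2))
    on MR feature vectors, kept sparse via k-nearest neighbors and
    row-normalized; the PET image is then modelled as K @ coefficients."""
    f = np.asarray(features, float)
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # keep only the n_neighbors largest entries per row
    drop = np.argsort(K, axis=1)[:, :-n_neighbors]
    np.put_along_axis(K, drop, 0.0, axis=1)
    return K / K.sum(axis=1, keepdims=True)
```

Pixels with similar MR features share weight, which is what transfers MR boundary information into the PET reconstruction.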
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, which is then applied to small-animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
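For a linearized forward model A, the Lq-Lp cost and its gradient take a simple closed form; the smoothing constant `eps` below (needed to differentiate through |·| when p ≤ 1) and the linear model are illustrative assumptions, not the paper's spherical-harmonics forward model.

```python
import numpy as np

def lq_lp_cost_grad(x, A, b, q=1.5, p=1.0, lam=1e-3, eps=1e-8):
    """Lq discrepancy / Lp regularization sketch for a linear model A:
    cost = (1/q) sum |Ax - b|^q + lam * (1/p) sum |x|^p,
    with |t| smoothed as sqrt(t^2 + eps) so the gradient exists at 0."""
    r = A @ x - b
    rq = np.sqrt(r ** 2 + eps)   # smoothed |residual|
    xq = np.sqrt(x ** 2 + eps)   # smoothed |x|
    cost = (rq ** q).sum() / q + lam * (xq ** p).sum() / p
    grad = A.T @ (rq ** (q - 2) * r) + lam * xq ** (p - 2) * x
    return cost, grad
```

A finite-difference check confirms the analytic gradient, which is the ingredient a quasi-Newton solver such as lm-BFGS needs.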
Tomographic separation of composite spectra. 2: The components of 29 UW Canis Majoris
NASA Technical Reports Server (NTRS)
Bagnuolo, William G., Jr.; Gies, Douglas R.; Hahula, Michael E.; Wiemker, Rafael; Wiggs, Michael S.
1994-01-01
We have analyzed the UV photospheric lines of 29 CMa, a 4.39 day period, double-lined O-type spectroscopic binary. Archival data from the International Ultraviolet Explorer (IUE) (28 spectra well distributed in orbital phase) were analyzed with several techniques. We find that the mass ratio is q = 1.20 +/- 0.16 (secondary more massive) based on three independent arguments. A tomography algorithm was used to produce the separate spectra of the two stars in six UV spectral regions. The MK spectral classifications of the primary and secondary, O7.5-8 Iab and O9.7 Ib, respectively, were estimated through a comparison of UV line ratios with those in spectral standard stars. The flux ratio of the stars in the UV is 0.36 +/- 0.07 (primary brighter). The primary has a strong P Cygni N IV wavelength 1718 feature, indicating a strong stellar wind. We also present tomographic reconstructions of visual spectral data in the range 4300-4950 A, based on seven observations at differing orbital phases, which confirm the UV classifications and show that the primary is an Of star. From the spectral classifications, we estimate the temperatures of the stars to be 33,750 K and 29,000 K for the primary and secondary, respectively. We then fit visual and UV light curves and show that reasonably good fits can be obtained with these temperatures, a semicontact configuration, an inclination of 74 deg +/- 2 deg, and an intensity ratio r less than 0.5.
NASA Astrophysics Data System (ADS)
Edwards, A. W.; Blackler, K.; Gill, R. D.; van der Goot, E.; Holm, J.
1990-10-01
Based upon the experience gained with the present soft x-ray data acquisition system, new techniques are being developed which make extensive use of digital signal processors (DSPs). Digital filters make 13 further frequencies available in real time from the input sampling frequency of 200 kHz. In parallel, various algorithms running on further DSPs generate triggers in response to a range of events in the plasma. The sawtooth crash can be detected, for example, with a delay of only 50 μs from the onset of the collapse. The trigger processor interacts with the digital filter boards to ensure data of the appropriate frequency is recorded throughout a plasma discharge. An independent link is used to pass 780 and 24 Hz filtered data to a network of transputers. A full tomographic inversion and display of the 24 Hz data is carried out in real time using this 15 transputer array. The 780 Hz data are stored for immediate detailed playback following the pulse. Such a system could considerably improve the quality of present plasma diagnostic data which is, in general, sampled at one fixed frequency throughout a discharge. Further, it should provide valuable information towards designing diagnostic data acquisition systems for future long pulse operation machines when a high degree of real-time processing will be required, while retaining the ability to detect, record, and analyze events of interest within such long plasma discharges.
Kashani, Amir H.; Kirkman, Erlinda; Martin, Gabriel; Humayun, Mark S.
2011-01-01
Diagnosis of retinal vascular diseases depends on ophthalmoscopic findings that most often occur after severe visual loss (as in vein occlusions) or chronic changes that are irreversible (as in diabetic retinopathy). Despite recent advances, diagnostic imaging currently reveals very little about the vascular function and local oxygen delivery. One potentially useful measure of vascular function is measurement of hemoglobin oxygen content. In this paper, we demonstrate a novel method of accurately, rapidly and easily measuring oxygen saturation within retinal vessels using in vivo imaging spectroscopy. This method uses a commercially available fundus camera coupled to two-dimensional diffracting optics that scatter the incident light onto a focal plane array in a calibrated pattern. Computed tomographic algorithms are used to reconstruct the diffracted spectral patterns into wavelength components of the original image. In this paper the spectral components of oxy- and deoxyhemoglobin are analyzed from the vessels within the image. Up to 76 spectral measurements can be made in only a few milliseconds and used to quantify the oxygen saturation within the retinal vessels over a 10–15 degree field. The method described here can acquire 10-fold more spectral data in much less time than conventional oximetry systems (while utilizing the commonly accepted fundus camera platform). Application of this method to animal models of retinal vascular disease and clinical subjects will provide useful and novel information about retinal vascular disease and physiology. PMID:21931729
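The oxygen-saturation estimate behind such multispectral oximetry can be sketched as a Beer-Lambert least-squares fit: measured optical densities at several wavelengths are modelled as a linear mix of oxy- and deoxyhemoglobin extinction spectra, and SO2 is the oxyhemoglobin fraction. Scattering is ignored here, and any extinction values used for testing are illustrative, not tabulated physiological data.

```python
import numpy as np

def oxygen_saturation(od, eps_hbo2, eps_hb):
    """Least-squares oximetry sketch: fit OD(lambda) =
    c_HbO2 * eps_HbO2(lambda) + c_Hb * eps_Hb(lambda), then
    SO2 = c_HbO2 / (c_HbO2 + c_Hb)."""
    E = np.column_stack([eps_hbo2, eps_hb])            # extinction spectra
    c, *_ = np.linalg.lstsq(E, np.asarray(od, float), rcond=None)
    return c[0] / (c[0] + c[1])
```

With the 76 spectral measurements the paper describes, the same fit simply becomes better conditioned.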
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan
2016-04-01
An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. On the other hand, as water vapor causes a delay in the microwave signal propagation within the atmosphere, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). However, due to its high variability in time and space, the atmospheric water vapor distribution is difficult to model. Since GNSS meteorology was introduced about twenty years ago, it has increasingly been used as a geodetic technique to generate maps of 2D Precipitable Water Vapor (PWV). Moreover, several approaches for 3D tomographic water vapor reconstruction from GNSS-based estimates using simple least-squares adjustment were presented. In this poster, we present an innovative and sophisticated Compressive Sensing (CS) concept for sparsity-driven tomographic reconstruction of 3D atmospheric wet refractivity fields using data from GNSS and InSAR. The 2D zenith wet delay (ZWD) estimates are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data of 100 points, as if corresponding to 100 GNSS sites within an area of 100 km × 100 km in the test region of the Upper Rhine Graben. These synthetic ZWD values can be mapped into different elevation and azimuth angles. Using the cosine transform, a sparse representation of the wet refractivity field is obtained. In contrast to existing tomographic approaches, we exploit sparsity as a prior for the regularization of the underdetermined inverse system. The new aspects of this work include both the combination of GNSS and InSAR data for water vapor tomography and the sophisticated CS estimation. 
The accuracy of the estimated 3D water vapor field is determined by comparing slant integrated wet delays computed from the estimated wet refractivities with real GNSS wet delay estimates. This comparison is performed along different elevation and azimuth angles.
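The sparsity-driven reconstruction idea can be sketched as iterative soft thresholding (ISTA) on cosine-transform coefficients: the refractivity field is x = C s with s sparse, and s is found by minimizing a least-squares data term plus an L1 penalty. This is a generic CS sketch, not the poster's estimator; the operator sizes and lam are illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its columns are the sparsifying basis."""
    k, i = np.meshgrid(np.arange(n), np.arange(n))   # i: sample, k: frequency
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[:, 0] /= np.sqrt(2.0)
    return C

def ista_reconstruct(A, b, lam=0.01, n_iter=500):
    """CS sketch: field x = C s with s sparse in the cosine basis; solve
    min ||A C s - b||^2 + lam * ||s||_1 by iterative soft thresholding."""
    C = dct_matrix(A.shape[1])
    M = A @ C
    t = 1.0 / np.linalg.norm(M, 2) ** 2              # step size <= 1/L
    s = np.zeros(M.shape[1])
    for _ in range(n_iter):
        g = s - t * (M.T @ (M @ s - b))              # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # shrinkage
    return C @ s
```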
Methods for coherent lensless imaging and X-ray wavefront measurements
NASA Astrophysics Data System (ADS)
Guizar Sicairos, Manuel
X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) for its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, using short x-ray pulses, it enables the capability to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm is capable of estimating the object, and at the same time refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in-situ and at-wavelength and has several practical and algorithmic advantages over conventional focused beam measurement techniques. 
A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. 
We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
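The idea of phase retrieval with transverse translations can be illustrated with a minimal, dependency-free sketch that enforces measured far-field moduli over overlapping scan positions. This is a simplified serial-projection variant in the spirit of ptychographic iterative engines, not the gradient-based optimization developed in the thesis; the probe is assumed known and flat, the data are noiseless, and the object, scan grid, and step size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 32, 64                     # probe/patch size, object size (illustrative)
obj = np.exp(1j * 0.5 * rng.standard_normal((M, M)))  # ground-truth phase object
probe = np.ones((N, N), dtype=complex)                # known, flat illumination

# overlapping scan positions (top-left corners of the illuminated patch)
positions = [(r, c) for r in range(0, M - N, 8) for c in range(0, M - N, 8)]

def patch(a, p):
    r, c = p
    return a[r:r + N, c:c + N]

# synthetic measurements: far-field intensities for each translation
meas = [np.abs(np.fft.fft2(probe * patch(obj, p))) ** 2 for p in positions]

def residual(e):
    """Relative mismatch between measured and modelled far-field moduli."""
    num = sum(np.linalg.norm(np.sqrt(I) - np.abs(np.fft.fft2(probe * patch(e, p))))
              for p, I in zip(positions, meas))
    den = sum(np.linalg.norm(np.sqrt(I)) for I in meas)
    return num / den

est = np.ones((M, M), dtype=complex)   # flat starting guess
r0 = residual(est)
for sweep in range(50):
    for p, I in zip(positions, meas):
        r, c = p
        exit_wave = probe * est[r:r + N, c:c + N]
        F = np.fft.fft2(exit_wave)
        F = np.sqrt(I) * np.exp(1j * np.angle(F))   # enforce measured modulus
        new_exit = np.fft.ifft2(F)
        # PIE-style object update; with a unit probe this replaces the patch
        est[r:r + N, c:c + N] += (np.conj(probe) / (np.abs(probe) ** 2).max()
                                  * (new_exit - exit_wave))
r1 = residual(est)
```

Because the scan positions overlap, each patch update is corrected by its neighbours on the next sweep; this diversity of measurements is what resolves ambiguities that defeat single-pattern phase retrieval.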
Large-scale tomographic particle image velocimetry using helium-filled soap bubbles
NASA Astrophysics Data System (ADS)
Kühn, Matthias; Ehrenfried, Klaus; Bosbach, Johannes; Wagner, Claus
2011-04-01
To measure large-scale flow structures in air, a tomographic particle image velocimetry (tomographic PIV) system for measurement volumes of the order of one cubic metre is developed, which employs helium-filled soap bubbles (HFSBs) as tracer particles. The technique has several specific characteristics compared to most conventional tomographic PIV systems, which are usually applied to small measurement volumes. One of them is the occurrence of spot lights (glare points) on the HFSB tracers, which slightly change their position when the direction of observation is altered. Further issues are the large particle-to-voxel ratio and the short focal length of the camera lenses used, which results in a noticeable variation of the magnification factor in the volume depth direction. Taking the specific characteristics of the HFSBs into account, the feasibility of our large-scale tomographic PIV system is demonstrated by showing that the calibration errors can be reduced to 0.1 pixels as required. Further, an accurate and fast implementation of the multiplicative algebraic reconstruction technique (MART), which calculates the weighting coefficients when needed instead of storing them, is discussed. The tomographic PIV system is applied to measure forced convection in a convection cell at a Reynolds number of 530 based on the inlet channel height and the mean inlet velocity. The size of the measurement volume and the interrogation volumes amount to 750 mm × 450 mm × 165 mm and 48 mm × 48 mm × 24 mm, respectively. Validation of the tomographic PIV technique employing HFSBs is further provided by comparing profiles of the mean velocity and of the root-mean-square velocity fluctuations to respective planar PIV data.
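The memory-saving idea of computing MART weighting coefficients when needed, rather than storing a (rays × voxels) weight matrix, can be caricatured in a few lines. The 2D grid, binary footprints obtained by dense ray sampling, and relaxation factor below are illustrative stand-ins for the exact intersection-length weights a production MART code would use.

```python
import numpy as np

n = 16                                   # reconstruction grid is n x n voxels
true = np.zeros((n, n))
true[5:9, 6:11] = 1.0                    # object to reconstruct (illustrative)

angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])

def ray_footprint(theta, offset):
    """Voxels crossed by one parallel ray, recomputed on demand (never stored).
    Dense sampling with binary weights stands in for exact intersection lengths."""
    c, s = np.cos(theta), np.sin(theta)
    hits = set()
    for t in np.linspace(-n, n, 4 * n):
        x = -offset * s + t * c + n / 2
        y = offset * c + t * s + n / 2
        i, j = int(np.floor(y)), int(np.floor(x))
        if 0 <= i < n and 0 <= j < n:
            hits.add((i, j))
    return hits

rays = [(th, off) for th in angles for off in np.arange(-n / 2 + 0.5, n / 2, 1.0)]
meas = [sum(true[i, j] for i, j in ray_footprint(th, off)) for th, off in rays]

def data_residual(x):
    return sum(abs(y - sum(x[i, j] for i, j in ray_footprint(th, off)))
               for (th, off), y in zip(rays, meas))

recon = np.ones((n, n))
r0 = data_residual(recon)
mu = 0.5                                  # MART relaxation factor
for sweep in range(30):
    for (th, off), y in zip(rays, meas):
        w = ray_footprint(th, off)        # weights computed when needed
        if y == 0.0:
            for i, j in w:                # a zero measurement zeroes its voxels
                recon[i, j] = 0.0
            continue
        proj = sum(recon[i, j] for i, j in w)
        if proj > 0:
            ratio = (y / proj) ** mu      # multiplicative MART update
            for i, j in w:
                recon[i, j] *= ratio
r1 = data_residual(recon)
```

The trade-off is exactly the one the abstract describes: recomputing footprints costs CPU time per iteration but keeps memory flat, which is what makes cubic-metre volumes with billions of voxel weights tractable.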
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for the determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is only reached once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm significantly enhances the accuracy of reconstruction and shortens the computational time.
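The core DINEOF machinery that all of these variants share — fill the gaps, reconstruct with a truncated EOF (SVD) expansion, re-insert the reconstruction at the gaps, and iterate to convergence — can be sketched as follows. The synthetic two-mode field, 30% gap fraction, and fixed truncation rank are invented for illustration, and the variable-EOF selection that distinguishes VE-DINEOF is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic space x time field dominated by two EOF modes plus weak noise
nx, nt = 40, 30
x, t = np.arange(nx), np.arange(nt)
field = (np.outer(np.sin(x / 5.0), np.cos(t / 3.0))
         + 0.5 * np.outer(np.cos(x / 7.0), np.sin(t / 4.0))
         + 0.02 * rng.standard_normal((nx, nt)))

mask = rng.random((nx, nt)) < 0.3         # 30% of the values are missing
data = np.where(mask, np.nan, field)

# baseline: filling every gap with the overall mean
baseline = np.sqrt(np.mean((np.nanmean(data) - field[mask]) ** 2))

# DINEOF-style iteration: fill gaps, truncate the SVD to k modes,
# re-insert the truncated reconstruction at the gaps, repeat to convergence
filled = np.where(mask, np.nanmean(data), field)
k = 2                                     # number of retained EOF modes
for it in range(200):
    U, S, Vt = np.linalg.svd(filled, full_matrices=False)
    recon = (U[:, :k] * S[:k]) @ Vt[:k]
    new = np.where(mask, recon, field)
    if np.max(np.abs(new - filled)) < 1e-8:
        break
    filled = new

rmse = np.sqrt(np.mean((filled[mask] - field[mask]) ** 2))
```

The algorithmic question the abstract addresses is where this inner loop sits: ordinary DINEOF runs it to convergence once per candidate mode count, whereas the variable-EOF variant lets the retained mode count change between intermediate reconstructions.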
NASA Astrophysics Data System (ADS)
Stupina, T.; Koulakov, I.; Kopp, H.
2009-04-01
We consider questions of creating structural models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data. For our investigations, we use the PROFIT (Profile Forward and Inverse Tomographic modeling) algorithm which was tested earlier with different datasets. Here we consider offshore seismic profiling data from three areas (Chile, Java and Central Pacific). Two of the study areas are characterized by subduction zones whereas the third data set covers a seamount province. We have explored different algorithmic issues concerning the quality of the solution, such as (1) resolution assessment using different sizes and complexity of synthetic anomalies; (2) grid spacing effects; (3) amplitude damping and smoothing; (4) criteria for rejection of outliers; (5) quantitative criteria for comparing models. Having determined optimal algorithmic parameters for the observed seismic profiling data we have created structural synthetic models which reproduce the results of the observed data inversion. For the Chilean and Java subduction zones our results show similar patterns: a relatively thin sediment layer on the oceanic plate, thicker inhomogeneous sediments in the overlying plate and a large area of very strong low velocity anomalies in the accretionary wedge. For two seamounts in the Pacific we observe high velocity anomalies in the crust which can be interpreted as frozen channels inside the dormant volcano cones. Along both profiles we obtain considerable crustal thickening beneath the seamounts.
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regularization terms are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
Classification of JET Neutron and Gamma Emissivity Profiles
NASA Astrophysics Data System (ADS)
Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET
2016-05-01
In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and to the coarse sampling of the LOS, the tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In specific experimental conditions the availability of LOSs is restricted to a single view. In this case an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used to derive the type of the distribution. In the present approach the classification is developed using the theory of belief functions, which provides the support to fuse the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by different independent techniques, to combine them, and to manage possible conflicts.
A scanning PIV method for fine-scale turbulence measurements
NASA Astrophysics Data System (ADS)
Lawson, John M.; Dawson, James R.
2014-12-01
A hybrid technique is presented that combines scanning PIV with tomographic reconstruction to make spatially and temporally resolved measurements of the fine-scale motions in turbulent flows. The technique uses one or two high-speed cameras to record particle images as a laser sheet is rapidly traversed across a measurement volume. This is combined with a fast method for tomographic reconstruction of the particle field for use in conjunction with PIV cross-correlation. The method was tested numerically using DNS data and with experiments in a large mixing tank that produces axisymmetric homogeneous turbulence. A parametric investigation identifies the important parameters for a scanning PIV set-up and provides guidance to the interested experimentalist in achieving the best accuracy. Optimal sheet spacings and thicknesses are reported, and it was found that accurate results could be obtained at quite low scanning speeds. The two-camera method is the most robust to noise, permitting accurate measurements of the velocity gradients and direct determination of the dissipation rate.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
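The finding that Gauss-positioned ray-end points beat equally spaced ones for a small number of integration points is ordinary quadrature behaviour, and easy to check on a smooth stand-in profile. The Gaussian integrand below is illustrative only, not the YAP-(S)PET II detector model.

```python
import math
import numpy as np

# smooth stand-in for a sensitivity profile integrated along a ray on [-1, 1]
f = lambda s: np.exp(-2.0 * s ** 2)
exact = math.sqrt(math.pi / 2.0) * math.erf(math.sqrt(2.0))  # analytic integral

def trapezoid5():
    """5 equally spaced integration points (trapezoidal rule)."""
    s = np.linspace(-1.0, 1.0, 5)
    y = f(s)
    h = s[1] - s[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def gauss5():
    """5 Gauss-Legendre nodes and weights on [-1, 1]."""
    s, w = np.polynomial.legendre.leggauss(5)
    return float(np.sum(w * f(s)))

err_trap = abs(trapezoid5() - exact)
err_gauss = abs(gauss5() - exact)
```

With the same budget of five points per ray, the Gauss-Legendre placement is roughly two orders of magnitude more accurate here, which is why positioning and weighting the ray-end points matters when only a few rays per matrix entry are affordable.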
In-line phase shift tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammonds, Jeffrey C.; Price, Ronald R.; Pickens, David R.
2013-08-15
Purpose: The purpose of this work is to (1) demonstrate laboratory measurements of phase shift images derived from in-line phase-contrast radiographs using the attenuation-partition based algorithm (APBA) of Yan et al. [Opt. Express 18(15), 16074–16089 (2010)], (2) verify that the APBA reconstructed images obey the linearity principle, and (3) reconstruct tomosynthesis phase shift images from a collection of angularly sampled planar phase shift images. Methods: An unmodified, commercially available cabinet x-ray system (Faxitron LX-60) was used in this experiment. This system contains a tungsten anode x-ray tube with a nominal focal spot size of 10 μm. The digital detector uses CsI/CMOS with a pixel size of 50 × 50 μm. The phantoms used consisted of one acrylic plate, two polystyrene plates, and a habanero pepper. Tomosynthesis images were reconstructed from 51 images acquired over a ±25° arc. All phase shift images were reconstructed using the APBA. Results: Image contrast derived from the planar phase shift image of an acrylic plate of uniform thickness exceeded the contrast of the traditional attenuation image by an approximate factor of two. Comparison of the planar phase shift images from a single, uniform thickness polystyrene plate with two polystyrene plates demonstrated an approximate linearity of the estimated phase shift with plate thickness (−1600 rad vs −2970 rad). Tomographic phase shift images of the habanero pepper exhibited acceptable spatial resolution and contrast comparable to the corresponding attenuation image. Conclusions: This work demonstrated the feasibility of laboratory-based phase shift tomosynthesis and suggests that phase shift imaging could potentially provide a new imaging biomarker. Further investigation will be needed to determine if phase shift contrast will be able to provide new tissue contrast information or improved clinical performance.
Single-shot ultrafast tomographic imaging by spectral multiplexing
NASA Astrophysics Data System (ADS)
Matlis, N. H.; Axley, A.; Leemans, W. P.
2012-10-01
Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although several algorithms are in use, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, in this paper an improved wavelet denoising method combined with the parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising method were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation criteria, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than those of the others. This improved FBP algorithm therefore has potential value in medical imaging.
TAIWO, OLUWADAMILOLA O.; FINEGAN, DONAL P.; EASTWOOD, DAVID S.; FIFE, JULIE L.; BROWN, LEON D.; DARR, JAWWAD A.; LEE, PETER D.; BRETT, DANIEL J.L.
2016-01-01
Summary Lithium‐ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium‐ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3‐D imaging techniques, quantitative assessment of 3‐D microstructures from 2‐D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two‐dimensional (2‐D) data sets. In this study, stereological prediction and three‐dimensional (3‐D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium‐ion battery electrodes were imaged using synchrotron‐based X‐ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2‐D image sections generated from tomographic imaging, whereas direct 3‐D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2‐D image sections is bound to be associated with ambiguity and that volume‐based 3‐D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially‐dependent parameters, such as tortuosity and pore‐phase connectivity. PMID:26999804
Improving image quality in laboratory x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.
2017-03-01
Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data-processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution, and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction. First, a correction algorithm exploiting correlations between the artifacts and the differential-phase data was developed and tested; artifacts were reliably removed without compromising image data. Second, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
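Richardson-Lucy deconvolution, used above to undo focal-spot blur, can be demonstrated in one dimension with numpy alone. The sharp test signal and Gaussian blur kernel below are invented stand-ins; real gbPC projections are two-dimensional and noisy, which is where the iteration count must be chosen carefully to avoid noise amplification.

```python
import numpy as np

# sharp 1-D scene blurred by a finite "focal spot" (Gaussian kernel)
x = np.zeros(200)
x[60] = 5.0                                # a point-like feature
x[100:110] = 2.0                           # an extended feature

kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel /= kernel.sum()

blurred = np.convolve(x, kernel, mode="same")

# Richardson-Lucy: multiplicative updates that preserve positivity
est = np.full_like(blurred, blurred.mean())   # flat, positive starting guess
for it in range(200):
    conv = np.convolve(est, kernel, mode="same")
    ratio = blurred / np.maximum(conv, 1e-12)
    est *= np.convolve(ratio, kernel[::-1], mode="same")

err_blur = np.linalg.norm(blurred - x)     # error before deconvolution
err_rl = np.linalg.norm(est - x)           # error after deconvolution
```

The multiplicative form keeps the estimate nonnegative at every step, a property that matters for intensity data and that plain Fourier-domain inverse filtering does not share.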
Chest tomosynthesis: technical principles and clinical update.
Dobbins, James T; McAdams, H Page
2009-11-01
Digital tomosynthesis is a radiographic technique that can produce an arbitrary number of section images of a patient from a single pass of the X-ray tube. It utilizes a conventional X-ray tube, a flat-panel detector, a computer-controlled tube mover, and special reconstruction algorithms to produce section images. While it does not have the depth resolution of computed tomography (CT), tomosynthesis provides some of the tomographic benefits of CT but at lower cost and radiation dose than CT. Compared to conventional chest radiography, chest tomosynthesis results in improved visibility of normal structures such as vessels, airway and spine. By reducing visual clutter from overlying normal anatomy, it also enhances detection of small lung nodules. This review article outlines the components of a tomosynthesis system, discusses results regarding improved lung nodule detection from the recent literature, and presents examples of nodule detection from a clinical trial in human subjects. Possible implementation strategies for use in clinical chest imaging are discussed.
Characterization of airborne transducers by optical tomography
Bou Matar O; Pizarro; Certon; Remenieras; Patat
2000-03-01
This paper describes the application of an acousto-optic method to the measurement of airborne ultrasound. The method consists of heterodyne interferometric probing of the pressure emitted by the transducer combined with a tomographic algorithm. The heterodyne interferometer measures the optical phase shift of the probe laser beam, proportional to the acoustic pressure integrated along the light path. A number of projections of the sound field, e.g. a set of ray integrals obtained along parallel paths, are made by moving the transducer under test. The main advantage of the method is its very high sensitivity in air (2 x 10(-4) Pa Hz-1/2), combined with a large bandwidth. Using the same principle as X-ray tomography, the ultrasonic pressure in a plane perpendicular to the transducer axis can be reconstructed. Several ultrasonic fields emitted by wide-band home-made electrostatic transducers, with operating frequencies between 200 and 700 kHz, have been measured. The sensitivities compared favorably with those of commercial airborne transducers.
Imaging sensor constellation for tomographic chemical cloud mapping.
Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J
2009-04-01
A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
Experimental adaptive quantum tomography of two-qubit states
NASA Astrophysics Data System (ADS)
Struchalin, G. I.; Pogorelov, I. A.; Straupe, S. S.; Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.
2016-01-01
We report an experimental realization of adaptive Bayesian quantum state tomography for two-qubit states. Our implementation is based on the adaptive experimental design strategy proposed by Huszár and Houlsby [F. Huszár and N. M. T. Houlsby, Phys. Rev. A 85, 052120 (2012), 10.1103/PhysRevA.85.052120] and provides an optimal measurement approach in terms of the information gain. We address the practical questions which one faces in any experimental application: the influence of technical noise and the behavior of the tomographic algorithm for an easy-to-implement class of factorized measurements. In an experiment with polarization states of entangled photon pairs, we observe a lower instrumental noise floor and superior reconstruction accuracy for nearly pure states with the adaptive protocol compared to a nonadaptive protocol. At the same time, we show that for mixed states the restriction to factorized measurements results in no advantage for adaptive measurements, so general measurements have to be used.
Majewski, Stanislaw [Yorktown, VA; Proffitt, James [Newport News, VA
2011-12-06
A compact, mobile, dedicated SPECT brain imager that can be easily moved to the patient to provide in-situ imaging, especially when the patient cannot be moved to the Nuclear Medicine imaging center. As a result of the widespread availability of single photon labeled biomarkers, the SPECT brain imager can be used in many locations, including remote locations away from medical centers. The SPECT imager improves the detection of gamma emission from the patient's head and neck area with a large field of view. Two identical lightweight gamma imaging detector heads are mounted to a rotating gantry and precisely mechanically co-registered to each other at 180 degrees. A unique imaging algorithm combines the co-registered images from the detector heads and provides several SPECT tomographic reconstructions of the imaged object thereby improving the diagnostic quality especially in the case of imaging requiring higher spatial resolution and sensitivity at the same time.
Imaging cells and sub-cellular structures with ultrahigh resolution full-field X-ray microscopy.
Chien, C C; Tseng, P Y; Chen, H H; Hua, T E; Chen, S T; Chen, Y Y; Leng, W H; Wang, C H; Hwu, Y; Yin, G C; Liang, K S; Chen, F R; Chu, Y S; Yeh, H I; Yang, Y C; Yang, C S; Zhang, G L; Je, J H; Margaritondo, G
2013-01-01
Our experimental results demonstrate that full-field hard-X-ray microscopy is finally able to investigate the internal structure of cells in tissues. This result was made possible by three main factors: the use of a coherent (synchrotron) source of X-rays, the exploitation of contrast mechanisms based on the real part of the refractive index and the magnification provided by high-resolution Fresnel zone-plate objectives. We specifically obtained high-quality microradiographs of human and mouse cells with 29 nm Rayleigh spatial resolution and verified that tomographic reconstruction could be implemented with a final resolution level suitable for subcellular features. We also demonstrated that a phase retrieval method based on a wave propagation algorithm could yield good subcellular images starting from a series of defocused microradiographs. The concluding discussion compares cellular and subcellular hard-X-ray microradiology with other techniques and evaluates its potential impact on biomedical research. Copyright © 2012 Elsevier Inc. All rights reserved.
Tomographic image reconstruction using x-ray phase information
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Takeda, Tohoru; Itai, Yuji; Hirano, Keiichi
1996-04-01
We have been developing phase-contrast x-ray computed tomography (CT) to make possible the observation of biological soft tissues without contrast enhancement. Phase-contrast x-ray CT requires for its input data the x-ray phase-shift distributions or phase-mapping images caused by an object. These were measured with newly developed fringe-scanning x-ray interferometry. Phase-mapping images at different projection directions were obtained by rotating the object in an x-ray interferometer, and were processed with a standard CT algorithm. A phase-contrast x-ray CT image of a nonstained cancerous tissue was obtained using 17.7 keV synchrotron x rays with 12 micrometer voxel size, although the size of the observation area was at most 5 mm. The cancerous lesions were readily distinguishable from normal tissues. Moreover, fine structures corresponding to cancerous degeneration and fibrous tissues were clearly depicted. It is estimated that the present system is sensitive down to a density deviation of 4 mg/cm3.
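The "standard CT algorithm" applied to the phase-mapping projections is typically filtered backprojection (FBP). The following is a minimal parallel-beam FBP sketch in Python/NumPy with SciPy's image rotation; the phantom, angle count, and unwindowed ramp filter are illustrative choices, not details taken from the paper:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Forward projection: line integrals of img at each angle."""
    return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def fbp(sinogram, angles_deg):
    """Filtered backprojection with a plain ramp filter."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))          # ramp filter in the Fourier domain
    recon = np.zeros((n, n))
    for proj, a in zip(sinogram, angles_deg):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        # smear the filtered projection across the image and rotate it back
        recon += rotate(np.tile(filtered, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# Toy phantom: a centered disk (standing in for a phase map of soft tissue)
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)

angles = np.linspace(0.0, 180.0, 90, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
```

The same pipeline applies whether the projections are attenuation line integrals or, as here, phase-shift maps; only the physical interpretation of the reconstructed values changes.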
Calf Perforator Flaps: A Freestyle Solution for Oral Cavity Reconstruction.
Molina, Alexandra R; Citron, Isabelle; Chinaka, Fungayi; Cascarini, Luke; Townley, William A
2017-02-01
Reconstruction of oral cavity defects requires a thin, pliable flap for optimal functional results. Traditional flap choices are imperfect: the anterolateral thigh flap is excessively thick, whereas the radial forearm flap has a poor donor site. The authors therefore favor calf perforator flaps such as the medial sural artery perforator flap to provide thin tissue with an acceptable donor site. This two-part study aims to demonstrate their suitability for intraoral reconstruction. In the radiologic part of the study, the authors compared thigh and calf tissue thickness by examining lower limb computed tomographic scans of 100 legs. For their clinical study, they collected data prospectively on 20 cases of oral cavity reconstruction using calf perforator flaps. The mean thickness of the calf tissue envelope was significantly less than that of the thigh (8.4 mm compared with 17 mm) based on computed tomographic analysis. In the clinical study, a medial sural artery perforator was used in the majority of cases (17 of 20). The mean pedicle length was 10.2 cm and the mean time to raise a flap was 85 minutes. There were no flap losses. One patient was returned to the operating room for management of late hematoma and wound dehiscence. Calf perforator flaps provide ideal tissue for intraoral reconstruction and are significantly thinner than anterolateral thigh flaps. In addition to medial sural artery perforator flaps, the authors raised both sural and soleal artery perforator flaps in this series. Opportunistic use of the calf donor site allows the harvest of thin tissue with minimal donor-site morbidity. Therapeutic, IV.
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for simulation of electron microscopy images and for model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in horizontal structure and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction of a wide range of samples. PMID:26433027
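The Beer-Lambert relation underlying tomoThickness links the measured intensity at tilt angle θ to the thickness t and inelastic mean free path Λ: I(θ) = I₀ exp(−t / (Λ cos θ)). A hedged sketch of the fit on synthetic data follows (NumPy; not the authors' code, and it ignores the intrinsic sample tilt the paper also estimates):

```python
import numpy as np

I0 = 1.0                           # incident (flat-field) intensity, assumed known
t_true, mfp_true = 150.0, 300.0    # nm: thickness and mean free path (toy values)

def beer_lambert(theta_deg, t, mfp):
    # I(theta) = I0 * exp(-t / (mfp * cos(theta))): path length grows as 1/cos(theta)
    return I0 * np.exp(-t / (mfp * np.cos(np.deg2rad(theta_deg))))

tilts = np.linspace(-60, 60, 41)   # a typical tilt-series range
rng = np.random.default_rng(0)
I_meas = beer_lambert(tilts, t_true, mfp_true) * (1 + 0.01 * rng.normal(size=tilts.size))

# From a single tilt series alone, only the ratio t/mfp is identifiable;
# tomoThickness separates the two using additional tilt-relationship
# constraints. Here we recover the ratio as the slope of -ln(I/I0)
# regressed against 1/cos(theta).
ratio = np.polyfit(1.0 / np.cos(np.deg2rad(tilts)), -np.log(I_meas / I0), 1)[0]
```

The linearization makes the identifiability issue explicit: every tilt contributes one equation in the single unknown t/Λ, which is why extra constraints are needed to resolve t and Λ individually.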
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as the least squares method, the algebraic reconstruction technique and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
CT cardiac imaging: evolution from 2D to 3D backprojection
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke
2004-04-01
The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry into a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, the multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat cone beam artifacts caused by large cone angle. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To get the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm approaching the edge of a CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 × 1.25 mm and normalized pitch 0.275 and 0.3 respectively are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart from the perspective of cone beam artifact suppression and z-coverage while the temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time, improve the patient comfort and x-ray dose efficiency.
Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
A method for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. The optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm. The correlation coefficient is used as the fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
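The annealing loop itself is compact. A minimal sketch follows (Python/NumPy); the toy forward model — a single random phase screen, with focus intensity as the fitness — is an illustrative stand-in for the paper's experimental correlation-coefficient fitness:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                   # number of phase-mask segments (toy size)
medium = rng.uniform(0, 2 * np.pi, N)    # unknown phase distortion of the thin scatterer

def fitness(mask):
    # Focus quality: the fields add in phase when the mask cancels the
    # medium's distortion (1.0 = perfect compensation).
    return abs(np.exp(1j * (mask + medium)).sum()) / N

mask = np.zeros(N)
cur = fitness(mask)
T = 0.5                                  # initial "temperature"
for _ in range(20000):
    cand = mask.copy()
    cand[rng.integers(N)] += rng.normal(0, 0.5)          # perturb one segment
    f = fitness(cand)
    if f > cur or rng.random() < np.exp((f - cur) / T):  # Metropolis acceptance
        mask, cur = cand, f
    T *= 0.9995                          # geometric cooling schedule
```

The Metropolis rule is what distinguishes simulated annealing from a greedy search: worse masks are occasionally accepted while T is high, which helps escape local optima of the fitness landscape.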
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
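The ML-EM baseline mentioned above has a compact multiplicative update, x ← (x / Aᵀ1) · Aᵀ(y / Ax). A toy (binned, non-TOF) emission reconstruction sketch, with a random system matrix standing in for real detection probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 32, 64
A = rng.uniform(0, 1, (n_bins, n_pix))   # toy system matrix (detection probabilities)
x_true = rng.uniform(0.5, 2.0, n_pix)    # true emission image
y = rng.poisson(A @ x_true * 50) / 50    # Poisson-noisy measurements

x = np.ones(n_pix)                       # flat, strictly positive initial estimate
sens = A.sum(axis=0)                     # sensitivity image A^T 1
for _ in range(100):
    proj = A @ x                         # forward projection
    x = x / sens * (A.T @ (y / proj))    # ML-EM multiplicative update
```

The multiplicative form preserves non-negativity automatically, which is one reason ML-EM is the standard reference point for emission tomography; OE reaches an MMSE estimate instead, working directly on event origins in the image domain.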
NASA Astrophysics Data System (ADS)
Almeida, A. P.; Braz, D.; Nogueira, L. P.; Colaço, M. V.; Soares, J.; Cardoso, S. C.; Garcia, E. S.; Azambuja, P.; Gonzalez, M. S.; Mohammadi, S.; Tromba, G.; Barroso, R. C.
2014-02-01
We have used phase-contrast X-ray microtomography (PPC-μCT) to study the head of the blood-feeding bug, Rhodnius prolixus, which is one of the most important insect vectors of Trypanosoma cruzi, the etiologic agent of Chagas disease in Latin America. Images reconstructed from phase-retrieved projections processed by ANKAphase are compared to those obtained through direct tomographic reconstruction of the flat-field-corrected transmission radiographs. It should be noted that the relative locations of the important morphological internal structures are observable with a precision that is difficult to obtain without the phase retrieval approach.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is an active topic of current research. Among the various surface reconstruction algorithms, reconstruction using the medial axis is an important method. The various medial-axis reconstruction methods are summarized, the problems existing in each method are pointed out, and the areas needing improvement are identified. Future directions for axis-based surface reconstruction are also discussed.
A reconstruction algorithm for helical CT imaging on PI-planes.
Liang, Hongzhu; Zhang, Cishen; Yan, Ming
2006-01-01
In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam Computed Tomography. To effectively suppress artifacts due to large cone angle scanning, it is proposed to reconstruct the object point-wise on unique customized tilted PI-planes which are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress the artifacts in cone-angle scanning. Computer simulations show that the proposed algorithm can provide improved imaging performance compared with the existing approximate cone-beam reconstruction algorithms.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow the inclusion of prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time-consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
NASA Astrophysics Data System (ADS)
Wang, Fei; Wu, Qi; Huang, Qunxing; Zhang, Haidan; Yan, Jianhua; Cen, Kefa
2015-07-01
An innovative tomographic method using tunable diode laser absorption spectroscopy (TDLAS) and the algebraic reconstruction technique (ART) is presented in this paper for detecting the two-dimensional distribution of H2O concentration and temperature in a premixed flame. The collimated laser beam emitted from a low-cost diode laser module was delicately split into 24 sub-beams passing through the flame from different angles, and the acquired laser absorption signals were used to retrieve flame temperature and H2O concentration simultaneously. The efficiency of the proposed reconstruction system and the effect of measurement noise were numerically evaluated. The temperature and H2O concentration in flat methane/air premixed flames under three different equivalence ratios were experimentally measured and the reconstruction results were compared with model calculations. Numerical assessments indicate that the TDLAS tomographic system is capable of detecting temperature and H2O concentration profiles even when the noise strength reaches 3% of the absorption signal. Experimental results under different combustion conditions are well resolved along the vertical direction, and the distribution profiles are in good agreement with model calculations. The proposed method exhibits great potential for 2-D or 3-D combustion diagnostics, including non-uniform flames.
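The ART step at the heart of this approach is the Kaczmarz update: for each beam i, x ← x + λ (yᵢ − aᵢ·x) aᵢ / ‖aᵢ‖². A hedged sketch with a synthetic 24-beam geometry follows (NumPy; random ray weights stand in for the real beam-path lengths through the grid):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_beams = 64, 24                   # 8x8 grid over the flame cross-section, 24 beams
A = rng.uniform(0, 1, (n_beams, n_cells))   # toy path-length weights of each beam
x_true = rng.uniform(300, 2000, n_cells)    # e.g. a temperature field, in K
y = A @ x_true                              # line-of-sight integrated absorption data

x = np.full(n_cells, y.mean() / A.sum(axis=1).mean())  # rough uniform start
lam = 0.5                                   # relaxation factor, 0 < lam < 2
for _ in range(1000):                       # sweeps over all beams
    for i in range(n_beams):
        a = A[i]
        x += lam * (y[i] - a @ x) / (a @ a) * a   # Kaczmarz / ART row update
```

With only 24 beams for 64 cells the system is underdetermined, so ART converges to a data-consistent solution near the starting estimate; this is why the choice of initial field (and any smoothness prior) matters in practice.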
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allows for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without compromising reconstruction artifacts that impact the interpretation of results.
3D spectral imaging with synchrotron Fourier transform infrared spectro-microtomography
Michael C. Martin; Charlotte Dabat-Blondeau; Miriam Unger; Julia Sedlmair; Dilworth Y. Parkinson; Hans A. Bechtel; Barbara Illman; Jonathan M. Castro; Marco Keiluweit; David Buschke; Brenda Ogle; Michael J. Nasse; Carol J. Hirschmugl
2013-01-01
We report Fourier transform infrared spectro-microtomography, a nondestructive three-dimensional imaging approach that reveals the distribution of distinctive chemical compositions throughout an intact biological or materials sample. The method combines mid-infrared absorption contrast with computed tomographic data acquisition and reconstruction to enhance chemical...
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that hold great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better, with fewer sensitive parameters to tune.
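The TV penalty these algorithms share can be illustrated in isolation with a plain gradient-descent denoising step (a sketch, not any of the cited algorithms; ε smooths the non-differentiable TV seminorm, and the parameter values below are illustrative):

```python
import numpy as np

def tv(x, eps=1e-2):
    """Smoothed isotropic TV: sum of sqrt(|grad x|^2 + eps), periodic boundaries."""
    gx = np.roll(x, -1, axis=1) - x
    gy = np.roll(x, -1, axis=0) - x
    return np.sqrt(gx ** 2 + gy ** 2 + eps).sum()

def tv_grad(x, eps=1e-2):
    """Gradient of the smoothed TV: minus the divergence of the normalized gradient."""
    gx = np.roll(x, -1, axis=1) - x          # forward differences
    gy = np.roll(x, -1, axis=0) - x
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / norm, gy / norm
    return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0     # piecewise-constant image
noisy = clean + 0.2 * rng.normal(size=clean.shape)

x = noisy.copy()
lam, tau = 0.3, 0.1                          # regularization weight, step size
for _ in range(300):
    # gradient step on 0.5*||x - noisy||^2 + lam * TV(x)
    x -= tau * ((x - noisy) + lam * tv_grad(x))
```

The parameter-sensitivity problem the paper studies shows up even here: λ trades data fidelity against smoothness, and the algorithms compared (ASD-POCS and its variants) differ mainly in how such trade-off parameters are controlled during the iterations.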
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968