Science.gov

Sample records for 4d image denoising

  1. True 4D Image Denoising on the GPU.

    PubMed

    Eklund, Anders; Andersson, Mats; Knutsson, Hans

    2011-01-01

The use of image denoising techniques is an important part of many medical imaging applications. One common application is to improve the image quality of low-dose (noisy) computed tomography (CT) data. While 3D image denoising has previously been applied to several volumes independently, little work has been done on true 4D image denoising, where the algorithm considers several volumes at the same time. The problem with 4D image denoising, compared to 2D and 3D denoising, is that the computational complexity increases exponentially. In this paper we describe a novel algorithm for true 4D image denoising, based on local adaptive filtering, and how to implement it on the graphics processing unit (GPU). The algorithm was applied to a 4D CT heart dataset with a resolution of 512 × 512 × 445 × 20. The GPU completes the denoising in about 25 minutes with spatial filtering and in about 8 minutes with FFT-based filtering, whereas the CPU implementation requires several days of processing time for spatial filtering and about 50 minutes for FFT-based filtering. The short processing time increases the clinical value of true 4D image denoising significantly.
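The gap between spatial and FFT-based filtering reported above comes from convolution cost: direct filtering scales with kernel size, while Fourier-domain filtering does not. A minimal 2D sketch of the idea (not the authors' 4D GPU implementation; the random image and 5×5 box kernel are illustrative assumptions):

```python
import numpy as np

def fft_filter(image, kernel):
    """Filter by pointwise multiplication in the Fourier domain.

    This is circular convolution at O(N log N) cost, independent of the
    kernel size, versus O(N * K) for direct spatial filtering."""
    K = np.zeros_like(image)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    # Roll the kernel so its center sits at the origin (avoids an output shift).
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(K)))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
box = np.ones((5, 5)) / 25.0        # simple averaging kernel
out = fft_filter(img, box)
```

For a 4D dataset the same identity holds with `np.fft.fftn`/`np.fft.ifftn`; the Fourier route amortizes well because the transform cost does not grow with the filter's spatial support.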

  2. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data-set-specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert-segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  3. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, yielding a table of optimized filtering parameters. Then, considering the complexity of the noise in realistic CBCT images, candidate noise standard deviations for BM4D-AV are evaluated to obtain a selection principle for realistic denoising. The corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters achieves an excellent denoising effect on realistic 3D CBCT images.

  4. Progressive image denoising.

    PubMed

    Knaus, Claude; Zwicker, Matthias

    2014-07-01

Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.

  5. Multiscale image blind denoising.

    PubMed

    Lebrun, Marc; Colom, Miguel; Morel, Jean-Michel

    2015-10-01

Arguably, several thousand papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users only have access to the result of a complex image processing chain carried out by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation makes it possible to estimate, from a single image, a noise model that is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm, which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. Finally, this algorithm is compared with the only previous state-of-the-art blind denoising method.

  6. Quantum Boolean image denoising

    NASA Astrophysics Data System (ADS)

    Mastriani, Mario

    2015-05-01

A quantum Boolean image processing methodology is presented in this work, with special emphasis on image denoising. A new approach for internal image representation is outlined, together with two new interfaces: classical-to-quantum and quantum-to-classical. The new quantum Boolean image denoising method, called the quantum Boolean mean filter, works exclusively with computational basis states (CBS). To achieve this, we first decompose the image into its three color components, i.e., red, green, and blue. Then, we obtain the bitplanes for each color, e.g., 8 bits per pixel, i.e., 8 bitplanes per color. From that point on, we work exclusively with the bitplane corresponding to the most significant bit (MSB) of each color. After a classical-to-quantum interface (which includes a classical inverter), we have a quantum Boolean version of the image inside the quantum machine. This methodology allows us to avoid the problem of quantum measurement, which alters the measured state except in the case of CBS; this observation extends to quantum algorithms outside image processing as well. After filtering the inverted version of the MSB (inside the quantum machine), the result passes through a quantum-to-classical interface (which involves another classical inverter), each color component is reassembled, and the final filtered image is produced. Finally, we discuss the most appropriate metrics for image denoising in a set of experimental results.

  7. Global Image Denoising.

    PubMed

    Talebi, Hossein; Milanfar, Peyman

    2014-02-01

    Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the ability to reliably find sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering where each pixel is estimated from all pixels in the image. Our objectives in this paper are two-fold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filters to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.
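The global-filtering idea above, estimating every pixel from all pixels and then truncating a spectral decomposition of the filter operator, can be sketched at toy scale. The following is a hypothetical 1D illustration (a dense eigendecomposition on 64 samples, not the paper's Nyström approximation on full images; the signal, bandwidth `h`, and truncation order are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0], 32)                # piecewise-constant 1D "image"
noisy = clean + 0.1 * rng.normal(size=64)

# Global filter: every sample is compared with every other sample.
h = 0.2                                           # similarity bandwidth
W = np.exp(-(noisy[:, None] - noisy[None, :]) ** 2 / h ** 2)
F = W / W.sum(axis=1, keepdims=True)              # row-stochastic filter matrix

# Truncate the spectral decomposition of F to its 4 leading modes.
lam, V = np.linalg.eig(F)
order = np.argsort(-np.abs(lam))[:4]
Fk = np.real(V[:, order] @ np.diag(lam[order]) @ np.linalg.pinv(V[:, order]))
denoised = Fk @ noisy
```

Because `F` is built from pairwise affinities of the data itself, its leading eigenvectors capture the two constant segments, so a handful of spectral modes already reproduce the full global filter almost exactly; this is the effect the truncation analysis in the paper studies.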

  8. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis, i.e., a stochastic treatment, or interpretation, of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution ...

  9. Image denoising using local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Feng, JianZhou; Song, Li; Huo, Xiaoming; Yang, XiaoKang; Zhang, Wenjun

    2010-07-01

We propose a novel image denoising approach based on exploring an underlying (nonlinear) low-dimensional manifold. Using local tangent space alignment (LTSA), we 'learn' such a manifold, which approximates the image content effectively. The denoising is performed by minimizing a newly defined objective function, which is a sum of two terms: (a) the difference between the noisy image and the denoised image, and (b) the distance from the image patch to the manifold. We extend the LTSA method from manifold learning to denoising. We introduce a local dimension concept that provides adaptivity to different kinds of image patches, e.g., flat patches having lower dimension. We also plug in a basic denoising stage to estimate the local coordinates more accurately. The proposed method is competitive: its performance surpasses that of the K-SVD denoising method.

  10. Image denoising using a combined criterion

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Marchuk, Vladimir; Shrafel, Igor; Dubovskov, Vadim; Onoyko, Tatyana; Maslennikov, Stansilav

    2016-05-01

A new image denoising method is proposed in this paper. We consider an optimization problem with a linear objective function based on two criteria, namely, the L2 norm and the first-order squared difference. The method is parametric, so by choosing the parameters we can adapt the criteria of the objective function. The denoising algorithm consists of the following steps: 1) multiple denoising estimates are found on local areas of the image; 2) image edges are determined; 3) parameters of the method are fixed and denoised estimates of the local area are found; 4) the local window is moved to the next position (local windows overlap) in order to produce the final estimate. A proper choice of the parameters of the introduced method is discussed. A comparative analysis of the new denoising method against existing ones is performed on a set of test images.
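The combined criterion rests on minimizing a quadratic objective mixing an L2 data term with a first-order squared-difference penalty. For such an objective the minimizer has a closed form; here is a hypothetical global 1D sketch (the paper itself works on overlapping local windows with edge handling, and the test signal and `lam` value are assumptions):

```python
import numpy as np

def combined_criterion_denoise(y, lam):
    """Minimize ||x - y||^2 + lam * ||D x||^2, with D the first-order
    difference operator. The objective is quadratic, so the minimizer
    solves the linear system (I + lam * D^T D) x = y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # (n-1) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = clean + 0.2 * rng.normal(size=100)
denoised = combined_criterion_denoise(noisy, lam=5.0)
```

The trade-off parameter `lam` plays the role of the method's adaptable parameters: larger values weight the difference criterion more heavily and produce smoother estimates.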

  11. Green channel guiding denoising on bayer image.

    PubMed

    Tan, Xin; Lai, Shiming; Liu, Yu; Zhang, Maojun

    2014-01-01

Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer pattern is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue ones. Therefore the green channel can be used to guide denoising, and this guidance integrates the different color channels together. Experiments on both real and simulated Bayer images indicate that the green channel acts well as the guidance signal, and the proposed method is competitive with other popular filter-kernel denoising methods.
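A sketch of the underlying guided-filter machinery (He et al.'s local linear model), with a grayscale stand-in for the Bayer mosaic: a low-noise "green" plane guides the denoising of a noisier "red" plane. The window radius, regularization `eps`, and test images are illustrative assumptions, not the paper's CFA-aware pipeline:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    xp = np.pad(x, r, mode='edge')
    return sliding_window_view(xp, (2 * r + 1, 2 * r + 1)).mean(axis=(2, 3))

def guided_filter(guide, src, r=2, eps=1e-2):
    """Guided filter: fit a local linear model src ~ a * guide + b per window."""
    m_g, m_s = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - m_g * m_s
    var = box_mean(guide * guide, r) - m_g * m_g
    a = cov / (var + eps)
    b = m_s - a * m_g
    return box_mean(a, r) * guide + box_mean(b, r)

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:32, 0:32]
base = (xx + yy) / 62.0                               # smooth ramp in [0, 1]
green = base + 0.02 * rng.normal(size=base.shape)     # well-sampled, low noise
red_clean = 0.8 * base
red = red_clean + 0.1 * rng.normal(size=base.shape)   # noisier channel
denoised_red = guided_filter(green, red)
```

A real Bayer pipeline would additionally need to respect the interlaced sampling grid, since the green and red/blue samples do not share pixel positions.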

  12. Denoising Medical Images using Calculus of Variations.

    PubMed

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-07-01

We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. This method reduces additive noise while preserving small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE, and PSNR than common medical image denoising methods. In denoising a sample magnetic resonance image, SNR, PSNR, and RMSE improved by 19, 9, and 21 percent, respectively.
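The SNR, PSNR, and RMSE figures of merit quoted above have standard definitions; a minimal sketch (assuming an 8-bit intensity peak of 255 for PSNR, and a toy constant-error image pair):

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image."""
    return np.sqrt(np.mean((ref - img) ** 2))

def snr_db(ref, img):
    """Signal-to-noise ratio in dB: signal energy over error energy."""
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - img) ** 2))

def psnr_db(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given intensity peak."""
    return 10 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))

ref = np.full((8, 8), 100.0)
img = ref + 10.0                  # constant error of 10 intensity levels
```

With this constant error, `rmse` is exactly 10 and `snr_db` is exactly 20 dB, which makes the three measures easy to cross-check against each other.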

  13. Cardiac 4D Ultrasound Imaging

    NASA Astrophysics Data System (ADS)

    D'hooge, Jan

Volumetric cardiac ultrasound imaging has steadily evolved over the last 20 years from an electrocardiography (ECG)-gated imaging technique to a true real-time imaging modality. Although the clinical use of echocardiography is still to a large extent based on conventional 2D ultrasound imaging, it can be anticipated that further developments in image quality, data visualization and interaction, and image quantification of three-dimensional cardiac ultrasound will gradually make volumetric ultrasound the modality of choice. In this chapter, an overview is given of the technological developments that allow for volumetric imaging of the beating heart by ultrasound.

  14. A Decomposition Framework for Image Denoising Algorithms.

    PubMed

    Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmio, Marcelo; Levine, Stacey

    2016-01-01

In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). Then, the strategy we develop is to denoise the components of the image in the moving frame in order to preserve its local geometry, which would have been more affected had the image been processed directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, in terms of both the peak signal-to-noise ratio and structural similarity index metrics.

  15. Adaptive image denoising by targeted databases.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2015-07-01

    We propose a data-dependent denoising procedure to restore noisy images. Different from existing denoising algorithms which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.

  16. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity.

  17. Nonlocal means image denoising using orthogonal moments.

    PubMed

    Kumar, Ahlad

    2015-09-20

An image denoising method in the moment domain is proposed. The denoising involves the development and evaluation of a modified nonlocal means (NLM) algorithm, in which neighborhood similarity is evaluated using Krawtchouk moments. The results of the proposed denoising method have been validated using peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) index, and the blind/referenceless image spatial quality evaluator (BRISQUE). The denoising algorithm has been evaluated on synthetic and real clinical images contaminated by Gaussian, Poisson, and Rician noise. The algorithm performs well compared to Zernike-based denoising, as indicated by PSNR, SSIM, and BRISQUE scores of the denoised images, with improvements of 3.1 dB, 0.1285, and 4.23, respectively. Further, comparative analysis of the proposed work with existing techniques has also been performed, and the results are competitive in terms of PSNR, SSIM, and BRISQUE scores across varying levels of noise.

  18. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving critical, clinically relevant structures.

  19. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-02-01

Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.

  20. 4D flow imaging with MRI

    PubMed Central

    Stankovic, Zoran; Allen, Bradley D.; Garcia, Julio; Jarvis, Kelly B.

    2014-01-01

    Magnetic resonance imaging (MRI) has become an important tool for the clinical evaluation of patients with cardiovascular disease. Since its introduction in the late 1980s, 2-dimensional phase contrast MRI (2D PC-MRI) has become a routine part of standard-of-care cardiac MRI for the assessment of regional blood flow in the heart and great vessels. More recently, time-resolved PC-MRI with velocity encoding along all three flow directions and three-dimensional (3D) anatomic coverage (also termed ‘4D flow MRI’) has been developed and applied for the evaluation of cardiovascular hemodynamics in multiple regions of the human body. 4D flow MRI allows for the comprehensive evaluation of complex blood flow patterns by 3D blood flow visualization and flexible retrospective quantification of flow parameters. Recent technical developments, including the utilization of advanced parallel imaging techniques such as k-t GRAPPA, have resulted in reasonable overall scan times, e.g., 8-12 minutes for 4D flow MRI of the aorta and 10-20 minutes for whole heart coverage. As a result, the application of 4D flow MRI in a clinical setting has become more feasible, as documented by an increased number of recent reports on the utility of the technique for the assessment of cardiac and vascular hemodynamics in patient studies. A number of studies have demonstrated the potential of 4D flow MRI to provide an improved assessment of hemodynamics which might aid in the diagnosis and therapeutic management of cardiovascular diseases. The purpose of this review is to describe the methods used for 4D flow MRI acquisition, post-processing and data analysis. In addition, the article provides an overview of the clinical applications of 4D flow MRI and includes a review of applications in the heart, thoracic aorta and hepatic system. PMID:24834414

  1. Advances in 4D radiation therapy for managing respiration: part I - 4D imaging.

    PubMed

    Hugo, Geoffrey D; Rosu, Mihaela

    2012-12-01

    Techniques for managing respiration during imaging and planning of radiation therapy are reviewed, concentrating on free-breathing (4D) approaches. First, we focus on detailing the historical development and basic operational principles of currently-available "first generation" 4D imaging modalities: 4D computed tomography, 4D cone beam computed tomography, 4D magnetic resonance imaging, and 4D positron emission tomography. Features and limitations of these first generation systems are described, including necessity of breathing surrogates for 4D image reconstruction, assumptions made in acquisition and reconstruction about the breathing pattern, and commonly-observed artifacts. Both established and developmental methods to deal with these limitations are detailed. Finally, strategies to construct 4D targets and images and, alternatively, to compress 4D information into static targets and images for radiation therapy planning are described.

  2. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  3. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  4. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. A maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is also presented. To evaluate this proposal, the models are used as a priori models in a maximum a posteriori estimation to remove additive white Gaussian noise from images. Finally, the results show a notable improvement in both quantitative and qualitative terms in comparison with local MRFs.

  5. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

A new technique for denoising and compression of multispectral satellite images, designed to remove the effect of noise on the compression process, is presented. One type of multispectral image is considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than traditional compression-only techniques.

  6. Controlled Source 4D Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Morency, C.; Tromp, J.

    2009-12-01

Earth's material properties may change after significant tectonic events, e.g., volcanic eruptions, earthquake ruptures, landslides, and hydrocarbon migration. While many studies focus on how to interpret observations in terms of changes in wavespeeds and attenuation, the oil industry is more interested in how we can identify and locate such temporal changes using seismic waves generated by controlled sources. 4D seismic analysis is indeed an important tool to monitor fluid movement in hydrocarbon reservoirs during production, improving field management. Classic 4D seismic imaging involves comparing images obtained from two subsequent seismic surveys. Differences between the two images tell us where temporal changes occurred. However, when the temporal changes are small, it may be quite hard to reliably identify and characterize the differences between the two images. We propose to back-project residual seismograms between two subsequent surveys using adjoint methods, which results in images highlighting temporal changes. We use the SEG/EAGE salt dome model to illustrate our approach. In two subsequent surveys, the wavespeeds and density within a target region are changed, mimicking possible fluid migration. Due to changes in material properties induced by fluid migration, seismograms recorded in the two surveys differ. By back propagating these residuals, the adjoint images identify the location of the affected region. An important issue involves the nature of the model. For instance, are we characterizing only changes in wavespeed, or do we also consider density and attenuation? How many model parameters characterize the model, e.g., is our model isotropic or anisotropic? Is acoustic wave propagation accurate enough, or do we need to consider elastic or poroelastic effects? We will investigate how imaging strategies based upon acoustic, elastic, and poroelastic simulations affect our imaging capabilities.

  7. Musculoskeletal ultrasound image denoising using Daubechies wavelets

    NASA Astrophysics Data System (ADS)

    Gupta, Rishu; Elamvazuthi, I.; Vasant, P.

    2012-11-01

    Among the various existing medical imaging modalities, ultrasound holds particular promise because of its ready availability and its use of non-ionizing radiation. In this paper we denoise ultrasound images using Daubechies wavelets and analyze the results with peak signal-to-noise ratio (PSNR) and coefficient of correlation as performance measures. Daubechies wavelets of orders 1 to 6 are applied to four different ultrasound bone fracture images at decomposition levels 1 to 3. The resultant images are shown for visual inspection, and the PSNR and coefficient-of-correlation values are plotted for quantitative analysis.
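
    The two performance measures used above are standard and easy to compute; as a point of reference (not the authors' code), a minimal NumPy sketch:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def corr_coef(ref, img):
    """Pearson correlation coefficient between two images."""
    a = np.asarray(ref, float).ravel(); a -= a.mean()
    b = np.asarray(img, float).ravel(); b -= b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

    Higher PSNR and a correlation coefficient closer to 1 both indicate a denoised image closer to the reference.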

  8. [A non-local means approach for PET image denoising].

    PubMed

    Yin, Yong; Sun, Weifeng; Lu, Jie; Liu, Tonghai

    2010-04-01

    Denoising is an important issue in medical image processing. Based on an analysis of the non-local means algorithm recently reported by Buades et al., we propose adapting it for PET image denoising. Experimental denoising results on real clinical PET images show that the non-local means method is superior to median filtering and Wiener filtering: it suppresses noise in PET images effectively while preserving structural details that are important for diagnosis.
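
    The non-local means idea the authors adapt replaces each pixel with a weighted average of pixels whose surrounding patches look similar. A minimal NumPy sketch (patch size, search window, and filtering parameter h are illustrative choices, not the paper's settings):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=8.0):
    """Basic non-local means: restore each pixel as a weighted average of
    pixels whose surrounding patches are similar; weights decay with the
    mean squared patch difference."""
    pad = patch // 2
    padded = np.pad(np.asarray(img, float), pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros((rows, cols))
    sr = search // 2
    for i in range(rows):
        for j in range(cols):
            p_ref = padded[i:i + patch, j:j + patch]
            wsum = acc = 0.0
            for a in range(max(0, i - sr), min(rows, i + sr + 1)):
                for b in range(max(0, j - sr), min(cols, j + sr + 1)):
                    d2 = np.mean((p_ref - padded[a:a + patch, b:b + patch]) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[a, b]
            out[i, j] = acc / wsum
    return out
```

    Because cross-edge patches get near-zero weights, edges survive while flat regions are smoothed.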

  9. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach poses this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, which suggests solving general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature of our scheme is a linearization of the compression-decompression process, so as to obtain a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.

  10. Denoising of 3D magnetic resonance images by using higher-order singular value decomposition.

    PubMed

    Zhang, Xinyuan; Xu, Zhongbiao; Jia, Nan; Yang, Wei; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-01-01

    The denoising of magnetic resonance (MR) images is important for improving inspection quality and the reliability of quantitative image analysis. Nonlocal filters that exploit similarity and/or sparseness among patches or cubes achieve excellent performance in denoising MR images. Recently, higher-order singular value decomposition (HOSVD) has been demonstrated to be a simple and effective method for exploiting redundancy in the 3D stack of similar patches when denoising 2D natural images. This work investigates the application and improvement of HOSVD for denoising MR volume data. The Wiener-augmented HOSVD method achieves performance comparable to that of BM4D. For further improvement, we propose augmenting the standard HOSVD stage with a second recursive stage: a repeated HOSVD filtering of the weighted sum of the residual and the denoised image from the first stage. The appropriate weights have been investigated in experiments with different image types and noise levels. Experimental results on synthetic and real 3D MR data demonstrate that the proposed method outperforms current state-of-the-art denoising methods.
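
    The HOSVD step at the core of such methods can be sketched directly: stack similar patches into a 3D tensor, compute orthonormal factors from the SVD of each mode unfolding, hard-threshold the core coefficients, and invert. A minimal NumPy sketch (the universal-style threshold is an illustrative choice; the Wiener-augmented and recursive stages are not shown):

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, U, mode):
    """Mode-m product: multiply matrix U into tensor T along `mode`."""
    shape = list(T.shape)
    shape[mode] = U.shape[0]
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis((U @ unfold(T, mode)).reshape([U.shape[0]] + rest), 0, mode)

def hosvd_denoise(stack, sigma):
    """HOSVD hard-thresholding of a p x p x K stack of similar patches.
    Factors come from the SVD of each mode unfolding; core coefficients
    below the threshold are zeroed before reconstruction."""
    U = [np.linalg.svd(unfold(stack, m), full_matrices=False)[0] for m in range(3)]
    core = stack
    for m in range(3):
        core = mode_mult(core, U[m].T, m)
    core = np.where(np.abs(core) < sigma * np.sqrt(2.0 * np.log(stack.size)), 0.0, core)
    for m in range(3):
        core = mode_mult(core, U[m], m)
    return core
```

    Because the transform is orthonormal in every mode, noise stays spread across core coefficients while the shared patch content concentrates in a few large ones.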

  11. Optimized PET imaging for 4D treatment planning in radiotherapy: the virtual 4D PET strategy.

    PubMed

    Gianoli, Chiara; Riboldi, Marco; Fontana, Giulia; Giri, Maria G; Grigolato, Daniela; Ferdeghini, Marco; Cavedon, Carlo; Baroni, Guido

    2015-02-01

    The purpose of this study is to evaluate the performance of a novel strategy, referred to as "virtual 4D PET", aimed at optimizing hybrid 4D CT-PET scanning for radiotherapy treatment planning. The virtual 4D PET strategy applies 4D CT motion modeling to avoid time-resolved PET image acquisition. This leads to a reduction of the radioactive tracer administered to the patient and to a total acquisition time comparable to free-breathing PET studies. The proposed method exploits a motion model derived from 4D CT, which is applied to the free-breathing PET to recover respiratory motion and motion blur. The free-breathing PET is warped according to the motion model in order to generate the virtual 4D PET. The virtual 4D PET strategy was tested on images obtained from a 4D computational anthropomorphic phantom, and its performance was compared to conventional motion-compensated 4D PET. Tests were also carried out on clinical 4D CT-PET scans from seven lung and liver cancer patients. The virtual 4D PET strategy was able to recover lesion motion, with performance comparable to that of motion-compensated 4D PET. Compensation of the activity blurring due to motion was successfully achieved in terms of spill-out removal; specific limitations were highlighted in terms of partial volume compensation. Results on clinical 4D CT-PET scans confirmed the efficacy in 4D PET count statistics optimization, equal to that of free-breathing PET, and in the recovery of lesion motion. Compared to conventional motion compensation strategies that explicitly require 4D PET imaging, the virtual 4D PET strategy reduces clinical workload and computational costs, resulting in significant advantages for radiotherapy treatment planning.

  12. PRINCIPAL COMPONENTS FOR NON-LOCAL MEANS IMAGE DENOISING.

    PubMed

    Tasdizen, Tolga

    2008-01-01

    This paper presents an image denoising algorithm that uses principal component analysis (PCA) in conjunction with the non-local means image denoising algorithm. Image neighborhood vectors used in the non-local means algorithm are first projected onto a lower-dimensional subspace using PCA. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than in the full space. This modification to the non-local means algorithm results in improved accuracy and computational performance. We present an analysis of the proposed method's accuracy as a function of the dimensionality of the projection subspace and demonstrate that denoising accuracy peaks at a relatively low number of dimensions.
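
    The core modification is straightforward: project neighborhood vectors onto a low-dimensional PCA subspace and compute the non-local means similarity weights from distances there. A minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def pca_project(vectors, d):
    """Project neighborhood (patch) vectors onto their top-d principal
    components; NLM distances are then computed in this subspace."""
    X = vectors - vectors.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:d].T

def nlm_weights(coords, idx, h):
    """Similarity weights of all patches to patch `idx`, from subspace distances."""
    d2 = np.sum((coords - coords[idx]) ** 2, axis=1)
    return np.exp(-d2 / (h * h))
```

    When the patch set is intrinsically low-dimensional, the projected distances closely match the full-space distances at a fraction of the cost.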

  13. Fast Translation Invariant Multiscale Image Denoising.

    PubMed

    Li, Meng; Ghosal, Subhashis

    2015-12-01

    Translation invariant (TI) cycle spinning is an effective method for removing artifacts from images. However, for a method using O(n) time, exact TI cycle spinning by averaging all possible circulant shifts requires O(n^2) time, where n is the number of pixels, and is therefore not feasible in practice. Existing literature has investigated efficient algorithms to compute TI versions of some denoising approaches, such as the Haar wavelet. Multiscale methods, especially those based on likelihood decomposition, such as penalized likelihood estimators and Bayesian methods, have become popular in image processing because of their effectiveness in denoising images. As far as we know, there is no systematic investigation of the TI calculation for general multiscale approaches. In this paper, we propose a fast TI (FTI) algorithm and a more general k-TI algorithm allowing TI for the last k scales of the image, both applicable to general d-dimensional images (d = 2, 3, ...) with either Gaussian or Poisson noise. The proposed FTI yields the exact TI estimate but requires only O(n log n) time. The proposed k-TI can achieve almost the same performance as the exact TI estimate, but requires even less time. We achieve this by exploiting the regularity present in the multiscale structure, which is justified theoretically. The proposed FTI and k-TI are generic in that they are applicable to any smoothing technique based on the multiscale structure. We demonstrate the FTI and k-TI algorithms on some recently proposed state-of-the-art methods for both Poisson- and Gaussian-noised images. Both simulations and real data applications confirm the appealing performance of the proposed algorithms. MATLAB toolboxes are accessible online to reproduce the results and can be applied to general multiscale denoising approaches provided by users.
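
    For reference, the exact-but-slow baseline the paper accelerates can be written down directly: denoise every circulant shift of the image and average the unshifted results. A minimal NumPy sketch using a one-level Haar soft-threshold as the underlying shift-variant denoiser (an illustrative choice, not one of the paper's multiscale likelihood methods):

```python
import numpy as np

def haar_shrink(img, tau):
    """One-level 2-D Haar transform, soft-threshold the detail bands, invert.
    This denoiser is shift-variant, which is what cycle spinning averages out."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll, lh = (a + b + c + d) / 2.0, (a - b + c - d) / 2.0
    hl, hh = (a + b - c - d) / 2.0, (a - b - c + d) / 2.0
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def ti_denoise(img, tau):
    """Exact TI estimate: average denoise(shift(img)) over all circulant
    shifts, unshifted -- the O(n^2) reference the FTI algorithm speeds up."""
    rows, cols = img.shape
    acc = np.zeros((rows, cols))
    for si in range(rows):
        for sj in range(cols):
            shifted = np.roll(img, (si, sj), axis=(0, 1))
            acc += np.roll(haar_shrink(shifted, tau), (-si, -sj), axis=(0, 1))
    return acc / (rows * cols)
```

    The double loop over all n shifts is exactly the O(n^2) cost the FTI algorithm reduces to O(n log n).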

  14. Image denoising by exploring external and internal correlations.

    PubMed

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single-image denoising suffers from the limited data available within a noisy image. In this paper, we propose a novel image denoising scheme that explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches in the noisy image and in web images, respectively. We then reduce noise with a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch-matching accuracy in external denoising; the internal denoising is frequency truncation on the internal cubes. Combining the internal and external denoising patches yields a preliminary denoising result. In the second stage, we reduce noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch-matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements; e.g., it achieves >2 dB gain over BM3D across a wide range of noise levels.

  15. Image denoising using a tight frame.

    PubMed

    Shen, Lixin; Papadakis, Manos; Kakadiaris, Ioannis A; Konstantinidis, Ioannis; Kouri, Donald; Hoffman, David

    2006-05-01

    We present a general mathematical theory for lifting frames that allows us to modify existing filters to construct new ones that form Parseval frames. We apply our theory to design nonseparable Parseval frames from separable (tensor) products of a piecewise linear spline tight frame. These new frame systems incorporate the weighted average operator, the Sobel operator, and the Laplacian operator in directions that are integer multiples of 45 degrees. A new image denoising algorithm is then proposed, tailored to the specific properties of these new frame filters. We demonstrate the performance of our algorithm on a diverse set of images with very encouraging results.

  16. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using the patch-based statistical characteristics is a flexible method to improve the denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths to restore specific image contents. Combining different statistical characteristics to use their strengths together may have the potential to improve denoising results. This work proposes a method combining statistical characteristics to adaptively select statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  17. Remote sensing image denoising by using discrete multiwavelet transform techniques

    NASA Astrophysics Data System (ADS)

    Wang, Haihui; Wang, Jun; Zhang, Jian

    2006-01-01

    We present a new method using the GHM discrete multiwavelet transform for image denoising in this paper. Developments in wavelet theory have given rise to wavelet thresholding, a popular method for extracting a signal from noisy data. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, which makes them more suitable than scalar wavelets for various image processing applications, especially denoising. The method is based on thresholding of multiwavelet coefficients and takes into account the covariance structure of the transform. Denoising is carried out by thresholding the multiwavelet coefficients that result from preprocessing and the discrete multiwavelet transform. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. We apply the multiwavelet-based method to remote sensing image denoising. The multiwavelet transform is a relatively new technique, and it has the advantage over other techniques of distorting the spectral characteristics of the image less. The experimental results show that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.

  18. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct this mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.

  19. Denoising two-photon calcium imaging data.

    PubMed

    Malik, Wasim Q; Schummers, James; Sur, Mriganka; Brown, Emery N

    2011-01-01

    Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal-plus-colored-noise model in which the signal is represented as a harmonic regression and the correlated noise is represented as an autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates the stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal-plus-correlated-noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems to which correlated noise models apply.
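
    The harmonic-regression component of such a model is ordinary least squares on a sinusoidal design matrix. A minimal NumPy sketch (the AR noise model, Burg step, and cyclic descent are not shown; the fundamental frequency f and number of harmonics here are assumptions):

```python
import numpy as np

def harmonic_regression(t, y, f, n_harm=2):
    """Least-squares fit of a constant plus n_harm harmonics of fundamental
    frequency f; the fitted curve is the stimulus-evoked component."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * f * t), np.sin(2 * np.pi * k * f * t)]
    X = np.stack(cols, axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta
```

    The residual y minus this fit is what the autoregressive noise model would then be estimated from.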

  20. Patch-based near-optimal image denoising.

    PubMed

    Chatterjee, Priyam; Milanfar, Peyman

    2012-04-01

    In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.
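
    The patch-domain Wiener (LMMSE) filter at the heart of such methods shrinks each group of similar patches toward its mean using a covariance estimated from the group itself. A minimal NumPy sketch of the standard empirical Wiener construction (x_hat = mu + C (C + sigma^2 I)^{-1} (y - mu)), not necessarily the authors' exact estimator:

```python
import numpy as np

def wiener_patch_estimate(patches, sigma):
    """Empirical Wiener (LMMSE) estimate for a group of similar noisy patches.
    patches: (K, p*p) vectorized patches sharing similar content; sigma: noise std.
    The clean covariance C is the sample covariance minus sigma^2 I, clipped
    to remain positive semidefinite."""
    mu = patches.mean(axis=0)
    Y = patches - mu
    C = Y.T @ Y / max(len(patches) - 1, 1) - sigma ** 2 * np.eye(Y.shape[1])
    w, V = np.linalg.eigh(C)               # clip negative eigenvalues to zero
    C = (V * np.maximum(w, 0.0)) @ V.T
    gain = C @ np.linalg.inv(C + sigma ** 2 * np.eye(C.shape[0]))
    return mu + Y @ gain.T
```

    When the group is genuinely redundant, the estimated clean covariance is small and each patch is pulled strongly toward the group mean.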

  1. Image denoising via sparse and redundant representations over learned dictionaries.

    PubMed

    Elad, Michael; Aharon, Michal

    2006-12-01

    We address the image denoising problem, where zero-mean white homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
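
    The sparse-coding step inside K-SVD-style denoising is typically orthogonal matching pursuit. A minimal NumPy sketch of OMP (the dictionary update of K-SVD itself is not shown):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, refit all picked atoms by least squares, repeat k times.
    Columns of D are assumed unit-norm."""
    residual = y.astype(float)
    support, coef = [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```

    In denoising, the stopping rule is usually a residual-norm tolerance tied to the noise level rather than a fixed atom count k.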

  2. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that deviate significantly from the (vector) median and then using the remaining neighborhood pixels to restore the current pixel. Within the framework, the strength of chrominance denoising is controlled by image brightness. Experimental results show that the proposed method clearly outperforms other representative denoising methods in terms of both objective measures and visual evaluation.
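
    The per-pixel filtering step described above, excluding neighbors that deviate from the median and averaging the rest, can be sketched in NumPy; here a marginal (per-channel) median stands in for the paper's vector median, so this is an illustrative simplification:

```python
import numpy as np

def robust_mean_filter(neighborhood, k=2.0):
    """Restore a pixel from its (possibly vector-valued) neighborhood:
    drop samples whose distance to the marginal median is large relative
    to the typical distance, then average the rest."""
    nb = np.asarray(neighborhood, float).reshape(len(neighborhood), -1)
    med = np.median(nb, axis=0)
    dist = np.linalg.norm(nb - med, axis=1)
    keep = dist <= k * np.median(dist) + 1e-12
    return nb[keep].mean(axis=0)
```

    Unlike a plain median filter, the surviving samples are averaged, so the output suppresses both impulsive outliers and small-amplitude granular noise.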

  3. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, with the modeling of coefficients enabled by Skellam distribution analysis. We extend these results by solving for Skellam shrinkage operators that minimize the risk functional in the multiscale Poisson image denoising setting. The resulting minimum-risk shrinkage operator effectively produces denoised wavelet coefficients with the minimum attainable L2 error.
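
    The paper's Skellam-based minimum-risk operators are not reproduced here, but the standard variance-stabilization alternative for Poisson data, against which such operators are typically compared, is easy to sketch: apply the Anscombe transform, denoise as if the noise were unit-variance Gaussian, then invert:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to data with approximately
    unit Gaussian variance (accurate for means above roughly 4)."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (closed-form unbiased inverses exist but
    are more involved)."""
    return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0
```

    Operating directly on Skellam-distributed Haar coefficients, as the paper does, avoids the low-count bias this stabilization route suffers from.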

  4. Nonlocal hierarchical dictionary learning using wavelets for image denoising.

    PubMed

    Yan, Ruomei; Shao, Ling; Liu, Yan

    2013-12-01

    Exploiting the sparsity of image representation models is critical for image denoising. The best currently available denoising methods take advantage of sparsity arising from image self-similarity and from pre-learned or fixed representations. Most of these methods, however, still have difficulty tackling high noise levels or noise models other than Gaussian. In this paper, the multiresolution structure and sparsity of wavelets are exploited by nonlocal dictionary learning within each wavelet decomposition level. Experimental results show that our proposed method outperforms two state-of-the-art image denoising algorithms at higher noise levels. Furthermore, our approach is more adaptive to the less extensively researched uniform noise.

  5. Dual-domain denoising in three dimensional magnetic resonance imaging.

    PubMed

    Peng, Jing; Zhou, Jiliu; Wu, Xi

    2016-08-01

    Denoising is a crucial preprocessing procedure for three-dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains; moreover, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from both the spatial and transform domains. The DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust and accurate noise estimation was introduced for iterative filtering, which is simple and computationally efficient. The proposed method was compared quantitatively and qualitatively with existing methods on synthetic and in vivo MRI datasets. The results suggest that the novel DDID algorithm performs well and provides results competitive with existing MRI denoising filters.

  6. Gradient histogram estimation and preservation for texture enhanced image denoising.

    PubMed

    Zuo, Wangmeng; Zhang, Lei; Song, Chunwei; Zhang, David; Gao, Huijun

    2014-06-01

    Natural image statistics plays an important role in image denoising, and various natural image priors, including gradient-based, sparse representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper, we propose a texture enhanced image denoising method by enforcing the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.
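
    The gradient histogram preservation idea relies on monotone histogram specification: remap the denoised image's gradient magnitudes so their distribution matches the reference histogram while preserving their ranks. A minimal NumPy sketch (equal-sized arrays assumed; this is the generic specification step, not the authors' full GHP algorithm):

```python
import numpy as np

def match_gradient_histogram(grad, ref_grad):
    """Monotone histogram specification: assign the sorted reference values
    to the rank positions of `grad`, so the output has the reference
    distribution while preserving the ordering of `grad`."""
    order = np.argsort(grad, axis=None)
    matched = np.empty(grad.size)
    matched[order] = np.sort(ref_grad, axis=None)
    return matched.reshape(grad.shape)
```

    Pushing the denoised gradients toward the reference distribution is what restores texture contrast that plain smoothing would flatten.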

  7. Color Image Denoising via Discriminatively Learned Iterative Shrinkage.

    PubMed

    Sun, Jian; Sun, Jian; Xu, Zingben

    2015-11-01

    In this paper, we propose a novel model, the discriminatively learned iterative shrinkage (DLIS) model, for color image denoising. The DLIS generalizes wavelet shrinkage by iteratively performing shrinkage over patch groups followed by whole-image aggregation. We discriminatively learn the shrinkage functions and basis from training pairs of noisy/noise-free images, which can adaptively handle different noise characteristics in the luminance/chrominance channels and the unknown structured noise in real-captured color images. Furthermore, to remove splotchy real color noise, we design a Laplacian-pyramid-based denoising framework that progressively recovers the clean image from the coarsest scale to the finest scale using the DLIS model learned from real color noise. Experiments show that our proposed approach achieves state-of-the-art denoising results on both a synthetic denoising benchmark and real-captured color images.

  8. Terahertz digital holography image denoising using stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Cui, Shan-Shan; Li, Qi; Chen, Guanghao

    2015-04-01

    Terahertz (THz) holography is a frontier technology in the terahertz imaging field. However, reconstructed images of holograms are inherently affected by speckle noise on account of the coherent nature of light scattering. The stationary wavelet transform (SWT) is an effective tool for speckle noise removal. In this paper, two SWT-based algorithms originally developed for despeckling SAR images, threshold estimation and a smoothing operation, are applied to THz images. Denoised images are then quantitatively assessed by the speckle index. Experimental results show that the stationary wavelet transform is superior to the discrete wavelet transform in both denoising performance and image detail preservation. For the threshold estimation, high decomposition levels are needed for better denoising results. The smoothing operation combined with the stationary wavelet transform yields the best denoising result at a single decomposition level, with 5×5 average filtering.

  9. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.

    PubMed

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    A contourlet-domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm; (2) the direction statistic, which represents the difference between subbands, is introduced into threshold-function-based contourlet-domain denoising approaches in the form of weights to obtain the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.

  10. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2016-10-03

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks using a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weights determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves image denoising performance, and its average performance is better than those of two state-of-the-art image denoising algorithms in both subjective and objective measures.

  11. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's radiation exposure, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us identify key parameters from tissue time-concentration curves, and reduces oscillations in the curve. GPR is superior to the comparable techniques used in this study.

  12. Sparsity based denoising of spectral domain optical coherence tomography images

    PubMed Central

    Fang, Leyuan; Li, Shutao; Nie, Qing; Izatt, Joseph A.; Toth, Cynthia A.; Farsiu, Sina

    2012-01-01

    In this paper, we make contact with the field of compressive sensing and present a development and generalization of tools and results for reconstructing irregularly sampled tomographic data. In particular, we focus on denoising Spectral-Domain Optical Coherence Tomography (SDOCT) volumetric data. We take advantage of customized scanning patterns in which a selected number of B-scans are imaged at higher signal-to-noise ratio (SNR). We learn a sparse representation dictionary for each of these high-SNR images and use these dictionaries to denoise the low-SNR B-scans. We name this method multiscale sparsity based tomographic denoising (MSBTD). We show the qualitative and quantitative superiority of the MSBTD algorithm compared to popular denoising algorithms on images from normal and age-related macular degeneration eyes in a multi-center clinical trial. We have made the corresponding data set and software freely available online. PMID:22567586

  13. Image denoising based on wavelet cone of influence analysis

    NASA Astrophysics Data System (ADS)

    Pang, Wei; Li, Yufeng

    2009-11-01

    Donoho et al. proposed a method for denoising by thresholding based on the wavelet transform, and the application of their method to image denoising has been extremely successful. However, this method assumes that the noise is additive white Gaussian noise, and it is not effective against impulse noise. In this paper, a new image denoising algorithm based on wavelet cone of influence (COI) analysis is proposed, which can effectively remove impulse noise and preserve image edges via the undecimated discrete wavelet transform (UDWT). Furthermore, combined with the traditional wavelet thresholding denoising method, it can also suppress a wider range of noise types, such as Gaussian noise, impulse noise, Poisson noise, and other mixed noise. Experimental results illustrate the advantages of this method.
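The Donoho-style wavelet thresholding that the paper builds on can be sketched with a single-level orthonormal Haar transform and the universal threshold. This is a minimal numpy illustration of the baseline only; the paper's own contribution (COI analysis over the undecimated transform) is not implemented here.

```python
import numpy as np

def haar_fwd(x):
    def step(v):  # one Haar analysis step along axis 0
        a = (v[0::2] + v[1::2]) / np.sqrt(2.0)
        d = (v[0::2] - v[1::2]) / np.sqrt(2.0)
        return np.concatenate([a, d], axis=0)
    return step(step(x).T).T

def haar_inv(y):
    def istep(w):  # inverse of one analysis step
        n = w.shape[0] // 2
        a, d = w[:n], w[n:]
        v = np.empty_like(w)
        v[0::2] = (a + d) / np.sqrt(2.0)
        v[1::2] = (a - d) / np.sqrt(2.0)
        return v
    return istep(istep(y.T).T)

def wavelet_denoise(img, sigma):
    y = haar_fwd(img)
    t = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
    h, w = img.shape[0] // 2, img.shape[1] // 2
    detail = np.ones(img.shape, dtype=bool)
    detail[:h, :w] = False                          # keep the LL band
    y[detail] = np.sign(y[detail]) * np.maximum(np.abs(y[detail]) - t, 0.0)
    return haar_inv(y)

rng = np.random.default_rng(2)
u = np.linspace(0.0, np.pi, 64)
clean = np.outer(np.sin(u), np.sin(u))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = wavelet_denoise(noisy, sigma=0.1)
```

The Gaussian-noise assumption is visible in the threshold formula: an impulse produces one large detail coefficient that survives soft thresholding, which is exactly the failure mode the COI analysis targets.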

  14. Applications of discrete multiwavelet techniques to image denoising

    NASA Astrophysics Data System (ADS)

    Wang, Haihui; Peng, Jiaxiong; Wu, Wei; Ye, Bin

    2003-09-01

    In this paper, we present a new method for image denoising using the 2-D discrete multiwavelet transform. Developments in wavelet theory have given rise to wavelet thresholding, a popular method for extracting a signal from noisy data. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, properties that make multiwavelets well suited to various image processing applications, especially denoising. The method is based on thresholding the multiwavelet coefficients that result from preprocessing followed by the discrete multiwavelet transform, and it takes into account the covariance structure of the transform. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. The performance of multiwavelets is compared with that of scalar wavelets. Simulations reveal that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.

  15. Denoising MR spectroscopic imaging data with low-rank approximations.

    PubMed

    Nguyen, Hien M; Peng, Xi; Do, Minh N; Liang, Zhi-Pei

    2013-01-01

    This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singular value decomposition (SVD). The proposed method has been validated using simulated and experimental data, producing encouraging results. Specifically, the method can effectively denoise MRSI data in a wide range of SNR values while preserving spatial-spectral features. The method could prove useful for denoising MRSI data and other spatial-spectral and spatial-temporal imaging data as well.
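The Casorati-matrix step can be sketched in a few lines: arrange the data as a (voxels x spectral points) matrix and truncate its SVD to the assumed rank. The rank-2 toy data below is illustrative, not real MRSI data, and the Hankel/linear-predictability part of the method is not shown.

```python
import numpy as np

def lowrank_denoise(casorati, rank):
    # casorati: (n_voxels, n_spectral_points) data matrix
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(3)
# rank-2 ground truth: two spatial maps times two spectral profiles
spatial = rng.normal(size=(100, 2))
spectral = rng.normal(size=(2, 64))
clean = spatial @ spectral
noisy = clean + 0.5 * rng.normal(size=clean.shape)
denoised = lowrank_denoise(noisy, rank=2)
```

Partial separability is what justifies the truncation: when the spatial-spectral data are well approximated by a few spatial maps times a few spectral profiles, the discarded singular components are dominated by noise.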

  16. An adaptive nonlocal means scheme for medical image denoising

    NASA Astrophysics Data System (ADS)

    Thaipanich, Tanaphol; Kuo, C.-C. Jay

    2010-03-01

    Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Image denoising is therefore one of the fundamental tasks required by medical image analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results for both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL-means denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
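A minimal, unoptimized version of the underlying NL-means baseline (without the paper's SVD/K-means block classification, adaptive windows, or rotated block matching) might look like the following; the patch size, search radius, and filtering parameter h are illustrative assumptions.

```python
import numpy as np

def nl_means(img, patch=3, search=5, h=0.25):
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]    # patch around (i, j)
            num = den = 0.0
            for a in range(max(0, i - search), min(rows, i + search + 1)):
                for b in range(max(0, j - search), min(cols, j + search + 1)):
                    cand = padded[a:a + patch, b:b + patch]
                    # weight decays with the patch distance
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    num += w * img[a, b]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(4)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0                                    # vertical edge
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = nl_means(noisy)
```

The adaptive extensions in the paper all act on the same two quantities this sketch exposes: which candidate patches are compared, and how their weights are computed.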

  17. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse-coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing and thus inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, after which the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data set demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values of the perceptual visual quality of denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  18. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For quantitative assessment, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising; it is even hard to distinguish the original noiseless image from the image recovered by GMCA by visual inspection.
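The two assessment indices used here are standard. Below is a minimal sketch of PSNR and of a single-window (global) simplification of SSIM, assuming images scaled to [0, 1]; the full SSIM index averages the same statistic over local windows.

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=1.0):
    # single-window simplification of the SSIM statistic
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

For example, a constant error of 0.1 on a unit-peak image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB, while SSIM of an image with itself is exactly 1.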

  19. Image sequence denoising via sparse and redundant representations.

    PubMed

    Protter, Matan; Elad, Michael

    2009-01-01

    In this paper, we consider denoising of image sequences that are corrupted by zero-mean additive white Gaussian noise. Relative to single-image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in obtaining both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising, extending previously reported work. In the single-image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: i) the atoms used are 3-D; ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and iii) averaging is done on patches in both spatially and temporally neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single-image algorithm sequentially. The algorithm's performance is experimentally compared to several state-of-the-art algorithms, demonstrating comparable or favorable results.

  20. Image denoising via group Sparse representation over learned dictionary

    NASA Astrophysics Data System (ADS)

    Cheng, Pan; Deng, Chengzhi; Wang, Shengqian; Zhang, Chunfeng

    2013-10-01

    Images are one of the vital ways we obtain information. In practical applications, however, images are often subject to a variety of noise, so solving the image denoising problem is particularly important. The K-SVD algorithm improves the denoising effect by sparse coding over a learned dictionary instead of a traditional fixed sparse coding dictionary. To further improve the denoising effect, we propose to extend the K-SVD algorithm via group sparse representation. The key point of this method is dividing the sparse coefficients into groups, so that the correlation among the elements can be adjusted by controlling the size of the groups. This new approach improves the local constraints between adjacent atoms, thereby increasing the correlation between the atoms. The experimental results show that our method achieves better image recovery, efficiently prevents blocking artifacts, and produces smoother images.

  1. Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations

    NASA Astrophysics Data System (ADS)

    Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani

    2016-09-01

    Denoising of an image is an essential step in many image processing applications. In any image denoising algorithm, a major concern is to preserve interesting structures of the image, such as abrupt changes in image intensity values (edges). In this paper, an efficient algorithm for image denoising is proposed that recovers a coherent estimate of the original image from the noisy image using diffusion equations in the pixon domain. The process consists of two main steps. In the first step, the pixons for the noisy image are obtained using a K-means clustering process; the next step applies diffusion equations to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images, and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm performs better at preserving edge details, in terms of Figure of Merit, and achieves improved peak signal-to-noise ratio values. The proposed method thus offers a denoising technique that preserves edge details.

  2. Blind source separation based x-ray image denoising from an image sequence.

    PubMed

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are modeled as different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising: the denoised image's quality improves as more frames are included in the x-ray image sequence, but at greater computational cost, so the number of frames should be chosen as a trade-off between denoising performance and runtime.
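The multi-frame averaging baseline that BSS is compared against is easy to reproduce: averaging n registered frames of a stable signal shrinks the noise standard deviation by roughly 1/sqrt(n), which is the source of the performance/runtime trade-off. The synthetic frames below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.full((64, 64), 0.5)                         # stable image signal
frames = signal + 0.1 * rng.normal(size=(16, 64, 64))   # 16 noisy exposures

def residual_std(stack, n):
    # noise remaining after averaging the first n frames
    return (stack[:n].mean(axis=0) - signal).std()

# averaging n frames shrinks the noise std by roughly 1/sqrt(n)
one, four, sixteen = (residual_std(frames, n) for n in (1, 4, 16))
```

With a noise std of 0.1 per frame, averaging 16 frames should leave roughly 0.025, at 16 times the acquisition cost of a single frame.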

  3. Image denoising based on wavelets and multifractals for singularity detection.

    PubMed

    Zhong, Junmei; Ning, Ruola

    2005-10-01

    This paper presents a very efficient algorithm for image denoising based on wavelets and multifractals for singularity detection. A challenge of image denoising is how to preserve the edges of an image when reducing noise. By modeling the intensity surface of a noisy image as statistically self-similar multifractal processes and taking advantage of the multiresolution analysis with wavelet transform to exploit the local statistical self-similarity at different scales, the pointwise singularity strength value characterizing the local singularity at each scale was calculated. By thresholding the singularity strength, wavelet coefficients at each scale were classified into two categories: the edge-related and regular wavelet coefficients and the irregular coefficients. The irregular coefficients were denoised using an approximate minimum mean-squared error (MMSE) estimation method, while the edge-related and regular wavelet coefficients were smoothed using the fuzzy weighted mean (FWM) filter aiming at preserving the edges and details when reducing noise. Furthermore, to make the FWM-based filtering more efficient for noise reduction at the lowest decomposition level, the MMSE-based filtering was performed as the first pass of denoising followed by performing the FWM-based filtering. Experimental results demonstrated that this algorithm could achieve both good visual quality and high PSNR for the denoised images.

  4. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  5. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction.

  6. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser light signal and the kinds of underwater noise are described, and the common noise suppression algorithms (the Wiener filter, the median filter, and the average filter) are presented. The advantages and disadvantages of each algorithm in terms of image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are given, comparing their denoising ability.
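The median-versus-average comparison can be reproduced with a toy numpy sketch: on an edge image corrupted by impulse noise, a 3x3 median filter preserves the edge and rejects impulses far better than a 3x3 average filter. The window size and noise fraction are illustrative assumptions.

```python
import numpy as np

def window_filter(img, size, reducer):
    # apply a sliding-window statistic (median, mean, ...) to every pixel
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = reducer(p[i:i + size, j:j + size])
    return out

rng = np.random.default_rng(6)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                 # vertical edge
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1                # ~10% impulse noise
noisy[mask] = rng.integers(0, 2, mask.sum()).astype(float)
median_out = window_filter(noisy, 3, np.median)
average_out = window_filter(noisy, 3, np.mean)
```

The mean filter smears both the impulses and the edge into their neighbourhoods, while the median discards outliers outright, which is why median filtering is the usual choice against impulse noise.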

  7. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and the influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that the DT-CWT outperforms the 2-D DWT given an appropriate selection of the threshold level.

  8. Image denoising via adaptive eigenvectors of graph Laplacian

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the used eigenvectors in the traditional EGL method, in our method, the eigenvectors are adaptively selected in the whole denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using the deviation estimation of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the average coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen in the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group sparse model. The experiments show that our method not only improves the practicality of the EGL methods with the dependence reduction of the parameter setting, but also can outperform some well-developed denoising methods, especially for noise with large deviations.

  9. Denoising Magnetic Resonance Images Using Collaborative Non-Local Means.

    PubMed

    Chen, Geng; Zhang, Pei; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2016-02-12

    Noise artifacts in magnetic resonance (MR) images increase the complexity of image processing workflows and decrease the reliability of inferences drawn from the images. It is thus often desirable to remove such artifacts beforehand for more robust and effective quantitative analysis. It is important to preserve the integrity of relevant image information while removing noise in MR images. A variety of approaches have been developed for this purpose, and the non-local means (NLM) filter has been shown to be able to achieve state-of-the-art denoising performance. For effective denoising, NLM relies heavily on the existence of repeating structural patterns, which however might not always be present within a single image. This is especially true when one considers the fact that the human brain is complex and contains a lot of unique structures. In this paper we propose to leverage the repeating structures from multiple images to collaboratively denoise an image. The underlying assumption is that it is more likely to find repeating structures from multiple scans than from a single scan. Specifically, to denoise a target image, multiple images, which may be acquired from different subjects, are spatially aligned to the target image, and an NLM-like block matching is performed on these aligned images with the target image as the reference. This will significantly increase the number of matching structures and thus boost the denoising performance. Experiments on both synthetic and real data show that the proposed approach, collaborative non-local means (CNLM), outperforms the classic NLM and yields results with markedly improved structural details.

  10. Region-based image denoising through wavelet and fast discrete curvelet transform

    NASA Astrophysics Data System (ADS)

    Gu, Yanfeng; Guo, Yan; Liu, Xing; Zhang, Ye

    2008-10-01

    Image denoising has always been one of the important research topics in the image processing field. In this paper, the fast discrete curvelet transform (FDCT) and the undecimated wavelet transform (UDWT) are proposed for image denoising. A noisy image is first denoised by FDCT and UDWT separately. The whole image space is then divided into edge and non-edge regions. After that, a wavelet transform is performed on the images denoised by FDCT and UDWT, respectively. Finally, the resultant image is fused using the edge-region wavelet coefficients of the image denoised by FDCT and the non-edge-region wavelet coefficients of the image denoised by UDWT. The proposed method is validated through numerical experiments conducted on standard test images. The experimental results show that the proposed algorithm outperforms wavelet-based and curvelet-based image denoising methods and preserves linear features well.

  11. Single-image noise level estimation for blind denoising.

    PubMed

    Liu, Xinhao; Tanaka, Masayuki; Okutomi, Masatoshi

    2013-12-01

    Noise level is an important parameter for many image processing applications. For example, the performance of an image denoising algorithm can be much degraded by poor noise level estimation. Most existing denoising algorithms simply assume the noise level is known, which largely prevents their practical use. Moreover, even given the true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to the scene complexity. Our approach selects low-rank patches without high-frequency components from a single noisy image, based on the gradients of the patches and their statistics. The noise level is then estimated from the selected patches using principal component analysis. Because the true noise level does not always give the best performance for non-blind denoising algorithms, we further tune the noise level parameter for non-blind denoising. Experiments demonstrate that both the accuracy and the stability of our method are superior to state-of-the-art noise level estimation algorithms for various scenes and noise levels.
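The PCA step can be sketched as follows: collect patches, keep a low-variance subset as a crude stand-in for the paper's gradient-based selection of low-rank patches, and read the noise variance off the smallest eigenvalue of their covariance. This is a rough sketch that tends to slightly underestimate sigma; the patch size and selection rule are illustrative assumptions.

```python
import numpy as np

def estimate_sigma(img, patch=5):
    # gather all patches, keep the flattest quarter, then take the
    # smallest covariance eigenvalue as the noise variance estimate
    ps = [img[i:i + patch, j:j + patch].ravel()
          for i in range(img.shape[0] - patch + 1)
          for j in range(img.shape[1] - patch + 1)]
    P = np.asarray(ps)
    flat = P[np.argsort(P.var(axis=1))[: len(P) // 4]]
    cov = np.cov(flat, rowvar=False)
    return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 96)
clean = np.outer(x, x)                        # smooth, texture-free scene
noisy = clean + 0.05 * rng.normal(size=clean.shape)
sigma_hat = estimate_sigma(noisy)
```

The rationale is that a texture-free patch lies near a low-dimensional signal subspace, so the trailing principal components of its covariance are pure noise and their eigenvalues cluster around the noise variance.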

  12. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
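Of the three filters benchmarked, the bilateral filter is the easiest to sketch: each output pixel is a weighted average whose weights combine a spatial (domain) Gaussian with an intensity (range) Gaussian. Below is a direct, unoptimized numpy version (not the GPU implementation); the stencil radius and the two sigmas are illustrative assumptions.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.3):
    pad = np.pad(img, radius, mode='reflect')
    ax = np.arange(-radius, radius + 1)
    # spatial (domain) kernel, identical for every pixel
    spatial = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # intensity (range) kernel, recomputed per pixel
            range_k = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * range_k
            out[i, j] = (w * win).sum() / w.sum()
    return out

rng = np.random.default_rng(8)
clean = np.zeros((32, 32))
clean[16:, :] = 1.0                           # horizontal edge
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = bilateral(noisy)
```

The study's observation about scaling parameters is visible here: if sigma_r is too large the range kernel degenerates to an ordinary Gaussian blur, and if too small almost no averaging occurs, so both sigmas must be matched to the noise level.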

  13. Image denoising using the higher order singular value decomposition.

    PubMed

    Rajwade, Ajit; Rangarajan, Anand; Banerjee, Arunava

    2013-04-01

    In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.

  14. Blind Image Denoising via Dependent Dirichlet Process Tree.

    PubMed

    Zhu, Fengyuan; Chen, Guangyong; Hao, Jianye; Heng, Pheng-Ann

    2016-08-31

    Most existing image denoising approaches assume the noise to be homogeneous white Gaussian with known intensity. However, in real noisy images, the noise model is usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from a noisy one with an unknown noise model. To model the empirical noise of an image, our method introduces a mixture of Gaussian distributions, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure first builds a two-layer structural model for noisy patches, treating the clean ones as latent variables. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior called the "Dependent Dirichlet Process Tree" to build the model. This study then derives a variational inference algorithm to estimate the model parameters and recover clean patches. We apply our method to synthetic and real noisy images with different noise models. Compared with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm in coping with practical image denoising tasks.

  15. Statistical Methods for Image Registration and Denoising

    DTIC Science & Technology

    2008-06-19

    Table-of-contents excerpt: 2.5.4 Nonlocal Means; 2.5.5 Patch-Based Denoising with Optimal Spatial Adaptation; 2.5.6 Other Patch-Based Methods; 2.6 Chapter Summary. The snippet also references the nonlocal means [9] and an optimal patch-based algorithm [31], noting that these algorithms all include some measure of pixel similarity.

  16. A New Method for Nonlocal Means Image Denoising Using Multiple Images.

    PubMed

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using a weighted average of the neighbourhood pixels, where the weights are determined by the similarity of these pixels. The key issues of the nonlocal means method are how to select similar patches and how to design their weights. This paper makes two main contributions. The first is that we use two images to denoise each pixel; the two noisy images have the same noise deviation. Instead of using only one image, we calculate the weights from both noisy images. After the first denoising pass, we obtain a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided.

  17. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using a weighted average of the neighbourhood pixels, where the weights are determined by the similarity of these pixels. The key issues of the nonlocal means method are how to select similar patches and how to design their weights. This paper makes two main contributions. The first is that we use two images to denoise each pixel; the two noisy images have the same noise deviation. Instead of using only one image, we calculate the weights from both noisy images. After the first denoising pass, we obtain a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  18. Local Sparse Structure Denoising for Low-Light-Level Image.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2015-12-01

    Sparse and redundant representations perform well in image denoising. However, sparsity-based methods fail to denoise low-light-level (LLL) images because of their heavy and complex noise. They consider sparsity on image patches independently and tend to lose texture structures. To suppress noise and maintain textures simultaneously, it is necessary to embed noise-invariant features into the sparse decomposition process. We therefore used a local structure preserving sparse coding (LSPSc) formulation to explore the local sparse structures (both the sparsity and the local structure) in an image. We found that, with the introduction of a spatial local structure constraint into the general sparse coding algorithm, LSPSc could improve the robustness of sparse representation for patches under serious noise. We further used a kernel LSPSc (K-LSPSc) formulation, which extends LSPSc into the kernel space to weaken the influence of the linear structure constraint on nonlinear data. Based on the robust LSPSc and K-LSPSc algorithms, we constructed a local sparse structure denoising (LSSD) model for LLL images, which was demonstrated to give high performance in denoising natural LLL images, indicating that both the LSPSc- and K-LSPSc-based LSSD models have stable noise inhibition and texture detail preservation properties.

  19. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm comes from ignoring the contributions of dissimilar windows: even though their weights are very small at first sight, the new estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
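    The moment-based preclassification can be illustrated with the first moment alone. This is a sketch under our own assumptions (the paper also uses the second and third moments, and its exact thresholds differ); the function name and threshold factor are ours:

```python
import numpy as np

def moments_similar(p, q, sigma, k=3.0):
    """First-moment preclassification: under additive white Gaussian
    noise with std sigma, the difference between the means of two
    patches with the same underlying content has std sigma*sqrt(2/n),
    so a larger difference marks the windows as dissimilar and their
    NLM weight can be set to zero without computing it."""
    n = p.size
    return bool(abs(p.mean() - q.mean()) <= k * sigma * np.sqrt(2.0 / n))

# Demo: two flat patches with the same content vs. a brighter patch.
rng = np.random.default_rng(1)
sigma = 10.0
flat_a = 100.0 + rng.normal(0, sigma, (7, 7))
flat_b = 100.0 + rng.normal(0, sigma, (7, 7))
edge = 160.0 + rng.normal(0, sigma, (7, 7))
```

Skipping the full patch-distance computation for rejected windows is what removes both the bias and most of the cost.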

  20. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches adds coherently while the uncorrelated noise does not. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, our proposed algorithm shows superior similarity to the ground truth.
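    The patch-combination idea can be sketched in one dimension: if the two energy channels see linearly related signal and independent noise, averaging the normalized patches raises the SNR. A minimal sketch under those assumptions (function names, weights, and the sine test pattern are ours, not the paper's):

```python
import numpy as np

def unit_norm(p):
    # Normalize a patch to zero mean and unit energy.
    p = p - p.mean()
    n = np.linalg.norm(p)
    return p / n if n > 0 else p

def combine_patches(p_low, p_high, w=0.5):
    """Weighted addition of normalized low/high-energy patches: the
    (linearly related) signal adds coherently, the independent noise
    does not, so the combined patch has a higher SNR."""
    return w * unit_norm(p_low) + (1.0 - w) * unit_norm(p_high)

def corr(a, b):
    # Normalized correlation between two patches.
    return float(unit_norm(a) @ unit_norm(b))

# Demo: the high-energy patch is a linear transform of the low-energy one.
rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 64)
clean = unit_norm(np.sin(3 * x))
p_low = np.sin(3 * x) + rng.normal(0, 0.4, 64)
p_high = 2.0 * np.sin(3 * x) + rng.normal(0, 0.8, 64)
comb = combine_patches(p_low, p_high)
```

Dictionary denoising would then be run on `comb` rather than on each noisy channel separately.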

  1. Image denoising with dominant sets by a coalitional game approach.

    PubMed

    Hsiao, Pei-Chi; Chang, Long-Wen

    2013-02-01

    Dominant sets are a new graph partition method for pairwise data clustering proposed by Pavan and Pelillo. We address the problem of dominant sets with a coalitional game model, in which each data point is treated as a player and similar data points are encouraged to group together for cooperation. We propose betrayal and hermit rules to describe the cooperative behaviors among the players. After applying the betrayal and hermit rules, an optimal and stable graph partition emerges, and no player in the partition will change its group. For computational feasibility, we design an approximate algorithm for finding a dominant set of mutually similar players and then apply the algorithm to an application such as image denoising. In image denoising, every pixel is treated as a player who seeks similar partners according to its patch appearance in its local neighborhood. By averaging out the noise over the similar pixels in the dominant sets, we improve nonlocal means image denoising to restore the intrinsic structure of the original images and achieve denoising results competitive with state-of-the-art methods in both visual and quantitative quality.

  2. 4D MR imaging using robust internal respiratory signal

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Wen, Zhifei; Stemkens, Bjorn; Tijssen, R. H. N.; van den Berg, C. A. T.; Hwang, Ken-Pin; Beddar, Sam

    2016-05-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space as well as the anterior body boundary as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal. We compared this proposed IR signal with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end inspiration times in the IR signal and bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not the bellows signal. This suggested that the IR signal determined by the proposed method could potentially correct the faulty bellows signal. The sorted 4D images showed minimal mismatched artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative to effectively sort MR images in 4D.

  3. Fast non local means denoising for 3D MR images.

    PubMed

    Coupé, Pierrick; Yger, Pierre; Barillot, Christian

    2006-01-01

    One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non Local (NL) Means algorithm. This approach uses the natural redundancy of information in images to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as anisotropic diffusion filtering and total variation.

  4. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear-time algorithm, achieved by implementing the Gaussian filter kernel recursively. Experimentally, the AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio.
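    For reference, the plain guided filter (He et al.) that the AGGF builds on can be sketched as below. This is the baseline GF only, not the adaptive Gaussian variant; the integral-image box filter, function names, and parameter values are ours:

```python
import numpy as np

def box_mean(x, r):
    """Mean filter of window radius r, O(1) per pixel via integral images."""
    n = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    H, W = x.shape
    s = (c[n:n + H, n:n + W] - c[0:H, n:n + W]
         - c[n:n + H, 0:W] + c[0:H, 0:W])
    return s / (n * n)

def guided_filter(I, p, r=2, eps=1e-2):
    """Plain guided filter: model the output locally as a linear
    function of the guide I; eps trades smoothing vs. edge preservation."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

# Demo: self-guided smoothing of a flat noisy image.
rng = np.random.default_rng(3)
noisy = 0.5 + rng.normal(0, 0.1, (32, 32))
out = guided_filter(noisy, noisy)
```

The AGGF additionally adapts the offset parameter per pixel, which is where the claimed sharpness gain over the plain GF comes from.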

  5. 2D Orthogonal Locality Preserving Projection for Image Denoising.

    PubMed

    Shikkenawis, Gitam; Mitra, Suman K

    2016-01-01

    Sparse representations using transform-domain techniques are widely used for better interpretation of the raw data. Orthogonal locality preserving projection (OLPP) is a linear technique that tries to preserve the local structure of data in the transform domain as well. The vectorized nature of OLPP requires high-dimensional data to be converted to vector format, and hence may lose the spatial neighborhood information of the raw data. On the other hand, processing 2D data directly not only preserves spatial information but also improves computational efficiency considerably. The 2D OLPP is expected to learn the transformation from 2D data itself. This paper derives the mathematical foundation for 2D OLPP. The proposed technique is used for the image denoising task. Recent state-of-the-art approaches for image denoising work on two major hypotheses, i.e., non-local self-similarity and sparse linear approximations of the data. The locality preserving nature of the proposed approach automatically takes care of the self-similarity present in the image while inferring sparse bases. A global basis is adequate for the entire image. The proposed approach outperforms several state-of-the-art image denoising approaches for gray-scale, color, and texture images.

  6. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant principal components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
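    The core operation, shrinking weak principal components of a local patch matrix, can be sketched as follows. This is a simplified illustration, not the paper's overcomplete local PCA filter; the threshold factor `tau` and the synthetic one-component signal are our assumptions:

```python
import numpy as np

def pca_shrink(P, sigma, tau=2.0):
    """Shrink weak principal components of a patch matrix P
    (n_patches x patch_dim): components whose empirical variance sits
    near the noise floor sigma^2 carry mostly noise and are zeroed
    (tau is a hypothetical threshold factor)."""
    mean = P.mean(axis=0)
    X = P - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comp_var = s ** 2 / P.shape[0]          # per-component variance
    s = np.where(comp_var > tau * sigma ** 2, s, 0.0)
    return (U * s) @ Vt + mean

# Demo: 200 patches spanned by a single signal direction plus noise.
rng = np.random.default_rng(4)
sigma = 0.2
basis = np.sin(np.linspace(0, np.pi, 16))
clean = rng.uniform(-2, 2, (200, 1)) * basis
noisy = clean + rng.normal(0, sigma, clean.shape)
den = pca_shrink(noisy, sigma)
```

In the real filter this shrinkage is applied in overlapping local windows and across the diffusion directions jointly, then the overlapping estimates are aggregated.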

  7. Two-direction nonlocal model for image denoising.

    PubMed

    Zhang, Xuande; Feng, Xiangchu; Wang, Weiwei

    2013-01-01

    Similarities inherent in natural images have been widely exploited for image denoising and other applications. In fact, if a cluster of similar image patches is rearranged into a matrix, similarities exist both between columns and rows. Using the similarities, we present a two-directional nonlocal (TDNL) variational model for image denoising. The solution of our model consists of three components: one component is a scaled version of the original observed image and the other two components are obtained by utilizing the similarities. Specifically, by using the similarity between columns, we get a nonlocal-means-like estimation of the patch with consideration to all similar patches, while the weights are not the pairwise similarities but a set of clusterwise coefficients. Moreover, by using the similarity between rows, we also get nonlocal-autoregression-like estimations for the center pixels of the similar patches. The TDNL model leads to an alternative minimization algorithm. Experiments indicate that the model can perform on par with or better than the state-of-the-art denoising methods.

  8. Oriented wavelet transform for image compression and denoising.

    PubMed

    Chappelier, Vivien; Guillemot, Christine

    2006-10-01

    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.

  9. Optimally stabilized PET image denoising using trilateral filtering.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Mollura, Daniel J

    2014-01-01

    The low resolution and signal-dependent noise distribution in positron emission tomography (PET) images make denoising an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small structures due to resolution limitations or make incorrect assumptions about the noise characteristics; therefore, clinically important quantitative information may be corrupted. To address these challenges, we introduce a novel approach to remove signal-dependent noise in PET images, where the noise distribution is modeled as mixed Poisson-Gaussian. The generalized Anscombe transformation (GAT) is used to stabilize the varying nature of the PET noise. Beyond noise stabilization, it is also desirable for the noise removal filter to preserve the boundaries of structures while smoothing noisy regions, and to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics and metabolic lesion volume. To satisfy all these properties, we extend the bilateral filtering method into trilateral filtering through a multiscaling and optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from patients with different cancers and achieved superior performance compared to widely used denoising techniques in the literature.
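    The GAT mentioned above has a simple closed form. A minimal sketch, assuming the usual parameterization (Poisson gain `alpha`, additive Gaussian noise with mean `mu` and std `sigma`); the function name and demo values are ours:

```python
import numpy as np

def gat(x, alpha=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform: approximately maps
    Poisson-Gaussian data to unit variance, so that a Gaussian
    denoiser can be applied in the transformed domain."""
    arg = alpha * x + 0.375 * alpha ** 2 + sigma ** 2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))

# Demo: two very different intensity levels with the same Gaussian part;
# after the GAT both should have a standard deviation close to 1.
rng = np.random.default_rng(5)
lo = rng.poisson(20, 20000) + rng.normal(0, 2.0, 20000)
hi = rng.poisson(100, 20000) + rng.normal(0, 2.0, 20000)
```

After denoising in the stabilized domain, an (exact unbiased) inverse transform is applied to return to intensity units.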

  10. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contamination. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in de-noising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that, for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.
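    The wavelet-coefficient thresholding approach compared here can be sketched with a one-level Haar transform and soft thresholding at Donoho's universal threshold. A generic illustration, not the specific pipeline used on the FIRST images; all names and the ramp test image are ours:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform -> (LL, LH, HL, HH) subbands."""
    s = np.sqrt(2.0)
    a, d = (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s
    return ((a[:, 0::2] + a[:, 1::2]) / s, (a[:, 0::2] - a[:, 1::2]) / s,
            (d[:, 0::2] + d[:, 1::2]) / s, (d[:, 0::2] - d[:, 1::2]) / s)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    s = np.sqrt(2.0)
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / s, (ll - lh) / s
    d[:, 0::2], d[:, 1::2] = (hl + hh) / s, (hl - hh) / s
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = (a + d) / s, (a - d) / s
    return x

def soft(c, t):
    # Soft thresholding: shrink coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Demo: smooth ramp image plus Gaussian noise, universal threshold.
rng = np.random.default_rng(6)
sigma = 0.1
clean = np.outer(np.ones(16), np.linspace(0, 1, 16))
noisy = clean + rng.normal(0, sigma, clean.shape)
t = sigma * np.sqrt(2 * np.log(noisy.size))
ll, lh, hl, hh = haar2d(noisy)
den = ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

Only the detail subbands are thresholded; the approximation band carries the large-scale signal and is left untouched.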

  11. Denoising in Contrast-Enhanced X-ray Images

    NASA Astrophysics Data System (ADS)

    Jeon, Gwanggil

    2016-12-01

    In this paper, we propose a denoising and contrast-enhancement method for medical images. The main purpose of medical image improvement is to transform lower-contrast data into higher contrast and to reduce high noise levels. To meet this goal, we propose a noise-level estimation method, whereby the noise level is estimated by computing the standard deviation and variance in a local block. The obtained noise level is then used as an input parameter for the block-matching and 3D filtering (BM3D) algorithm, and the denoising process is then performed. The noise-level estimation step is important because the BM3D algorithm does not perform well without correct noise-level information. Simulation results confirm that the proposed method outperforms other benchmarks in terms of both objective and visual performance.
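    A block-based noise-level estimate of the kind described can be sketched as below. This is a generic illustration, assuming that the flattest blocks carry nearly pure noise; the percentile heuristic, block size, and function name are ours, not necessarily the paper's exact estimator:

```python
import numpy as np

def estimate_noise_sigma(img, block=8, pct=10):
    """Block-based noise-level estimate: collect local standard
    deviations over non-overlapping blocks and take a low percentile,
    so that mostly-flat blocks (whose variation is nearly pure noise)
    dominate. The result can be handed to a denoiser such as BM3D."""
    H, W = img.shape
    stds = [img[i:i + block, j:j + block].std()
            for i in range(0, H - block + 1, block)
            for j in range(0, W - block + 1, block)]
    return float(np.percentile(stds, pct))

# Demo: a two-region image (one strong edge) with Gaussian noise, std 5.
rng = np.random.default_rng(7)
sigma = 5.0
clean = np.zeros((64, 64))
clean[:, 32:] = 80.0
noisy = clean + rng.normal(0, sigma, clean.shape)
est = estimate_noise_sigma(noisy)
```

The low percentile keeps blocks straddling the edge (whose std is inflated by structure) from biasing the estimate upward.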

  12. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal (LC) panel, similar to ones commonly used in the display industry, to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture: a digital micromirror device (DMD) will replace the liquid crystal and will be qualified for harsh environments for 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can.

  13. Robust L1 PCA and application in image denoising

    NASA Astrophysics Data System (ADS)

    Gao, Junbin; Kwan, Paul W. H.; Guo, Yi

    2007-11-01

    The so-called robust L1 PCA was introduced in our recent work [1] based on the L1 noise assumption. Due to the heavy-tailed characteristics of the L1 distribution, the proposed model has proven much more robust to data outliers. In this paper, we further demonstrate how the learned robust L1 PCA model can be used to denoise image data.

  14. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal-to-blurring ratio.

  15. Local Spectral Component Decomposition for Multi-Channel Image Denoising.

    PubMed

    Rizkinia, Mia; Baba, Tatsuya; Shirai, Keiichiro; Okuda, Masahiro

    2016-07-01

    We propose a method for local spectral component decomposition based on the line feature of the local distribution. Our aim is to reduce noise in multi-channel images by exploiting the linear correlation in the spectral domain of a local region. We first calculate a linear feature over the spectral components of an M-channel image, which we call the spectral line, and then, using the line, we decompose the image into three components: a single M-channel image and two gray-scale images. By virtue of the decomposition, the noise is concentrated in the two gray-scale images, and thus our algorithm needs to denoise only those two images, regardless of the number of channels. As a result, image deterioration due to imbalance in the spectral component correlation can be avoided. Experiments show that our method improves image quality with less deterioration while preserving vivid contrast, and that it is especially effective for hyperspectral images. The experimental results demonstrate that our proposed method can compete with other state-of-the-art denoising methods.

  16. 4D XCAT phantom for multimodality imaging research

    SciTech Connect

    Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.

    2010-09-15

    Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, "Basic anatomical and physiological data for use in radiological protection: reference values," ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce

  17. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits the interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution, which we use for numerical result generation, and a second, integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate improved denoising performance when using the enhanced modeling over the baseline SSM model.
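    The baseline SSM shrinkage rule this paper refines has a simple closed form: a child coefficient is shrunk using its parent from the next coarser scale, with a deadzone whose radius depends on the noise std and the local signal std. A sketch of that baseline rule (the demo values are ours; the paper's contribution is optimizing the deadzone, which is not reproduced here):

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Sendur-Selesnick bivariate shrinkage of a wavelet coefficient w1
    given its parent w2: the joint magnitude is shrunk by the deadzone
    radius sqrt(3) * sigma_n**2 / sigma, which grows with the noise std
    sigma_n and shrinks with the local signal std sigma."""
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    dead = np.sqrt(3.0) * sigma_n ** 2 / sigma
    return (np.maximum(r - dead, 0.0) / np.maximum(r, 1e-12)) * w1

# Demo: a weak child/parent pair falls inside the deadzone and is
# zeroed; a strong coefficient is only slightly attenuated.
small = float(bivariate_shrink(0.5, 0.5, 1.0, 2.0))
large = float(bivariate_shrink(10.0, 5.0, 1.0, 2.0))
```

The deadzone parameters are exactly what the paper's optimization problems adjust to gain performance without extra cost.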

  18. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B ultrasound is a kind of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with the accuracy of diagnosis. Constructing a method that eliminates speckle noise effectively while preserving image detail is therefore the target of current ultrasonic image de-noising research. This paper aims to remove the inherent speckle noise of B ultrasound images. The proposed algorithm is based on both wavelet transformation and data fusion of B ultrasound images, achieving a smaller mean squared error (MSE) and a greater signal-to-noise ratio (SNR) than other algorithms. The method effectively removes speckle noise from B ultrasound images while preserving detail and edge information, producing better visual results.

  19. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy.

    PubMed

    Chang, Ching-Wei; Mycek, Mary-Ann

    2012-05-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging.
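    For intuition about the TV side of this comparison, a minimal smoothed-ROF total-variation denoiser can be written as plain gradient descent. This is a generic sketch, not the specific TV algorithms benchmarked in the paper; the parameter values, periodic boundaries, and smoothing constant `eps` are our assumptions:

```python
import numpy as np

def tv_denoise(f, lam=0.15, n_iter=300, tau=0.05, eps=1e-2):
    """Smoothed ROF total-variation denoising by gradient descent on
    0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps), using forward
    differences and periodic boundaries for brevity."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # Discrete divergence of the normalized gradient field.
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= tau * ((u - f) - lam * div)
    return u

# Demo: flat image plus Gaussian noise; TV should flatten the noise.
rng = np.random.default_rng(8)
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0, 0.1, clean.shape)
den = tv_denoise(noisy)
```

TV favors piecewise-constant solutions, which is consistent with the lifetime-precision gains reported above for FLIM maps that are locally near-constant.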

  20. Automatic parameter prediction for image denoising algorithms using perceptual quality features

    NASA Astrophysics Data System (ADS)

    Mittal, Anish; Moorthy, Anush K.; Bovik, Alan C.

    2012-03-01

    A natural scene statistics (NSS) based blind image denoising approach is proposed, in which denoising is performed without knowledge of the noise variance present in the image. We show how such a parameter estimate can be used to perform blind denoising by combining blind parameter estimation with a state-of-the-art denoising algorithm [1]. Our experiments show that, for all noise variances simulated on varied image content, our approach is almost always statistically superior to the reference BM3D implementation in terms of perceived visual quality at the 95% confidence level.
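    As a point of comparison for blind noise estimation, a classic alternative to NSS features is Donoho's median-absolute-deviation rule on the finest diagonal wavelet coefficients. A sketch of that standard estimator (not the paper's method; the ramp test image is ours):

```python
import numpy as np

def blind_sigma_mad(img):
    """Donoho-style blind noise estimate: the finest diagonal (HH)
    Haar coefficients of a natural image are sparse, so their median
    absolute deviation is dominated by the noise;
    sigma ~ MAD / 0.6745."""
    hh = (img[0::2, 0::2] - img[0::2, 1::2]
          - img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
    return float(np.median(np.abs(hh)) / 0.6745)

# Demo: smooth ramp (HH of the clean image is zero) plus noise, std 7.
rng = np.random.default_rng(9)
sigma = 7.0
clean = np.outer(np.linspace(0, 100, 64), np.ones(64))
noisy = clean + rng.normal(0, sigma, (64, 64))
est = blind_sigma_mad(noisy)
```

Either estimate can then be fed to a non-blind denoiser such as BM3D, which is exactly the pipeline the abstract describes.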

  1. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution, and a low signal-to-noise ratio. This thesis studies the enhancement of these images, in particular the denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint, and conditional distributions in the wavelet domain. Then, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain form the basis of this thesis; in addition, the Wiener filter and the non-local means (NLM) filter, operating in the image domain, are used as references. Other topics studied in this thesis are spatial resolution, wavelet processing, and image processing in radiotherapy dosimetry. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution in the digitization of films is determined as a function of the noise level and the quantization step; and the useful optical density range of a densitometric system is set as a function of the required uncertainty level. The marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  2. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.

  3. Noise distribution and denoising of current density images.

    PubMed

    Beheshti, Mohammadali; Foomany, Farbod H; Magtibay, Karl; Jaffray, David A; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-04-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements, leading to imprecise estimates of current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian, a finding that matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied on current density ([Formula: see text]). The minimum gain in noise power by BM3D applied to [Formula: see text] compared with the next best technique in the analysis was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction.

  4. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiration. A variety of 4D imaging techniques have been developed, and others are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, can provide comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session will focus on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  5. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are identified by block matching and grouped. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated coherent ladar range images with different carrier-to-noise ratios and a real coherent ladar range image with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. Both the range anomaly noise and the Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
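The core of the described estimator (group similar blocks, then take the gray value with maximum marginal probability rather than the NLM weighted mean) can be sketched as follows; the patch size, search window, and similarity threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, tau=80.0, levels=256):
    # img: 2D array with integer gray values in [0, levels).
    # For each pixel, collect the centers of similar blocks in the search
    # window and output the gray value with the maximum vote count
    # (an empirical mode, i.e. the maximum of the marginal PDF).
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), r + s, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + r + s, j + r + s
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            votes = np.zeros(levels)
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    blk = pad[ci + di - r:ci + di + r + 1,
                              cj + dj - r:cj + dj + r + 1]
                    if np.mean((blk - ref) ** 2) < tau ** 2:  # similar block
                        votes[int(pad[ci + di, cj + dj])] += 1
            out[i, j] = np.argmax(votes)  # gray value of maximum probability
    return out
```

Because the estimate is a mode rather than a mean, isolated range anomalies (impulse-like outliers) are simply outvoted instead of being averaged in.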

  6. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
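As a sketch of the underlying idea (not the paper's solver), total variation minimization for a single 2D frame can be written as gradient descent on a smoothed ROF energy; the step size, smoothing constant, and wrap-around boundary handling below are simplifications:

```python
import numpy as np

def tv_denoise(img, lam=0.15, n_iter=200, step=0.2, eps=1e-2):
    # Minimize 0.5*||u - img||^2 + lam * TV_eps(u), where TV is smoothed
    # as sqrt(ux^2 + uy^2 + eps) to keep the gradient well defined.
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])  # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # Divergence by backward differences (periodic wrap, for brevity).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u
```

The spatial-spectral method in the paper couples such a penalty across the spectral dimension as well; this sketch shows only the single-frame case.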

  7. Quadtree structured image approximation for denoising and interpolation.

    PubMed

    Scholefield, Adam; Dragotti, Pier Luigi

    2014-03-01

    The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. Shukla proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In this paper, we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. In addition, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address this issue by searching for a suitable subspace much more efficiently using the mathematics of updating matrix factorisations. Algorithms are developed to tackle denoising and interpolation. Simulation results indicate that the method beats state-of-the-art results when the original signal is in the model (e.g., depth images) and is competitive for natural images when the degradation is high.
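A toy version of the quadtree model conveys the idea: recursively split a block until it is well explained by a simple fit (here a constant, i.e. the degree-0 polynomial case), then approximate each leaf by that fit. The variance-based split rule below is an illustrative stand-in for the paper's description-length penalty:

```python
import numpy as np

def quadtree_approx(img, thresh, min_size=2):
    # Approximate img by a quadtree whose leaves are constant blocks.
    out = np.empty(img.shape, dtype=float)

    def split(r0, r1, c0, c1):
        block = img[r0:r1, c0:c1]
        if block.std() <= thresh or (r1 - r0) <= min_size:
            out[r0:r1, c0:c1] = block.mean()  # leaf: degree-0 fit
        else:
            rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
            split(r0, rm, c0, cm); split(r0, rm, cm, c1)
            split(rm, r1, c0, cm); split(rm, r1, cm, c1)

    split(0, img.shape[0], 0, img.shape[1])
    return out
```

On a noisy piecewise-constant image, leaves that cover a single region average the noise away, which is the mechanism behind the strong results reported for depth images.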

  8. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    NASA Astrophysics Data System (ADS)

    Peng, Honghong

    This thesis addresses two important aspects in hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands, spanning the visible to the long wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred, to minimize human workload and achieve optimal results. Two of the most heavily researched steps toward such automation are: hyperspectral image denoising, which is an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis also introduces an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein
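A plain vector bilateral filter (one range weight per pixel, computed from the full spectral vector and shared across bands) can be sketched as follows. The parameter values here are ad hoc, which is precisely the gap the thesis' optimization procedure is meant to close:

```python
import numpy as np

def vector_bilateral(img, sigma_s=1.5, sigma_r=0.2, radius=3):
    # img: (H, W, B) multiband image. The range weight uses the squared
    # Euclidean distance between full spectral vectors, so all bands
    # share one weight per neighbor (the "vector" part of the filter).
    H, W, B = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)),
                 mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    out = np.zeros(img.shape, dtype=float)
    wsum = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W, :]
            d2 = np.sum((shifted - img) ** 2, axis=2)  # spectral distance
            w = gs[dy + radius, dx + radius] * np.exp(-d2 / (2 * sigma_r ** 2))
            out += w[..., None] * shifted
            wsum += w
    return out / wsum[..., None]
```

Because the spectral distance is summed over all bands, an edge visible in any subset of bands suppresses averaging across it in every band.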

  9. Respiratory triggered 4D cone-beam computed tomography: A novel method to reduce imaging dose

    SciTech Connect

    Cooper, Benjamin J.; O'Brien, Ricky T.; Keall, Paul J.; Balik, Salim; Hugo, Geoffrey D.

    2013-04-15

    Purpose: A novel method called respiratory triggered 4D cone-beam computed tomography (RT 4D CBCT) is described whereby imaging dose can be reduced without degrading image quality. RT 4D CBCT utilizes a respiratory signal to trigger projections such that only a single projection is assigned to a given respiratory bin for each breathing cycle. In contrast, commercial 4D CBCT does not actively use the respiratory signal to minimize image dose. Methods: To compare RT 4D CBCT with conventional 4D CBCT, 3600 CBCT projections of a thorax phantom were gathered and reconstructed to generate a ground truth CBCT dataset. Simulation pairs of conventional 4D CBCT acquisitions and RT 4D CBCT acquisitions were developed assuming a sinusoidal respiratory signal which governs the selection of projections from the pool of 3600 original projections. The RT 4D CBCT acquisition triggers a single projection when the respiratory signal enters a desired acquisition bin; the conventional acquisition does not use a respiratory trigger and projections are acquired at a constant frequency. Acquisition parameters studied were breathing period, acquisition time, and imager frequency. The performance of RT 4D CBCT using phase based and displacement based sorting was also studied. Image quality was quantified by calculating difference images of the test dataset from the ground truth dataset. Imaging dose was calculated by counting projections. Results: Using phase based sorting, RT 4D CBCT results in 47% less imaging dose on average compared to conventional 4D CBCT. Image quality differences were less than 4% at worst. Using displacement based sorting, RT 4D CBCT results in 57% less imaging dose on average than conventional 4D CBCT methods; however, image quality was 26% worse with RT 4D CBCT. Conclusions: Simulation studies have shown that RT 4D CBCT reduces imaging dose while maintaining comparable image quality for phase based 4D CBCT; image quality is degraded for displacement based RT 4D
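The dose-counting comparison can be reproduced in miniature. The sketch below assumes a perfectly periodic breathing signal, ten phase bins, and no imager dead time (all simplifications; the parameter values are illustrative, not the study's), and counts one triggered projection per bin per cycle against a constant-frequency acquisition:

```python
import numpy as np

def count_projections(period=4.0, total_time=60.0, imager_hz=5.5, n_bins=10):
    # Conventional 4D CBCT: projections at a constant imager frequency.
    conventional = int(total_time * imager_hz)
    # RT 4D CBCT: trigger one projection per respiratory bin per cycle.
    n = int(total_time * imager_hz * 20)  # dense sampling of the signal
    t = np.linspace(0.0, total_time, n, endpoint=False)
    phase_bin = ((t % period) / period * n_bins).astype(int)
    cycle = (t // period).astype(int)
    # Each distinct (cycle, bin) pair fires exactly one trigger.
    rt = len(set(zip(cycle.tolist(), phase_bin.tolist())))
    return conventional, rt
```

With these illustrative parameters the triggered scheme acquires 150 projections versus 330, a reduction of the same order as the averages reported in the abstract.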

  10. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data

    PubMed Central

    Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam

    2016-01-01

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160

  11. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.

    PubMed

    Pnevmatikakis, Eftychios A; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M; Peterka, Darcy S; Yuste, Rafael; Paninski, Liam

    2016-01-20

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.
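The factorization at the heart of this approach can be illustrated with plain multiplicative-update NMF on a pixels-by-frames movie matrix; the constraints and the deconvolution step of the actual method are omitted, and the function name is mine:

```python
import numpy as np

def nmf(Y, k, n_iter=200, seed=0):
    # Y: (n_pixels, n_frames) nonnegative movie. Factor Y ~= A @ C, where
    # A holds k spatial footprints (columns) and C holds k temporal traces
    # (rows). Lee-Seung multiplicative updates keep both factors nonnegative.
    rng = np.random.default_rng(seed)
    A = rng.random((Y.shape[0], k)) + 0.1
    C = rng.random((k, Y.shape[1])) + 0.1
    for _ in range(n_iter):
        C *= (A.T @ Y) / (A.T @ A @ C + 1e-9)
        A *= (Y @ C.T) / (A @ C @ C.T + 1e-9)
    return A, C
```

The constrained variant in the paper additionally ties each row of C to an autoregressive model of the calcium indicator, which is what makes the deconvolved spiking activity interpretable.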

  12. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.

  13. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    NASA Astrophysics Data System (ADS)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. The removal of noise from the image by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps for improving accuracy on a noisy image: image denoising, resolution enhancement, and classification. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which outperforms the neural network classifiers considered. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.

  14. De-noising of digital image correlation based on stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Guo, Xiang; Li, Yulong; Suo, Tao; Liang, Jin

    2017-03-01

    In this paper, a stationary wavelet transform (SWT) based method is proposed to de-noise digital images affected by light noise, and the SWT de-noising algorithm is presented after an analysis of the light noise. Using this algorithm, the method was demonstrated to be capable of providing accurate DIC measurements in a light-noise environment. Verification, comparative, and realistic experiments were conducted using this method. The results indicate that the de-noising method can be applied to full-field strain measurement under light interference with high accuracy and stability.

  15. Adaptively wavelet-based image denoising algorithm with edge preserving

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Tian, Jinwen; Liu, Jian

    2006-02-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. Firstly, a Canny-like edge detector identifies the edges in each subband. Secondly, wavelet coefficients in neighboring scales are multiplied to suppress noise while magnifying edge information, and the result is used to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then apply a local window filter in the wavelet domain, in which the variance estimation is elaborated to exploit the edge information. The method is adaptive to local image details and achieves better performance than state-of-the-art methods.

  16. Image denoising using principal component analysis in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bacchelli, Silvia; Papi, Serena

    2006-05-01

    In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen-Loeve transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of extensive numerical experiments encourage us to pursue this direction further.
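The energy-compaction idea can be illustrated directly on image patches (the paper applies it in the wavelet packet domain, which this sketch omits): compute a Karhunen-Loeve basis from the patches, keep the leading principal components where the signal energy concentrates, and discard the rest, over which the noise is spread:

```python
import numpy as np

def pca_denoise(img, patch=4, keep=4):
    # Cut the image into non-overlapping patches, apply the KLT (PCA),
    # and reconstruct from the `keep` leading principal components.
    H, W = img.shape
    P = (img.reshape(H // patch, patch, W // patch, patch)
            .transpose(0, 2, 1, 3).reshape(-1, patch * patch))
    mean = P.mean(axis=0)
    X = P - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :keep] * S[:keep]) @ Vt[:keep]  # energy compaction
    P = X + mean
    return (P.reshape(H // patch, W // patch, patch, patch)
             .transpose(0, 2, 1, 3).reshape(H, W))
```

The shrinkage here is a hard truncation; the paper's point is that a smoother shrinkage function applied to such decorrelated coefficients removes noise with little edge blurring.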

  17. Hyperspectral image denoising using the robust low-rank tensor recovery.

    PubMed

    Li, Chang; Ma, Yong; Huang, Jun; Mei, Xiaoguang; Ma, Jiayi

    2015-09-01

    Denoising is an important preprocessing step to further analyze the hyperspectral image (HSI), and many denoising methods have been used for the denoising of the HSI data cube. However, the traditional denoising methods are sensitive to outliers and non-Gaussian noise. In this paper, by utilizing the underlying low-rank tensor property of the clean HSI data and the sparsity property of the outliers and non-Gaussian noise, we propose a new model based on the robust low-rank tensor recovery, which can preserve the global structure of HSI and simultaneously remove the outliers and different types of noise: Gaussian noise, impulse noise, dead lines, and so on. The proposed model can be solved by the inexact augmented Lagrangian method, and experiments on simulated and real hyperspectral images demonstrate that the proposed method is efficient for HSI denoising.
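A stripped-down illustration of the low-rank prior (without the sparse outlier term or the augmented Lagrangian solver of the actual method): unfold the HSI cube into a pixels-by-bands matrix and truncate its SVD, which is justified when the scene is well described by a few endmember spectra:

```python
import numpy as np

def hsi_lowrank(cube, rank):
    # cube: (H, W, B) hyperspectral image. Unfold to (H*W, B),
    # keep the `rank` leading singular components, and refold.
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return Xr.reshape(H, W, B)
```

The robust formulation in the paper replaces this hard truncation with a nuclear-norm penalty plus a sparse term, which is what lets it absorb impulse noise and dead lines as outliers.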

  18. Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).

    PubMed

    Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou

    2014-01-01

    To better analyze images corrupted by Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow a normal distribution. Secondly, an energy estimation equation for the ith 2D-IMF (i = 2, 3, 4, ...) is proposed, referencing that of the ith IMF (i = 2, 3, 4, ...) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. As a practical application, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show better denoising performance compared with other methods.

  19. A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints

    PubMed Central

    Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei

    2015-01-01

    Purpose To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems each of which is then solved using an efficient alternating minimization scheme. Results The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed up over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, comparison of fiber tracking results around hippocampus region before and after denoising will also be shown to demonstrate the denoising effects of the new algorithm. Conclusion The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066

  20. 4D Imaging of Protein Aggregation in Live Cells

    PubMed Central

    Kaganovich, Daniel

    2013-01-01

    proteins that are not ubiquitinated are diverted to the IPOD, where they are actively aggregated in a protective compartment. Up until this point, the methodological paradigm of live-cell fluorescence microscopy has largely been to label proteins and track their locations in the cell at specific time-points and usually in two dimensions. As new technologies have begun to grant experimenters unprecedented access to the submicron scale in living cells, the dynamic architecture of the cytosol has come into view as a challenging new frontier for experimental characterization. We present a method for rapidly monitoring the 3D spatial distributions of multiple fluorescently labeled proteins in the yeast cytosol over time. 3D timelapse (4D imaging) is not merely a technical challenge; rather, it also facilitates a dramatic shift in the conceptual framework used to analyze cellular structure. We utilize a cytosolic folding sensor protein in live yeast to visualize distinct fates for misfolded proteins in cellular aggregation quality control, using rapid 4D fluorescent imaging. The temperature sensitive mutant of the Ubc9 protein (Ubc9ts) is extremely effective both as a sensor of cellular proteostasis and as a physiological model for tracking aggregation quality control. As with most ts proteins, Ubc9ts is fully folded and functional at permissive temperatures due to active cellular chaperones. Above 30 °C, or when the cell faces misfolding stress, Ubc9ts misfolds and follows the fate of a native globular protein that has been misfolded due to mutation, heat denaturation, or oxidative damage. By fusing it to GFP or other fluorophores, it can be tracked in 3D as it forms Stress Foci, or is directed to JUNQ or IPOD. PMID:23608881

  1. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    NASA Astrophysics Data System (ADS)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and the principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as small distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results show also that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.
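The IHS substitution step can be sketched in its fast additive form (the paper's hybrid method additionally involves the DWFT, which is omitted here): take the intensity as the band mean of the upsampled MS image, then inject the difference between the Pan image and that intensity into every band:

```python
import numpy as np

def ihs_fuse(ms, pan):
    # ms: (H, W, 3) multispectral image already resampled to the Pan grid;
    # pan: (H, W) panchromatic image. Additive IHS substitution: replace
    # the intensity component with the Pan image, keep hue/saturation.
    intensity = ms.mean(axis=2)
    return ms + (pan - intensity)[..., None]
```

By construction the fused image's intensity equals the Pan image exactly, while the band-to-band differences (the spectral content) of the MS input are preserved; the spectral distortion this can introduce is what motivates the wavelet-frame refinement in the paper.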

  2. Dual tree complex wavelet transform based denoising of optical microscopy images.

    PubMed

    Bal, Ufuk

    2012-12-01

    Photon shot noise is the main noise source of optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the discrete wavelet transform (DWT) has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual tree complex wavelet transform is used in our proposed denoising algorithm. Our denoising algorithm is based on the assumption that, in the Poisson noise case, threshold values for wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with one of the state-of-the-art denoising algorithms. Better results were obtained by using the proposed algorithm in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low light intensity conditions.

  3. 4D ultrasound imaging - ethically justifiable in India?

    PubMed

    Indiran, Venkatraman

    2017-01-01

    Four-dimensional (4D) ultrasound (real-time volume sonography), which has been used in the West for the past decade for the determination of gender as well as for bonding and entertainment of the parents, has become widely available in India in this decade. Here, I would like to discuss the ethical issues associated with 4D ultrasonography in India. These are self-referral, the use of the technology for non-medical indications, a higher possibility of the disclosure of the foetus' gender, and safety concerns.

  4. 2D/4D marker-free tumor tracking using 4D CBCT as the reference image.

    PubMed

    Wang, Mengjiao; Sharp, Gregory C; Rit, Simon; Delmon, Vivien; Wang, Guangzhi

    2014-05-07

    Tumor motion caused by respiration is an important issue in image-guided radiotherapy. A 2D/4D matching method between 4D volumes derived from cone beam computed tomography (CBCT) and 2D fluoroscopic images was implemented to track the tumor motion without the use of implanted markers. In this method, firstly, 3DCBCT and phase-rebinned 4DCBCT are reconstructed from cone beam acquisition. Secondly, 4DCBCT volumes and a streak-free 3DCBCT volume are combined to improve the image quality of the digitally reconstructed radiographs (DRRs). Finally, the 2D/4D matching problem is converted into a 2D/2D matching between incoming projections and DRR images from each phase of the 4DCBCT. The diaphragm is used as a target surrogate for matching instead of using the tumor position directly. This relies on the assumption that if a patient has the same breathing phase and diaphragm position as the reference 4DCBCT, then the tumor position is the same. From the matching results, the phase information, diaphragm position and tumor position at the time of each incoming projection acquisition can be derived. The accuracy of this method was verified using 16 candidate datasets, representing lung and liver applications and one-minute and two-minute acquisitions. The criteria for the eligibility of datasets were described: 11 eligible datasets were selected to verify the accuracy of diaphragm tracking, and one eligible dataset was chosen to verify the accuracy of tumor tracking. The diaphragm matching accuracy was 1.88 ± 1.35 mm in the isocenter plane and the 2D tumor tracking accuracy was 2.13 ± 1.26 mm in the isocenter plane. These features make this method feasible for real-time marker-free tumor motion tracking purposes.
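The 2D/2D matching step can be illustrated with a toy search over phases and vertical (diaphragm-direction) shifts, scored by normalized cross-correlation; the actual method's DRR generation, streak suppression, and surrogate handling are far more involved, and the similarity measure here is an assumption for the sketch:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-sized images.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_phase(projection, drrs, max_shift=5):
    # drrs: one DRR image per respiratory phase. Jointly search the phase
    # and a vertical shift (the diaphragm surrogate) that best explain the
    # incoming projection.
    best = (-2.0, None, None)
    for p, drr in enumerate(drrs):
        for s in range(-max_shift, max_shift + 1):
            score = ncc(projection, np.roll(drr, s, axis=0))
            if score > best[0]:
                best = (score, p, s)
    return best[1], best[2]  # (phase index, diaphragm shift in pixels)
```

The matched phase and shift then stand in for "same breathing phase and diaphragm position as the reference 4DCBCT", from which the tumor position is inferred.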

  5. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering have proposed completely new application fields for this well-established measurement technique when advanced single-trial processing is used. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy-to-apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is intended for the a posteriori denoising of single-trial sequences.
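The nonlocal means principle the paper builds on can be sketched in a few lines of numpy: each sample becomes a weighted average of samples whose surrounding patches look similar, wherever in the image they occur. Patch/search sizes, the filtering parameter h and the toy data below are illustrative, not the paper's ERP settings.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.4):
    """Naive nonlocal means: each pixel is replaced by a weighted
    average of pixels in a search window, weighted by the similarity
    of the patches around them (self-similarity)."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    s = search // 2
    for i in range(H):
        for j in range(W):
            p0 = padded[i:i + patch, j:j + patch]   # patch around (i, j)
            w_sum, acc = 0.0, 0.0
            for di in range(max(0, i - s), min(H, i + s + 1)):
                for dj in range(max(0, j - s), min(W, j + s + 1)):
                    q = padded[di:di + patch, dj:dj + patch]
                    d2 = np.mean((p0 - q) ** 2)     # patch distance
                    w = np.exp(-d2 / (h * h))       # similarity weight
                    w_sum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / w_sum
    return out

# toy "ERP image": a step plus noise; NLM averages within similar regions
rng = np.random.default_rng(1)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0
noisy = clean + 0.1 * rng.standard_normal((16, 16))
denoised = nlm_denoise(noisy)
```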

  6. Application of improved homogeneity similarity-based denoising in optical coherence tomography retinal images.

    PubMed

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Rubin, Daniel L

    2015-06-01

    Image denoising is a fundamental preprocessing step of image processing in many applications developed for optical coherence tomography (OCT) retinal imaging--a high-resolution modality for evaluating disease in the eye. To make a homogeneity similarity-based image denoising method more suitable for OCT image noise removal, we improve it by considering the noise and retinal characteristics of OCT images in two respects: (1) median filtering preprocessing is used to make the noise distribution of OCT images more suitable for patch-based methods; (2) a rectangle neighborhood and region restriction are adopted to accommodate the horizontal stretching of retinal structures when observed in OCT images. As a performance measurement of the proposed technique, we tested the method on real and synthetic noisy retinal OCT images and compared the results with other well-known spatial denoising methods, including bilateral filtering, five partial differential equation (PDE)-based methods, and three patch-based methods. Our results indicate that our proposed method seems suitable for retinal OCT imaging denoising, and that, in general, patch-based methods can achieve better visual denoising results than point-based methods in this type of imaging, because an image patch can better represent the structured information in the images than a single pixel. However, the time complexity of the patch-based methods is substantially higher than that of the others.

  7. Image analysis for denoising full-field frequency-domain fluorescence lifetime images.

    PubMed

    Spring, B Q; Clegg, R M

    2009-08-01

    Video-rate fluorescence lifetime-resolved imaging microscopy (FLIM) is a quantitative imaging technique for measuring dynamic processes in biological specimens. FLIM offers valuable information in addition to simple fluorescence intensity imaging; for instance, the fluorescence lifetime is sensitive to the microenvironment of the fluorophore, allowing reliable differentiation between concentration differences and dynamic quenching. Homodyne FLIM is a full-field frequency-domain technique for imaging fluorescence lifetimes at every pixel of a fluorescence image simultaneously. If a single modulation frequency is used, video-rate image acquisition is possible. Homodyne FLIM uses a gain-modulated image intensified charge-coupled device (ICCD) detector, which unfortunately is a major contributor to the noise of the measurement. Here we introduce image analysis for denoising homodyne FLIM data. The denoising routine is fast, improves the extraction of the fluorescence lifetime value(s) and increases the sensitivity and fluorescence lifetime resolving power of the FLIM instrument. The spatial resolution (especially the high spatial frequencies not related to noise) of the FLIM image is preserved, because the denoising routine does not blur or smooth the image. By eliminating the random noise arising from photon statistics and intensifier amplification, the fidelity of the spatial resolution is improved. The polar plot projection, a rapid FLIM analysis method, is used to demonstrate the effectiveness of the denoising routine with exemplary data from both physical and complex biological samples. We also suggest broader impacts of the image analysis for other fluorescence microscopy techniques (e.g. super-resolution imaging).
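The polar plot projection mentioned above has a compact closed form for a single-exponential decay: each pixel's measured phase φ and modulation m map to coordinates (g, s) = (m cos φ, m sin φ), and the lifetime can be recovered from either quantity. A sketch with illustrative values (the 80 MHz modulation frequency and 2 ns lifetime are assumptions, not the paper's settings):

```python
import numpy as np

# single-exponential decay under homodyne detection (values illustrative)
f_mod = 80e6                       # modulation frequency in Hz
omega = 2 * np.pi * f_mod
tau_true = 2.0e-9                  # 2 ns fluorescence lifetime

phi = np.arctan(omega * tau_true)                  # measured phase shift
m = 1.0 / np.sqrt(1.0 + (omega * tau_true) ** 2)   # measured modulation

# polar plot coordinates; single-exponential pixels lie on the
# "universal semicircle" (g - 1/2)^2 + s^2 = 1/4
g, s = m * np.cos(phi), m * np.sin(phi)

# lifetime recovered independently from phase and from modulation
tau_phi = np.tan(phi) / omega
tau_mod = np.sqrt(1.0 / m ** 2 - 1.0) / omega
```

Noise moves pixels off the semicircle, which is why denoising the raw homodyne images sharpens the polar plot clusters.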

  8. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations.

  9. An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.

    PubMed

    Khanian, Maryam; Feizi, Awat; Davari, Ali

    2014-01-01

    Improving the quality of medical images before and after surgery is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: first, utilizing the efficient explicit scheme, together with a software technique that effectively solves the anisotropic diffusion filter despite its numerical instability; and second, proposing an automatic stopping criterion that, in contrast to other stopping criteria, takes into consideration only the input image, while also addressing denoised image quality, ease of use and computation time. Various medical images are examined to confirm the claim.
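The anisotropic diffusion filter in question is commonly the Perona-Malik model; a minimal explicit-scheme sketch follows (parameters illustrative). The explicit update is only conditionally stable (dt ≤ 0.25 with four neighbours), which is the numerical instability an implementation has to manage; periodic boundaries via np.roll keep the sketch short.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.3, dt=0.2):
    """Explicit Perona-Malik anisotropic diffusion: smooth where the
    gradient is small, stop diffusing across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u        # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# noisy step image: diffusion smooths the flat regions, keeps the edge
rng = np.random.default_rng(2)
step = np.zeros((32, 32)); step[:, 16:] = 1.0
noisy = step + 0.05 * rng.standard_normal((32, 32))
smoothed = perona_malik(noisy)
```

A stopping criterion decides n_iter: run too few iterations and noise remains, too many and even edges eventually diffuse away.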

  10. Edge-preserving image denoising via group coordinate descent on the GPU.

    PubMed

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  11. Multispectral Photoacoustic Imaging Artifact Removal and Denoising Using Time Series Model-Based Spectral Noise Estimation.

    PubMed

    Kazakeviciute, Agne; Ho, Chris Jun Hui; Olivo, Malini

    2016-09-01

    The aim of this study is to solve a problem of denoising and artifact removal from in vivo multispectral photoacoustic imaging when the level of noise is not known a priori. The study analyzes Wiener filtering in the Fourier domain when a family of anisotropic shape filters is considered. The unknown noise and signal power spectral densities are estimated using spectral information of the images and a first-order autoregressive (AR(1)) model. Edge preservation is achieved by detecting image edges in the original and the denoised image and superimposing a weighted contribution of the two edge images onto the resulting denoised image. The method is tested on multispectral photoacoustic images from simulations, a tissue-mimicking phantom, as well as in vivo imaging of the mouse, with its performance compared against that of standard Wiener filtering in the Fourier domain. The results reveal better denoising and fine-detail preservation capabilities of the proposed method when compared to those of standard Wiener filtering in the Fourier domain, suggesting that this could be a useful denoising technique for other multispectral photoacoustic studies.
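Standard Wiener filtering in the Fourier domain, the baseline the method is compared against, can be sketched as below. The paper's actual contributions (AR(1)-based estimation of the unknown noise level and the anisotropic filter family) are not reproduced; here the noise is simply assumed white with known variance, and the signal spectrum is estimated crudely from the noisy periodogram.

```python
import numpy as np

def wiener_fft(noisy, noise_var):
    """Wiener filter in the Fourier domain: per-frequency gain
    H = Pss / (Pss + Pnn), applied to the noisy spectrum."""
    Y = np.fft.fft2(noisy)
    Pyy = np.abs(Y) ** 2 / noisy.size        # periodogram of noisy image
    Pss = np.maximum(Pyy - noise_var, 0.0)   # crude signal-PSD estimate
    H = Pss / (Pss + noise_var)              # Wiener gain, in [0, 1)
    return np.real(np.fft.ifft2(H * Y))

# smooth low-frequency image plus white noise
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.sin(2 * np.pi * xx / 32.0) * np.sin(2 * np.pi * yy / 32.0)
noisy = clean + 0.5 * rng.standard_normal((64, 64))
denoised = wiener_fft(noisy, noise_var=0.25)
```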

  12. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time. PMID:25675454

  13. A comparison of filtering techniques on denoising terahertz coaxial digital holography image

    NASA Astrophysics Data System (ADS)

    Cui, Shan-shan; Li, Qi

    2016-10-01

    In the process of recording a terahertz digital hologram, the hologram is easily contaminated by speckle noise, which lowers the resolution of the imaging system and seriously affects the reconstruction results. Thus, the study of filtering algorithms applicable to de-speckling terahertz digital holography images has important practical value. In this paper, non-local means filtering and guided bilateral filtering were applied to the real image reconstructed from a continuous-wave terahertz coaxial digital hologram. For comparison, median filtering, bilateral filtering, and robust bilateral filtering were introduced as conventional methods to denoise the real image. All the denoising results were then evaluated. The comparison indicates that the guided bilateral filter achieves the best denoising effect for the terahertz digital holography image, both significantly suppressing speckle noise and effectively preserving the useful information in the reconstructed image.

  14. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
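The discrete-domain machinery described above can be illustrated on a 1D signal: build a graph over samples with Gaussian edge weights on intensity differences, form the combinatorial Laplacian L = D − W, and solve the regularized least-squares problem in closed form. The graph topology and parameters below are illustrative, not the paper's optimal metric.

```python
import numpy as np

def graph_laplacian_denoise(y, lam=2.0, sigma=0.3):
    """x* = argmin ||x - y||^2 + lam * x^T L x, with L = D - W the
    combinatorial Laplacian of a graph whose edge weights follow a
    Gaussian kernel on intensity differences of the noisy signal.
    Closed-form solution: x* = (I + lam*L)^{-1} y."""
    n = y.size
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if 0 < abs(i - j) <= 2:   # link each sample to 4 neighbours
                W[i, j] = np.exp(-(y[i] - y[j]) ** 2 / (2 * sigma ** 2))
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(n) + lam * L, y)

# piecewise-constant signal: smoothing happens within regions, not across
rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(20), np.ones(20)])
noisy = clean + 0.1 * rng.standard_normal(40)
denoised = graph_laplacian_denoise(noisy)
```

Because cross-edge weights are nearly zero, the regularizer barely penalizes the jump, which is the piecewise-smooth-promoting behavior the paper analyzes.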

  15. [Multispectral remote sensing image denoising based on non-local means].

    PubMed

    Liu, Peng; Liu, Ding-Sheng; Li, Guo-Qing; Liu, Zhi-Wen

    2011-11-01

    Non-local means denoising (NLM) exploits the fact that similar neighborhoods can occur anywhere in the image and can contribute to denoising. However, current NLM methods are not aimed at multichannel remote sensing images: smoothing each band image separately seriously damages the spectral information of the multispectral image. The authors therefore extend NLM in two respects. Firstly, for multispectral image denoising, a weight value should be related to all channels, not only one; so for the kth band image, the sum of the smoothing kernels over all bands is used instead of that of a single band. Secondly, a patch whose spectral feature is similar to that of the central patch should receive a larger weight. Incorporating these two changes into the traditional non-local means, a new multispectral non-local means denoising method is proposed. In the experiments, different satellite images containing both urban and rural areas are used. To better evaluate the performance of the different methods, ERGAS and SAM are used as quality indices, and several other methods are compared with the proposed one. The proposed method shows better performance not only in ERGAS but also in SAM; in particular, the spectral features are better preserved by the proposed NLM denoising.

  16. Towards real-time registration of 4D ultrasound images.

    PubMed

    Foroughi, Pezhman; Abolmaesumi, Purang; Hashtrudi-Zaad, Keyvan

    2006-01-01

    In this paper, we demonstrate a method for fast registration of sequences of 3D liver images, which could be used for future real-time applications. In our method, every image is elastically registered to a so-called fixed ultrasound image, exploiting the information from the previous registration. A few feature points are automatically selected and tracked inside the images, while the deformation of the other points is extrapolated with respect to the tracked points using a fast free-form approach. The main intended application of the proposed method is real-time tracking of tumors for radiosurgery. The algorithm is evaluated on both naturally and artificially deformed images. Experimental results show that, at around 85 percent accuracy, the tracking process completes very close to real time.

  17. Image denoising using nonsubsampled shearlet transform and twin support vector machines.

    PubMed

    Yang, Hong-Ying; Wang, Xiang-Yang; Niu, Pan-Pan; Liu, Yang-Cheng

    2014-09-01

    Denoising of images is one of the most basic tasks of image processing, and designing an edge/texture-preserving image denoising scheme is challenging. The nonsubsampled shearlet transform (NSST) is an effective multi-scale and multi-direction analysis method: it not only computes the shearlet coefficients exactly via a multiresolution analysis, but also provides a nearly optimal approximation of piecewise smooth functions. Based on the NSST, a new edge/texture-preserving image denoising method using twin support vector machines (TSVMs) is proposed in this paper. Firstly, the noisy image is decomposed into subbands of different frequency and orientation responses using the NSST. Secondly, the feature vector for a pixel in the noisy image is formed from the spatial geometric regularity in the NSST domain, and the TSVMs model is obtained by training. The NSST detail coefficients are then divided into information-related coefficients and noise-related ones by the trained TSVMs model. Finally, the detail subbands of NSST coefficients are denoised using an adaptive threshold. Extensive experimental results demonstrate that our method obtains better performance in terms of both subjective and objective evaluations than state-of-the-art denoising techniques. In particular, the proposed method preserves edges and textures very well while removing noise.

  18. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which restores regular images but is prone to over-smoothed textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.

  19. A new adaptive algorithm for image denoising based on curvelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Musheng; Cai, Zhishan

    2013-10-01

    The purpose of this paper is to study a method of denoising images corrupted with additive white Gaussian noise. The application of the time-invariant discrete curvelet transform for noise reduction is considered. In the curvelet transform, the frame elements are indexed by scale, orientation and location parameters, and the transform is designed to represent edges and singularities along curved paths more efficiently than the wavelet transform; curvelet-based methods can therefore achieve better denoising results than wavelet-based ones. In general, image denoising imposes a compromise between noise reduction and the preservation of significant image details. To achieve a good performance in this respect, an efficient and adaptive image denoising method based on the curvelet transform is presented in this paper. Firstly, the noisy image is decomposed by the curvelet transform into many levels to obtain different frequency sub-bands. Secondly, efficient and adaptive threshold estimation based on generalized Gaussian distribution modeling of the sub-band coefficients is used to remove the noisy coefficients; the threshold estimate is chosen by analyzing the standard deviation and threshold. Finally, the multi-scale decomposition is inverted to reconstruct the denoised image. To demonstrate the performance of the proposed method, the results are compared with those of existing algorithms such as hard and soft thresholding based on wavelets. Simulation results on several test images indicate that the proposed method outperforms the other methods in peak signal-to-noise ratio and also preserves edge information better visually. The results further suggest that the curvelet transform can achieve better performance than the wavelet transform in image denoising.

  20. Dual-wavelength retinal images denoising algorithm for improving the accuracy of oxygen saturation calculation

    NASA Astrophysics Data System (ADS)

    Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui

    2017-01-01

    Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by dual-wavelength retinal oximetry based on fundus camera. SO2 is finally measured after vessel segmentation, image registration, and calculation of optical density ratio of two images. However, image noise can dramatically affect subsequent image processing and SO2 calculation accuracy. The aforementioned problem remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy by noise analysis and denoising algorithm for dual-wavelength images. First, noise parameters were estimated by mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm which we called variance stabilizing transform (VST) + dual-domain image denoising (DDID) was proposed based on VST and improved dual-domain filter. The results show that VST + DDID is able to effectively remove MPG noise and preserve image edge details. VST + DDID is better than VST + block-matching and three-dimensional filtering, especially in preserving low-contrast details. The following simulation and analysis indicate that MPG noise in the retinal images can lead to erroneously low measurement for SO2, and the denoised images can provide more accurate grayscale values for retinal oximetry.
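The VST referred to above is commonly the generalized Anscombe transform, which maps mixed Poisson-Gaussian (MPG) noise to approximately unit-variance Gaussian noise so that a Gaussian-noise denoiser such as DDID can be applied. A hedged sketch assuming unit camera gain:

```python
import numpy as np

def gat(x, sigma):
    """Generalized Anscombe transform for Poisson noise plus additive
    Gaussian noise of std `sigma` (camera gain assumed to be 1).
    After the transform the noise std is approximately 1, whatever
    the local intensity."""
    return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma ** 2, 0.0))

# variance stabilization demo at two very different intensity levels
rng = np.random.default_rng(5)
sigma = 1.0
low = rng.poisson(20.0, 200_000) + sigma * rng.standard_normal(200_000)
high = rng.poisson(100.0, 200_000) + sigma * rng.standard_normal(200_000)
```

After denoising in the stabilized domain, an (exact unbiased) inverse transform maps the result back to the intensity scale.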

  1. [Curvelet denoising algorithm for medical ultrasound image based on adaptive threshold].

    PubMed

    Zhuang, Zhemin; Yao, Weike; Yang, Jinyao; Li, FenLan; Yuan, Ye

    2014-11-01

    Traditional denoising algorithms for ultrasound images lose many details and much weak edge information when suppressing speckle noise. A new adaptive-threshold denoising algorithm based on the curvelet transform is proposed in this paper. The algorithm utilizes the differences in the coefficients' local variance between texture and smooth regions in each layer of the ultrasound image to define fuzzy regions and membership functions. Finally, the adaptive threshold determined by the membership function is used to denoise the ultrasound image. Experimental tests show that the algorithm can reduce speckle noise effectively while retaining the detail information of the original image, and thus can greatly enhance the performance of B-mode ultrasound instruments.

  2. Low-dose computed tomography image denoising based on joint wavelet and sparse representation.

    PubMed

    Ghadrdan, Samira; Alirezaie, Javad; Dillenseger, Jean-Louis; Babyn, Paul

    2014-01-01

    Image denoising and signal enhancement are the most challenging issues in low dose computed tomography (CT) imaging. Sparse representational methods have shown initial promise for these applications. In this work we present a wavelet-based sparse representation denoising technique utilizing dictionary learning and clustering. By using wavelets we extract the most suitable features in the images to obtain accurate dictionary atoms for the denoising algorithm. To achieve improved results we also lower the number of clusters, which reduces computational complexity. In addition, a single-image noise level estimation is developed to update the cluster centers at higher PSNRs. Our results, along with the computational efficiency of the proposed algorithm, clearly demonstrate the improvement of the proposed algorithm over other clustering based sparse representation (CSR) and K-SVD methods.

  3. Patch-based denoising method using low-rank technique and targeted database for optical coherence tomography image.

    PubMed

    Liu, Xiaoming; Yang, Zhou; Wang, Jia; Liu, Jun; Zhang, Kai; Hu, Wei

    2017-01-01

    Image denoising is a crucial step before performing segmentation or feature extraction on an image, and it affects the final result in image processing. In recent years, utilizing the self-similarity characteristics of images, many patch-based image denoising methods have been proposed, but most of them, termed internal denoising methods, utilize only the noisy image itself, so their performance is constrained by the limited information they use. We propose a patch-based method, which uses a low-rank technique and a targeted database, to denoise optical coherence tomography (OCT) images. When selecting the similar patches for a noisy patch, our method combines internal and external denoising, also utilizing other images relevant to the noisy image; our targeted database is made up of these two kinds of images, an improvement over previous methods. Next, we leverage the low-rank technique to denoise the group matrix consisting of the noisy patch and the corresponding similar patches, since a clean image can be seen as a low-rank matrix and the rank of the noisy image is much larger than that of the clean image. After the first-step denoising is accomplished, we take advantage of the Gabor transform, which considers the layered structure of OCT retinal images, to construct a noisy image before the second step. Experimental results demonstrate that our method compares favorably with the existing state-of-the-art methods.
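The low-rank step can be sketched as a plain SVD truncation of the group matrix of similar patches; the patch search, the targeted database and the Gabor-domain second step are omitted here, and the sizes and rank below are illustrative.

```python
import numpy as np

def low_rank_group_denoise(group, rank=1):
    """`group` holds vectorized similar patches as rows.  Similar clean
    patches make this matrix near low-rank, while i.i.d. noise spreads
    its energy over all singular components, so truncating the SVD
    keeps mostly signal."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s[rank:] = 0.0                 # hard-truncate the singular spectrum
    return (U * s) @ Vt

# toy group: one 8x8 patch (vectorized) repeated 20 times, plus noise
rng = np.random.default_rng(6)
clean_patch = rng.random(64)
group_clean = np.tile(clean_patch, (20, 1))
group_noisy = group_clean + 0.2 * rng.standard_normal((20, 64))
group_den = low_rank_group_denoise(group_noisy, rank=1)
```

Practical methods replace the hard truncation with singular-value thresholding or weighted nuclear-norm shrinkage, but the rank intuition is the same.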

  4. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
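The core step, soft-thresholding wavelet detail coefficients at the level-independent universal threshold σ√(2 ln N), can be sketched as follows. For brevity a single-level decimated Haar transform stands in for the paper's maximal overlap DWT, and the signal length is assumed even; all parameters are illustrative.

```python
import numpy as np

def haar_universal_denoise(x, sigma):
    """Single-level Haar DWT, soft-threshold the detail coefficients
    at the universal threshold sigma*sqrt(2*ln N), then invert."""
    n = x.size
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    t = sigma * np.sqrt(2.0 * np.log(n))   # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft thresholding
    y = np.empty_like(x, dtype=float)      # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# smooth "photoacoustic-like" trace plus white noise
rng = np.random.default_rng(7)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_universal_denoise(noisy, sigma=0.3)
```

The MODWT used in the paper is undecimated, so it handles non-radix-2 lengths and is shift-invariant, at the cost of redundant coefficients.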

  5. Medical image denoising using one-dimensional singularity function model.

    PubMed

    Luo, Jianhua; Zhu, Yuemin; Hiba, Bassem

    2010-03-01

    A novel denoising approach is proposed that is based on a spectral data substitution mechanism through a mathematical model of one-dimensional singularity function analysis (1-D SFA). The method consists of dividing the complete spectral domain of the noisy signal into two subsets: the preserved set, where the spectral data are kept unchanged, and the substitution set, where the original spectral data having lower signal-to-noise ratio (SNR) are replaced by those reconstructed using the 1-D SFA model. The preserved set containing original spectral data is determined according to the SNR of the spectrum. The singular points and singularity degrees in the 1-D SFA model are obtained by calculating finite differences of the noisy signal. The theoretical formulation and experimental results demonstrated that the proposed method allows more efficient denoising while introducing less distortion, and presents significant improvement over conventional denoising methods.

  6. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.

  7. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    PubMed

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise in the images greatly degrades their quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve singular value decomposition (SVD) filtering performance in CT images. A filter based on SVD can effectively analyze characteristics of the image in the horizontal (and/or vertical) directions. According to the features of CT images, we can make use of the discrete cosine transform (DCT) to extract the region of interest and to shield the uninterested region, so as to realize the extraction of structural characteristics of the image. We then applied SVD to the DCT-transformed image and constructed a weighting function for adaptively weighted image reconstruction. The novel denoising algorithm was applied to CT image denoising, and the experimental results showed that the new method can effectively improve the performance of SVD filtering.
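
The SVD filtering step at the core of this approach can be sketched as plain rank truncation (a minimal sketch only; the DCT masking and adaptive weighting of the paper are omitted, and the rank parameter is illustrative):

```python
import numpy as np

def svd_truncate_denoise(img, rank):
    """Suppress noise by keeping only the `rank` largest singular values."""
    U, s, Vt = np.linalg.svd(np.asarray(img, float), full_matrices=False)
    s[rank:] = 0.0            # discard small singular values dominated by noise
    return (U * s) @ Vt       # low-rank reconstruction
```

Noise spreads its energy across all singular values, while structured image content concentrates in the largest few, which is why truncation denoises.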

  8. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm.

    PubMed

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) evaluation criteria have been used, which respectively measure the signal to noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively, repeating until two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods.
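
The two evaluation criteria named here are standard and can be computed directly; a minimal sketch (the `ssim_global` variant uses one global window rather than the usual sliding window, purely for illustration):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """SSIM evaluated over a single global window (standard formula)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilising constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

PSNR is driven purely by pixelwise error, whereas SSIM also rewards preserved local structure, which is why denoising papers typically report both.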

  9. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) evaluation criteria have been used, which respectively measure the signal to noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively, repeating until two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  10. Denoising of PET images by combining wavelets and curvelets for improved preservation of resolution and quantitation.

    PubMed

    Le Pogam, A; Hanzouli, H; Hatt, M; Cheze Le Rest, C; Visvikis, D

    2013-12-01

    Denoising of Positron Emission Tomography (PET) images is a challenging task due to the inherent low signal-to-noise ratio (SNR) of the acquired data. A pre-processing denoising step may facilitate and improve the results of further steps such as segmentation, quantification or textural features characterization. Different recent denoising techniques have been introduced and most state-of-the-art methods are based on filtering in the wavelet domain. However, the wavelet transform suffers from some limitations due to its non-optimal processing of edge discontinuities. More recently, a new multiscale geometric approach has been proposed, namely the curvelet transform. It extends the wavelet transform to account for directional properties in the image. In order to address the issue of resolution loss associated with standard denoising, we considered a strategy combining the complementary wavelet and curvelet transforms. We compared different figures of merit (e.g. SNR increase, noise decrease in homogeneous regions, resolution loss, and intensity bias) on simulated and clinical datasets with the proposed combined approach and the wavelet-only and curvelet-only filtering techniques. The three methods led to an increase of the SNR. Regarding quantitative accuracy, however, the wavelet-only and curvelet-only denoising approaches led to larger biases in the intensity and the contrast than the proposed combined algorithm. This approach could become an alternative solution to filters currently used after image reconstruction in clinical systems such as the Gaussian filter.

  11. A computationally efficient denoising and hole-filling method for depth image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser

    2016-04-01

    Depth maps captured by Kinect depth cameras are being widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero block filtering. Experimental results using publicly available datasets are provided, indicating the superiority of the developed method in terms of both depth error and computational efficiency compared to three existing methods.
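
The hole-filling idea can be sketched with a single local-median pass (an illustrative sketch only; the paper combines morphological and zero-block filtering rather than this exact rule):

```python
import numpy as np

def fill_holes_once(depth):
    """Replace zero ('hole') pixels by the median of their nonzero 3x3 neighbours."""
    d = np.asarray(depth, float)
    out = d.copy()
    H, W = d.shape
    for i in range(H):
        for j in range(W):
            if d[i, j] == 0:
                patch = d[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                valid = patch[patch > 0]       # ignore neighbouring holes
                if valid.size:
                    out[i, j] = np.median(valid)
    return out
```

Larger holes would need repeated passes (or morphological closing), since a single pass only fills pixels that have at least one valid neighbour.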

  12. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising.

    PubMed

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds, according to six quantitative measures. For the experiments on clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve image quality more effectively than the NLM and KSVD denoising methods.
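
The "PCA plus linear minimum mean square error shrinkage" step on a group of similar patches can be sketched as follows (a simplified sketch: the adaptive search windows and tensor formulation of the paper are omitted, and the function name is illustrative):

```python
import numpy as np

def pca_lmmse_denoise(patches, noise_var):
    """Denoise a group of similar patches with PCA + LMMSE coefficient shrinkage.

    patches: (n_patches, patch_size) array of vectorised similar patches.
    """
    patches = np.asarray(patches, float)
    mean = patches.mean(axis=0)
    X = patches - mean
    evals, evecs = np.linalg.eigh(X.T @ X / len(X))  # eigen-decomposition of covariance
    coef = X @ evecs                                 # project onto principal axes
    signal_var = np.maximum(evals - noise_var, 0.0)  # estimated clean variance per axis
    shrink = signal_var / np.maximum(evals, 1e-12)   # LMMSE (Wiener-like) gain in [0, 1]
    return (coef * shrink) @ evecs.T + mean
```

Axes whose variance is explained entirely by noise get a gain near zero, while axes carrying patch structure are kept almost unchanged.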

  13. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds, according to six quantitative measures. For the experiments on clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  14. Denoising for 3-d photon-limited imaging data using nonseparable filterbanks.

    PubMed

    Santamaria-Pang, Alberto; Bildea, Teodor Stefan; Tan, Shan; Kakadiaris, Ioannis A

    2008-12-01

    In this paper, we present a novel frame-based denoising algorithm for photon-limited 3-D images. We first construct a new 3-D nonseparable filterbank by adding elements to an existing frame in a structurally stable way. In contrast with the traditional 3-D separable wavelet system, the new filterbank is capable of using edge information in multiple directions. We then propose a data-adaptive hysteresis thresholding algorithm based on this new 3-D nonseparable filterbank. In addition, we develop a new validation strategy for denoising of photon-limited images containing sparse structures, such as neurons (the structure of interest is less than 5% of total volume). The validation method, based on tubular neighborhoods around the structure, is used to determine the optimal threshold of the proposed denoising algorithm. We compare our method with other state-of-the-art methods and report very encouraging results on applications utilizing both synthetic and real data.

  15. Kernel regression based feature extraction for 3D MR image denoising.

    PubMed

    López-Rubio, Ezequiel; Florentín-Núñez, María Nieves

    2011-08-01

    Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent times. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the tridimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted in a zeroth order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small-sized feature vectors associated with each pixel. In turn, these features come from a second order 3D kernel regression estimation of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, the Rician noise level is automatically estimated without any need of human intervention, i.e. our method is fully automated. Experimental results over synthetic and real images demonstrate that our proposal achieves good performance with respect to the other MRI denoising filters being compared.

  16. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2007-02-01

    Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used universal level independent thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software and showed reconstruction using denoised signals improved image quality by 21%, using a relative 2-norm difference scheme.

  17. Denoising 3D MR images by the enhanced non-local means filter for Rician noise.

    PubMed

    Liu, Hong; Yang, Cihui; Pan, Ning; Song, Enmin; Green, Richard

    2010-12-01

    The non-local means (NLM) filter removes noise by calculating the weighted average of the pixels in the global area and shows superiority over existing local filter methods that only consider local neighbor pixels. This filter has been successfully extended from 2D images to 3D images and has been applied to denoising 3D magnetic resonance (MR) images. In this article, a novel filter based on the NLM filter is proposed to improve the denoising effect. Considering the characteristics of Rician noise in the MR images, denoising by the NLM filter is first performed on the squared magnitude images. Then, unbiased correction is carried out to eliminate the biased deviation. When applying the NLM filter, the weight is calculated based on the Gaussian-filtered image to reduce the disturbance of the noise. The performance of this filter is evaluated by carrying out a qualitative and quantitative comparison of this method with three other filters, namely, the original NLM filter, the unbiased NLM (UNLM) filter and the Rician NLM (RNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance over the other filters being compared.
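
The baseline NLM filter that this work builds on can be sketched in 2D with explicit loops (a naive O(n^2) sketch for clarity; the Rician-specific squared-magnitude and bias-correction steps of the paper are not included, and parameter names are illustrative):

```python
import numpy as np

def nlm_denoise(img, patch=1, search=5, h=0.1):
    """Naive non-local means: each output pixel is a weighted average over a
    search window, with weights from squared patch distances.

    patch, search: half-widths of the comparison patch and search window.
    h: filtering strength.
    """
    img = np.asarray(img, float)
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = p[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    acc += w * p[ni, nj]
            out[i, j] = acc / wsum
    return out
```

The 3D extensions discussed in the abstract replace the 2D patches and search windows with cubic ones; the weighting rule is unchanged.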

  18. A comparative study of new and current methods for dental micro-CT image denoising

    PubMed Central

    Lashgari, Mojtaba; Qin, Jie; Swain, Michael

    2016-01-01

    Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches and the block-matching and three-dimensional (BM3D) method and total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR-Edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR-Tex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
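
The contrast-to-noise ratio used here for quantitative evaluation is a simple region-based statistic; a minimal sketch (mask-based, with the common definition normalising by the background standard deviation):

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a region of interest and background.

    roi_mask, bg_mask: boolean arrays the same shape as img.
    """
    img = np.asarray(img, float)
    roi, bg = img[roi_mask], img[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()
```

Variants differ mainly in how the noise term is defined (background std, pooled std, or a noise estimate from a homogeneous region).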

  19. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced to be small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  20. Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco

    2016-10-01

    The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine (ELM) algorithm has been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information in the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and it is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve the accuracy results by performing not only a 2-D denoising for every spectral band, but also a previous additional 1-D spectral signature denoising applied to each pixel vector of the image. For each denoising step, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in terms of classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.

  1. Improved DCT-Based Nonlocal Means Filter for MR Images Denoising

    PubMed Central

    Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu

    2012-01-01

    The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights using the gray level information directly, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Due to promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with a more accurate estimation of weights and thus enhances denoising effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely, the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others. PMID:22545063
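
Computing a similarity weight in the DCT subspace of a patch can be sketched as follows (an illustrative sketch: for simplicity this applies a 1-D DCT to the vectorised patch, whereas the paper works with the DCT of the 2-D neighborhood; `n_keep` and `h` are assumed parameters):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    M[0] /= np.sqrt(2.0)  # orthonormal scaling for the DC row
    return M

def dct_patch_weight(p1, p2, h, n_keep=None):
    """NLM-style similarity weight from the DCT coefficients of two patches.

    Optionally compares only the n_keep low-frequency coefficients,
    exploiting the energy compaction of the DCT.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    M = dct_matrix(p1.size)
    c1, c2 = M @ p1.ravel(), M @ p2.ravel()
    if n_keep is not None:
        c1, c2 = c1[:n_keep], c2[:n_keep]
    return np.exp(-np.sum((c1 - c2) ** 2) / h ** 2)
```

Because the DCT is orthonormal, comparing all coefficients is equivalent to comparing the patches directly; the gain comes from truncating to the low-frequency coefficients, where signal energy concentrates and noise is partly discarded.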

  2. Dependence of ventilation image derived from 4D CT on deformable image registration and ventilation algorithms.

    PubMed

    Latifi, Kujtim; Forster, Kenneth M; Hoffe, Sarah E; Dilling, Thomas J; van Elmpt, Wouter; Dekker, Andre; Zhang, Geoffrey G

    2013-07-08

    Ventilation imaging using 4D CT is a convenient and low-cost functional imaging methodology which might be of value in radiotherapy treatment planning to spare functional lung volumes. Deformable image registration (DIR) is needed to calculate ventilation images from 4D CT. This study investigates the dependence of calculated ventilation on DIR methods and ventilation algorithms. DIR of the normal end-expiration and normal end-inspiration phases of the 4D CT images was used to correlate the voxels between the two respiratory phases. Three different DIR algorithms, optical flow (OF), diffeomorphic demons (DD), and diffeomorphic morphons (DM), were retrospectively applied to ten esophagus and ten lung cancer cases with 4D CT image sets that encompassed the entire lung volume. Three ventilation calculation algorithms were used, based on the Jacobian, the change in voxel volume (ΔV), or values calculated directly from Hounsfield units (HU). They were compared using the Dice similarity coefficient (DSC) index and Bland-Altman plots. Dependence of ventilation images on the DIR was greater for the ΔV and Jacobian methods than for the HU method. The DSC index for 20% of low-ventilation volume for ΔV was 0.33 ± 0.03 (1 SD) between OF and DM, 0.44 ± 0.05 between OF and DD, and 0.51 ± 0.04 between DM and DD. The similarity comparisons for the Jacobian method were 0.32 ± 0.03, 0.44 ± 0.05, and 0.51 ± 0.04, respectively, and for HU they were 0.53 ± 0.03, 0.56 ± 0.03, and 0.76 ± 0.04, respectively. Dependence of extracted ventilation on the ventilation algorithm used showed good agreement between the ΔV and Jacobian methods, but differed significantly for the HU method. The DSC index using OF as the DIR was 0.86 ± 0.01 between ΔV and Jacobian, 0.28 ± 0.04 between ΔV and HU, and 0.28 ± 0.04 between Jacobian and HU, respectively. When using DM or DD as DIR, similar values were obtained when
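
The Dice similarity coefficient used for these overlap comparisons is a standard measure on binary masks and can be sketched directly:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

In the study above, the masks would be, for example, the voxels belonging to the lowest 20% of ventilation under two different DIR or ventilation algorithms.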

  3. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., Non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of the NL-means denoising on this quality metric Q. Our experiments are based on the late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimation of optimal parameters is provided using another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.

  4. Denoising and Multivariate Analysis of Time-Of-Flight SIMS Images

    SciTech Connect

    Wickes, Bronwyn; Kim, Y.; Castner, David G.

    2003-08-30

    Time-of-flight SIMS (ToF-SIMS) imaging offers a modality for simultaneously visualizing the spatial distribution of different surface species. However, the utility of ToF-SIMS datasets may be limited by their large size, degraded mass resolution and low ion counts per pixel. Through denoising and multivariate image analysis, regions of similar chemistries may be differentiated more readily in ToF-SIMS image data. Three established denoising algorithms (down-binning, boxcar and wavelet filtering) were applied to ToF-SIMS images of different surface geometries and chemistries. The effect of these filters on the performance of principal component analysis (PCA) was evaluated in terms of the capture of important chemical image features in the principal component score images, the quality of the principal component

  5. The Application of Wavelet-Domain Hidden Markov Tree Model in Diabetic Retinal Image Denoising.

    PubMed

    Cui, Dong; Liu, Minmin; Hu, Lei; Liu, Keju; Guo, Yongxin; Jiao, Qing

    2015-01-01

    The wavelet-domain Hidden Markov Tree model can properly describe the dependence and correlation of the wavelet coefficients of fundus angiographic images across scales. Based on the construction of Hidden Markov Tree models and Gaussian mixture models for fundus angiographic images, this paper applied the expectation-maximization algorithm to estimate the wavelet coefficients of the original fundus angiographic images, and Bayesian estimation to achieve the goal of denoising. As shown in the experimental results, compared with other algorithms such as the mean filter and the median filter, this method effectively improved the peak signal to noise ratio of fundus angiographic images after denoising and preserved the details of vascular edges.

  6. Variance stabilizing transformations in patch-based bilateral filters for poisson noise image denoising.

    PubMed

    de Decker, Arnaud; Lee, John Aldo; Verleysen, Michel

    2009-01-01

    Denoising is a key step in the processing of medical images. It aims at improving both the interpretability and visual aspect of the images. Yet, designing a robust and efficient denoising tool remains an unsolved challenge, and a specific issue concerns the noise model. Many filters typically assume that noise is additive and Gaussian, with uniform variance. In contrast, noise in medical images often has more complex properties. This paper considers images with Poissonian noise and the patch-based bilateral filters, that is, filters that involve a tonal kernel and pairwise comparisons between shifted blocks of the images. The main aim is then to integrate two variance stabilizing transformations that allow the filters to work with Gaussianized noise. The performances of these filters are compared to those of the classical bilateral filter with the same transformations. The experiments include an artificial benchmark as well as a positron emission tomography image.
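
The best-known variance stabilizing transformation for Poisson noise is the Anscombe transform, which maps Poisson data to approximately unit-variance Gaussian data; a minimal sketch (the paper compares two such transformations, not necessarily this exact pair of functions):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilising transform for Poisson noise (output std ~ 1)."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (a debiased inverse is preferable in practice)."""
    return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0
```

A Gaussian-noise filter such as the bilateral filter can then be applied between the forward and inverse transforms.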

  7. Edge structure preserving 3D image denoising by local surface approximation.

    PubMed

    Qiu, Peihua; Mukherjee, Partha Sarathi

    2012-08-01

    In various applications, including magnetic resonance imaging (MRI) and functional MRI (fMRI), 3D images are becoming increasingly popular. To improve the reliability of subsequent image analyses, 3D image denoising is often a necessary preprocessing step, which is the focus of the current paper. In the literature, most existing image denoising procedures are for 2D images. Their direct extensions to 3D cases generally cannot handle 3D images efficiently because the structure of a typical 3D image is substantially more complicated than that of a typical 2D image. For instance, edge locations are surfaces in 3D cases which would be much more challenging to handle compared to edge curves in 2D cases. We propose a novel 3D image denoising procedure in this paper, based on local approximation of the edge surfaces using a set of surface templates. An important property of this method is that it can preserve edges and major edge structures (e.g., intersections of two edge surfaces and pointed corners). Numerical studies show that it works well in various applications.

  8. Image denoising using trivariate shrinkage filter in the wavelet domain and joint bilateral filter in the spatial domain.

    PubMed

    Yu, Hancheng; Zhao, Li; Wang, Haixian

    2009-10-01

    This correspondence proposes an efficient algorithm for removing Gaussian noise from corrupted images by incorporating a wavelet-based trivariate shrinkage filter with a spatial-based joint bilateral filter. In the wavelet domain, the wavelet coefficients are modeled as a trivariate Gaussian distribution, taking into account the statistical dependencies among intrascale wavelet coefficients, and then a trivariate shrinkage filter is derived by using the maximum a posteriori (MAP) estimator. Although wavelet-based methods are efficient in image denoising, they are prone to producing salient artifacts such as low-frequency noise and edge ringing which relate to the structure of the underlying wavelet. On the other hand, most spatial-based algorithms produce much higher quality denoised images with fewer artifacts. However, they are usually too computationally demanding. In order to reduce the computational cost, we develop an efficient joint bilateral filter by using the wavelet denoising result rather than directly processing the noisy image in the spatial domain. This filter can suppress the noise while preserving image details at small computational cost. An extension to color image denoising is also presented. We compare our denoising algorithm with other denoising techniques in terms of PSNR and visual quality. The experimental results indicate that our algorithm is competitive with other denoising techniques.

  9. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain the translation invariance as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures along with properties of fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive to the state-of-the-art denoising approaches.

  10. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal-to-noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a broad range of de-noising problems. However, it has limitations such as oscillation of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing, and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.
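    The final step the abstract describes, thresholding the magnitude of complex wavelet coefficients, can be sketched independently of any particular DTCWT implementation. A minimal sketch assuming soft thresholding; shrinking the magnitude while keeping the phase is what distinguishes this from thresholding real and imaginary parts separately:

```python
import numpy as np

def soft_threshold_complex(coeffs, t):
    """Soft-threshold the magnitude of complex coefficients while
    preserving their phase (DTCWT-style shrinkage)."""
    mag = np.abs(coeffs)
    shrunk = np.maximum(mag - t, 0.0)
    # scale factor is 0 where the magnitude falls below the threshold
    scale = np.where(mag > 0, shrunk / np.maximum(mag, 1e-12), 0.0)
    return coeffs * scale
```

In a full pipeline this function would be applied subband by subband to the complex coefficients of a forward DTCWT before inverting the transform.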

  11. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase

  12. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to tumor size overestimation of up to 150% and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
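    The OSEM-TV reconstruction above builds on the classic EM update for Poisson emission data. A toy MLEM sketch (no ordered subsets, no TV term, and a dense system matrix `A` as an assumption) illustrating the multiplicative update that the paper's algorithm extends:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM for Poisson emission data:
    x <- x * A^T (y / (A x)) / A^T 1.
    A: (n_detectors, n_voxels) nonnegative system matrix, y: measured counts."""
    x = np.ones(A.shape[1])                 # flat nonnegative start
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Ordered subsets (the OS in OSEM) accelerate this by cycling the update over disjoint groups of projection rows; the TV term adds a regularization step between updates.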

  13. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally chosen to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
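    When the denoiser is a black box, the divergence term of such risk estimators is commonly approximated with a random probe. The sketch below shows this Monte-Carlo approach for the purely Gaussian special case (SURE), not the full PG-URE of the paper; the function and parameter names are illustrative:

```python
import numpy as np

def monte_carlo_sure(y, denoiser, sigma, eps=1e-3, seed=None):
    """Monte-Carlo estimate of SURE for Gaussian noise of std `sigma`:
    risk ~ ||f(y)-y||^2/n - sigma^2 + 2*sigma^2*div(f)/n,
    with div(f) approximated by one random binary probe b."""
    rng = np.random.default_rng(seed)
    n = y.size
    b = rng.choice([-1.0, 1.0], size=y.shape)       # probe direction
    fy = denoiser(y)
    # finite-difference directional derivative gives the divergence estimate
    div = np.sum(b * (denoiser(y + eps * b) - fy)) / eps
    return (np.sum((fy - y) ** 2) - n * sigma ** 2 + 2 * sigma ** 2 * div) / n
```

Minimizing this estimate over a denoiser's parameters approximates minimizing the true MSE without access to the ground truth.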

  14. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework for dealing with this type of mathematical object. The singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery, achieved by finding a low-rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The best multilinear rank approximation (BMRA) of a given tensor A is a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the alternating least squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable performance is achievable with both ALS and Newton-type methods. Moreover, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
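    A minimal sketch of a multilinear-rank truncation via the truncated HOSVD, which is the standard initializer for the ALS/HOOI iterations the paper compares (illustrative NumPy only, not the authors' Grassmann-manifold implementation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Multilinear-rank-(r1, r2, r3) approximation of a 3-way tensor:
    truncate the left singular vectors of each unfolding, then project."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # core = T x1 U1^T x2 U2^T x3 U3^T
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    # approx = core x1 U1 x2 U2 x3 U3
    approx = core
    for mode, U in enumerate(factors):
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx
```

ALS/HOOI would refine the factor matrices by re-solving each mode's SVD against the projection onto the other modes' subspaces; the truncated HOSVD alone is already quasi-optimal and is often used as its starting point.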

  15. Radiolucent 4D Ultrasound Imaging: System Design and Application to Radiotherapy Guidance.

    PubMed

    Schlosser, Jeffrey; Hristov, Dimitre

    2016-04-27

    Four-dimensional (4D) ultrasound (US) is an attractive modality for image guidance due to its real-time, non-ionizing, volumetric imaging capability with high soft tissue contrast. However, existing 4D US imaging systems contain large volumes of metal which interfere with diagnostic and therapeutic ionizing radiation in procedures such as CT imaging and radiation therapy. This study aimed to design and characterize a novel 4D Radiolucent Remotely-Actuated UltraSound Scanning (RRUSS) device that overcomes this limitation. In a phantom, we evaluated the imaging performance of the RRUSS device including frame rate, resolution, spatial integrity, and motion tracking accuracy. To evaluate compatibility with radiation therapy workflow, we evaluated device-induced CT imaging artifacts, US tracking performance during beam delivery, and device compatibility with commercial radiotherapy planning software. The RRUSS device produced 4D volumes at 0.1-3.0 Hz with 60° lateral field of view (FOV), 50° maximum elevational FOV, and 200 mm maximum depth. Imaging resolution (-3 dB point spread width) was 1.2-7.9 mm at depths up to 100 mm and motion tracking accuracy was ≤0.3±0.5 mm. No significant effect of the RRUSS device on CT image integrity was found, and RRUSS device performance was not affected by radiotherapy beam exposure. Agreement within ±3.0% / 2.0 mm was achieved between computed and measured radiotherapy dose delivered directly through the RRUSS device at 6 MV and 15 MV. In vivo liver, kidney, and prostate images were successfully acquired. Our investigations suggest that a RRUSS device can offer non-interfering 4D guidance for radiation therapy and other diagnostic and therapeutic procedures.

  17. Non-local neighbor embedding image denoising algorithm in sparse domain

    NASA Astrophysics Data System (ADS)

    Shi, Guo-chuan; Xia, Liang; Liu, Shuang-qing; Xu, Guo-ming

    2013-12-01

    To get better denoising results, prior knowledge of natural images should be taken into account to regularize the ill-posed inverse problem. In this paper, we propose an image denoising algorithm via non-local similar neighbor embedding in the sparse domain. First, a local statistical feature, namely the histogram of oriented gradients of image patches, is used to perform clustering; the whole training data set is thereby partitioned into a set of subsets which have similar local geometric structures, and the centroid of each subset is also obtained. Second, we apply principal component analysis (PCA) to learn a compact sub-dictionary for each cluster. Then, through sparse coding over the sub-dictionary and neighborhood selection, the image patch to be synthesized can be approximated by its top k neighbors. Extensive experimental results validate the effectiveness of the proposed method in terms of both PSNR and visual perception.

  18. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework to estimate the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using a motion-compensated estimation method. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.

  19. Denoising approach for remote sensing image based on anisotropic diffusion and wavelet transform algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaojun; Lai, Weidong

    2011-08-01

    In this paper, a combined method is put forward for denoising ASTER images, using a wavelet filter to attenuate the noise and an anisotropic diffusion PDE (partial differential equation) to further recover image contrast. The model is verified against different noise backgrounds, since remote sensing images usually contain salt-and-pepper, Gaussian, and speckle noise. Considering the characteristics of noise in the wavelet domain, a wavelet filter with a Bayesian estimation threshold is applied to recover image contrast from the blurred background. The proposed PDE performs anisotropic diffusion in the orthogonal direction, thus preserving edges during further denoising. Simulations indicate that the combined algorithm recovers blurred images from speckle and Gaussian noise backgrounds more effectively than wavelet denoising alone, and the denoising effect is also distinct when salt-and-pepper noise has low intensity. The combined algorithm proposed in this article can be integrated into remote sensing image analysis to obtain higher accuracy for environmental interpretation and pattern recognition.
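    The wavelet stage with a Bayesian estimation threshold can be illustrated with a BayesShrink-style rule. The sketch below uses a single-level Haar transform as a stand-in for the paper's wavelet, and the function names are illustrative: the noise level is estimated from the diagonal subband via the median absolute deviation, then each subband gets threshold t = σ_n²/σ_x:

```python
import numpy as np

def haar2d(x):
    """One-level 2D orthonormal Haar DWT; returns (LL, (LH, HL, HH))."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, (LH, HL, HH)

def bayes_threshold(HH, subband):
    """BayesShrink rule: sigma_n from the MAD of the diagonal subband,
    signal std from sigma_x^2 = max(sigma_y^2 - sigma_n^2, 0)."""
    sigma_n = np.median(np.abs(HH)) / 0.6745
    sigma_y2 = np.mean(subband ** 2)
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x
```

Soft-thresholding each detail subband with its own `bayes_threshold` value and inverting the transform completes the wavelet stage before the diffusion step.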

  20. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    PubMed

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the SNR of the projection data declines sharply.

  1. Denoising MR images using non-local means filter with combined patch and pixel similarity.

    PubMed

    Zhang, Xinyuan; Hou, Guirong; Ma, Jianhua; Yang, Wei; Lin, Bingquan; Xu, Yikai; Chen, Wufan; Feng, Yanqiu

    2014-01-01

    Denoising is critical for improving the visual quality and the reliability of associated quantitative analysis when magnetic resonance (MR) images are acquired with low signal-to-noise ratios. The classical non-local means (NLM) filter, which averages pixels weighted by the similarity of their neighborhoods, has been adapted and demonstrated to effectively reduce Rician noise without affecting edge details in MR magnitude images. However, the Rician NLM (RNLM) filter usually blurs small high-contrast particle details which might be clinically relevant information. In this paper, we investigated the reason for this particle blurring problem and proposed a novel particle-preserving RNLM filter with combined patch and pixel (RNLM-CPP) similarity. The results of experiments on both synthetic and real MR data demonstrate that the proposed RNLM-CPP filter preserves small high-contrast particle details better than the original RNLM filter while denoising MR images.
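    The classical NLM baseline that RNLM-CPP extends can be sketched compactly. The version below is plain Gaussian-noise NLM with a restricted search window; the Rician bias correction and the combined patch-pixel similarity of the paper are deliberately omitted:

```python
import numpy as np

def nlm(img, patch=1, search=5, h=0.15):
    """Basic non-local means: each pixel becomes a weighted average of
    pixels whose surrounding (2*patch+1)^2 patches are similar, searched
    within a (2*search+1)^2 window."""
    H, W = img.shape
    pad = patch + search
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    base = p[pad - patch:pad + H + patch, pad - patch:pad + W + patch]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = p[pad + dy - patch:pad + dy + H + patch,
                        pad + dx - patch:pad + dx + W + patch]
            diff = (shifted - base) ** 2
            # sum squared differences over the patch around each pixel
            d = np.zeros((H, W))
            for py in range(2 * patch + 1):
                for px in range(2 * patch + 1):
                    d += diff[py:py + H, px:px + W]
            w = np.exp(-d / (h * h * (2 * patch + 1) ** 2))
            out += w * p[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            norm += w
    return out / norm
```

The filtering parameter `h` plays the role of a bandwidth: larger values average more aggressively, and in the Rician setting the squared-magnitude statistics replace the plain squared differences used here.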

  2. 4D MR and attenuation map generation in PET/MR imaging using 4D PET derived deformation matrices: a feasibility study for lung cancer applications.

    PubMed

    Fayad, Hadi; Schmidt, Holger; Kuestner, Thomas; Visvikis, Dimitris

    2016-10-13

    Respiratory motion may reduce accuracy in fusion of functional and anatomical images using combined Positron emission tomography / Magnetic resonance (PET/MR) systems. Methodologies for the correction of respiratory motion in PET acquisitions using such systems are mostly based on the use of respiratory synchronized MR acquisitions to derive motion fields. Existing approaches based on tagging acquisitions may introduce artifacts in the MR images, while motion model approaches require the acquisition of training datasets. The objective of this work was to investigate the possibility of generating 4D MR images and associated attenuation maps (AMs) from a single static MR image combined with motion fields obtained from simultaneously acquired 4D non-attenuation corrected (NAC) PET images.

  3. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.

  4. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function

    PubMed Central

    2015-01-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  5. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA.

    PubMed

    Hosotani, Fumitaka; Inuzuka, Yuya; Hasegawa, Masaya; Hirobayashi, Shigeki; Misawa, Tadanobu

    2015-12-01

    In this paper, we propose a zero-mean white Gaussian noise removal method using high-resolution frequency analysis. It is difficult to separate the original image component from the noise component when using the discrete Fourier transform or discrete cosine transform for analysis, because sidelobes occur in the results. The 2D non-harmonic analysis (2D NHA) is a high-resolution frequency analysis technique that improves noise removal accuracy thanks to its sidelobe reduction feature. However, spectra generated by NHA are distorted because the image signal is non-stationary. In this paper, we analyze each region with a homogeneous texture in the noisy image. Non-uniform regions that arise from segmentation are analyzed by an extended 2D NHA method called Mask NHA. We conducted an experiment using a simulated image and found that Mask NHA denoising attains a higher peak signal-to-noise ratio (PSNR) than the state-of-the-art methods if a suitable segmentation result can be obtained from the input image, even though parameter optimization was incomplete. This experimental result exhibits the upper limit on the PSNR attainable by our Mask NHA denoising method. The performance of Mask NHA denoising is expected to approach this limit as the segmentation method improves.

  6. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity

    PubMed Central

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE) based denoising has attracted much attention in the field of medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth the image in a nonlinear way, which effectively removes noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power, based on pixel similarity, to improve the P-M model for low SNR. We also show that our proposed function stabilizes the P-M method. As experimental results show, our proposed function, a modified version of the P-M function, effectively improves the SNR and preserves edges better than the P-M functions at low SNR. PMID:26955563
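    The P-M model referenced above diffuses each pixel toward its neighbors with a conductance that decays with the local gradient, using either g1(d) = exp(-(d/k)^2) or g2(d) = 1/(1+(d/k)^2). A minimal explicit-scheme sketch (periodic borders via np.roll, illustrative defaults; the paper's fractional-power, similarity-based conductance is not reproduced):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, dt=0.2, option=1):
    """Perona-Malik anisotropic diffusion with the two classic
    conductance functions; dt <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    if option == 1:
        g = lambda d: np.exp(-(d / kappa) ** 2)       # favors high-contrast edges
    else:
        g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)  # favors wide regions
    for _ in range(n_iter):
        # one-sided differences toward the four neighbors
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

Small gradients (noise) see conductance near 1 and are smoothed away, while large gradients (edges) see conductance near 0 and are left intact, which is exactly the behavior the diffusive function controls.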

  8. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging, but they have different constraints and requirements. For both modalities, prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of the chest wall, diaphragm, and/or spine, so the patient cooperation needed by some of the gating and tracking techniques is difficult to obtain without causing discomfort. Moreover, we are interested in the mechanical function of the thorax in its natural form during tidal breathing. Therefore, free-breathing MRI acquisition is the ideal imaging modality for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This typically produces several thousand slices containing both anatomic and dynamic information, but it is not trivial to form a consistent and well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and needs neither breath holding nor any external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.

  9. Difference in performance between 3D and 4D CBCT for lung imaging: a dose and image quality analysis.

    PubMed

    Thengumpallil, Sheeba; Smith, Kathleen; Monnin, Pascal; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-11-08

    The aim of this study was to describe and compare the performance of 3D and 4D CBCT imaging modalities by measuring and analyzing the delivered dose and the image quality. The 3D (Chest) and 4D (Symmetry) CBCT Elekta XVI lung IGRT protocols were analyzed. Dose profiles were measured with TLDs inside a dedicated phantom, and the dosimetric indicator cone-beam dose index (CBDI) was evaluated. The image quality analysis was performed by assessing the contrast transfer function (CTF), the noise power spectrum (NPS), and the noise-equivalent quanta (NEQ). Artifacts were also evaluated by simulating irregular breathing variations. The two imaging modalities showed different dose distributions within the phantom: at the center, the 3D CBCT delivered twice the dose of the 4D CBCT. The CTF was strongly reduced by motion compared to static conditions, with a CTF reduction of 85% for the 3D CBCT and 65% for the 4D CBCT. The amplitude of the NPS was two times higher for the 4D CBCT than for the 3D CBCT. In the presence of motion, the NEQ of the 4D CBCT was 50% higher than that of the 3D CBCT. In the presence of breathing irregularities, the 4D CBCT protocol was mainly affected by view-aliasing artifacts, which are typical cone-beam artifacts, while the 3D CBCT protocol was mainly affected by duplication artifacts. The results showed that the 4D CBCT ensures a reasonable dose and better image quality when moving targets are involved compared to 3D CBCT. Therefore, 4D CBCT is a reliable imaging modality for lung free-breathing radiation therapy.

  10. 4D rotational x-ray imaging of wrist joint dynamic motion

    SciTech Connect

    Carelsen, Bart; Bakker, Niels H.; Strackee, Simon D.; Boon, Sjirk N.; Maas, Mario; Sabczynski, Joerg; Grimbergen, Cornelis A.; Streekstra, Geert J.

    2005-09-15

    Current methods for imaging joint motion are limited to either two-dimensional (2D) video fluoroscopy or animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images, but this involves several x-ray modalities and sophisticated 2D-to-3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method, a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized to the x-ray acquisition to yield multiple sets of projection images, which are reconstructed into a series of time-resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To assess image quality, the full width at half maximum (FWHM) of the point spread function (PSF), obtained via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were determined on reconstructions of a bullet-and-rod phantom, using 4D-RX as well as stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation and on 41 and 34 projection images of a moving phantom was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was 1.1, 1.7, and 2.2 mm orthogonal to the motion and 0.6, 0.7, and 1.0 mm parallel to the direction of motion, respectively. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used, not to the motion of the object. Using 41 projection images seems the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints.

  11. Real time image-based tracking of 4D ultrasound data.

    PubMed

    Øye, Ola Kristoffer; Wein, Wolfgang; Ulvang, Dag Magne; Matre, Knut; Viola, Ivan

    2012-01-01

    We propose a methodology to perform real time image-based tracking on streaming 4D ultrasound data, using image registration to deduce the positioning of each ultrasound frame in a global coordinate system. Our method provides an alternative approach to traditional external tracking devices used for tracking probe movements. We compare the performance of our method against magnetic tracking on phantom and liver data, and show that our method is able to provide results in agreement with magnetic tracking.

  12. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating the problems of algorithm implementation, the ill-posed inverse, regularization-parameter selection, and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
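
    For orientation only, a minimal integer-order (not fractional) TV-L2 denoiser can be sketched by gradient descent on a smoothed TV energy; the fractional-order derivative operators and the MM/conjugate-gradient machinery of the paper are not reproduced here:

```python
import numpy as np

def tv_l2_denoise(f, lam=0.1, n_iter=100, tau=0.1, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps).
    eps smooths the TV term so a plain gradient step is well defined."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (replicated boundary)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)
    return u
```

    Increasing `lam` smooths more aggressively; the blocky (staircasing) effect of this first-order model is exactly what the fractional-order formulation above is designed to avoid.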

  13. A joint inter- and intrascale statistical model for Bayesian wavelet based image denoising.

    PubMed

    Pizurica, Aleksandra; Philips, Wilfried; Lemahieu, Ignace; Acheroy, Marc

    2002-01-01

    This paper presents a new wavelet-based image denoising method, which extends a "geometrical" Bayesian framework. The new method combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales and spatial clustering of large coefficients near image edges. These three criteria are combined in a Bayesian framework. The spatial clustering properties are expressed in a prior model. The statistical properties concerning coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. The three main novelties with respect to related approaches are (1) the interscale-ratios of wavelet coefficients are statistically characterized and different local criteria for distinguishing useful coefficients from noise are evaluated, (2) a joint conditional model is introduced, and (3) a novel anisotropic Markov random field prior model is proposed. The results demonstrate an improved denoising performance over related earlier techniques.
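
    As generic background for wavelet-domain denoising (not the paper's joint inter-/intrascale model), shrinking detail coefficients by soft-thresholding can be sketched with a one-level orthonormal Haar transform:

```python
import numpy as np

def haar2(a):
    """One level of the orthonormal 2D Haar transform (even-sized input)."""
    lo = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)
    return ((lo[::2] + lo[1::2]) / np.sqrt(2), (lo[::2] - lo[1::2]) / np.sqrt(2),
            (hi[::2] + hi[1::2]) / np.sqrt(2), (hi[::2] - hi[1::2]) / np.sqrt(2))

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction)."""
    lo = np.empty((2 * ll.shape[0], ll.shape[1])); hi = np.empty_like(lo)
    lo[::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    out = np.empty((lo.shape[0], 2 * lo.shape[1]))
    out[:, ::2], out[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return out

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(img, t):
    """Keep the approximation band, soft-threshold the three detail bands."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

    The Bayesian method above replaces this fixed global threshold with a per-coefficient decision driven by magnitude, interscale evolution, and spatial clustering.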

  14. Laplacian based non-local means denoising of MR images with Rician noise.

    PubMed

    Bhujle, Hemalata V; Chaudhuri, Subhasis

    2013-11-01

    Magnetic resonance (MR) magnitude images are often corrupted by Rician noise, which arises from complex white Gaussian noise and is signal dependent. Considering the special characteristics of Rician noise, we carry out nonlocal means denoising on the squared magnitude images and compensate for the introduced bias. In this paper, we propose an algorithm which not only preserves edges and fine structures but also performs efficient denoising. For this purpose we use a Laplacian of Gaussian (LoG) filter in conjunction with a nonlocal means (NLM) filter. Further, to enhance the edges and to accelerate the filtering process, only a few similar patches are preselected on the basis of closeness in edge and inverted mean values. Experiments have been conducted on both simulated and clinical data sets. The qualitative and quantitative measures demonstrate the efficacy of the proposed method.
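
    The bias-compensation step rests on the standard Rician identity E[M^2] = A^2 + 2*sigma^2, where M is the noisy magnitude and A the true signal. A sketch of the idea follows, with a simple box filter standing in for the nonlocal means smoother the paper actually uses:

```python
import numpy as np

def box_smooth(img, r=2):
    """Stand-in smoother (box filter); the paper applies nonlocal means here."""
    pad = np.pad(img, r, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def rician_corrected_denoise(mag, sigma):
    """Denoise the squared magnitude, then subtract the 2*sigma^2 Rician bias
    (E[M^2] = A^2 + 2*sigma^2 for a Rician magnitude M with true signal A)."""
    sq = box_smooth(np.asarray(mag, dtype=float) ** 2)
    return np.sqrt(np.maximum(sq - 2.0 * sigma ** 2, 0.0))
```

    Working on the squared magnitude keeps the bias additive and constant, so it can be removed exactly after smoothing.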

  15. Population of anatomically variable 4D XCAT adult phantoms for imaging research and optimization

    SciTech Connect

    Segars, W. P.; Bond, Jason; Frush, Jack; Hon, Sylvia; Eckersley, Chris; Samei, E.; Williams, Cameron H.; Frush, D.; Feng Jianqiao; Tward, Daniel J.; Ratnanather, J. T.; Miller, M. I.

    2013-04-15

    Purpose: The authors previously developed the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. The XCAT consisted of highly detailed whole-body models for the standard male and female adult, including the cardiac and respiratory motions. In this work, the authors extend the XCAT beyond these reference anatomies by developing a series of anatomically variable 4D XCAT adult phantoms for imaging research, the first library of 4D computational phantoms. Methods: The initial anatomy of each phantom was based on chest-abdomen-pelvis computed tomography data from normal patients obtained from the Duke University database. The major organs and structures for each phantom were segmented from the corresponding data and defined using nonuniform rational B-spline surfaces. To complete the body, the authors manually added on the head, arms, and legs using the original XCAT adult male and female anatomies. The structures were scaled to best match the age and anatomy of the patient. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from the template XCAT phantom (male or female) to the target patient model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. Each new phantom was refined by checking for anatomical accuracy via inspection of the models. Results: Using these methods, the authors created a series of computerized phantoms with thousands of anatomical structures and modeling cardiac and respiratory motions. The database consists of 58 (35 male and 23 female) anatomically variable phantoms in total. Like the original XCAT, these phantoms can be combined with existing simulation packages to simulate realistic imaging data. 
Each new phantom contains parameterized models for the anatomy and the cardiac and respiratory motions and can, therefore, serve

  16. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means (NLM) image denoising algorithm is presented, based on the single motif of existing computed tomography (CT) images in medical archiving systems. The algorithm is carried out in two stages: preprocessing and actual processing. In the preprocessing stage, a sample neighborhood database is created using the data structure of locality-sensitive hashing. CT image noise is then removed by the non-local means algorithm, with the sample neighborhoods accessed quickly through locality-sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time, as compared to standard NLM, while effectively preserving image edges and details.
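
    A toy sketch of the idea (hypothetical, not the authors' hashing scheme) is to quantize each neighborhood into a coarse key so that similar patches collide in the same bucket, turning the expensive similar-patch search into a dictionary lookup:

```python
import numpy as np

def patch_key(patch, step=0.25):
    """Coarsely quantize a neighbourhood so similar patches share a key."""
    return tuple(np.round(np.asarray(patch, dtype=float) / step).astype(int).ravel())

def build_patch_table(img, r=1, step=0.25):
    """Hash every (2r+1)x(2r+1) neighbourhood; each bucket stores the centre
    pixel values of the patches that fell into it."""
    table = {}
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            key = patch_key(img[y - r:y + r + 1, x - r:x + r + 1], step)
            table.setdefault(key, []).append(float(img[y, x]))
    return table
```

    Denoising a pixel then averages the centre values found in its bucket instead of scanning a large search window, which is where the speed-up comes from.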

  17. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  18. Image Pretreatment Tools I: Algorithms for Map Denoising and Background Subtraction Methods.

    PubMed

    Cannistraci, Carlo Vittorio; Alessio, Massimo

    2016-01-01

    One of the critical steps in two-dimensional electrophoresis (2-DE) image pre-processing is denoising, which strongly affects both spot detection and pixel-based methods. The Median Modified Wiener Filter (MMWF), a new nonlinear adaptive spatial filter, proved to be a good denoising approach to use in practice with 2-DE. MMWF is suitable for global denoising and for the simultaneous removal of spikes and Gaussian noise, its best setting being invariant to the type of noise. The second critical step arises because 2-DE gel images may contain high levels of background, generated by the laboratory experimental procedures, which must be subtracted for accurate measurement of the proteomic optical density signals. Here we discuss an efficient mathematical method for background estimation that is suitable to apply even before 2-DE image spot detection, and that is based on the 3D mathematical morphology (3DMM) theory.

  19. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
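
    As a small, hedged illustration of one texture-attribute family named above, the GLCM for a single pixel offset and its "contrast" feature can be computed as follows (this is a generic textbook definition, not the authors' 83-feature pipeline; the image is assumed scaled to [0, 1]):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal (0, 1) offset,
    normalized to sum 1, plus its 'contrast' feature sum((i-j)^2 * m[i,j])."""
    q = np.clip((np.asarray(img, dtype=float) * levels).astype(int), 0, levels - 1)
    pairs = (q[:, :-1].ravel(), q[:, 1:].ravel())   # each pixel and its right neighbour
    m = np.zeros((levels, levels))
    np.add.at(m, pairs, 1.0)                        # unbuffered accumulation
    m /= m.sum()
    i, j = np.indices(m.shape)
    return m, float(((i - j) ** 2 * m).sum())
```

    A flat image yields zero contrast, while a fine checkerboard yields the maximum, which is why such features can discriminate noise levels and textures.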

  20. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  1. Median-modified Wiener filter provides efficient denoising, preserving spot edge and morphology in 2-DE image processing.

    PubMed

    Cannistraci, Carlo V; Montevecchi, Franco M; Alessio, Massimo

    2009-11-01

    Denoising is a fundamental early stage in 2-DE image analysis, strongly influencing spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter (median-modified Wiener filter, MMWF) is here compared with five well-established denoising techniques (median, Wiener, Gaussian, and polynomial-Savitzky-Golay filters; wavelet denoising) to suggest, by means of fuzzy sets evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance in spike and Gaussian denoising, respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best setting is noise-dependent. Conversely, MMWF, which ranked second in each single denoising category, was evaluated as the best filter for global denoising, its best setting being invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close-set spots, whereas MMWF, thanks to a novel filter effect (the drop-off effect), does not suffer from the erosion problem, preserves the morphology of close-set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimize spot edge aberrations while removing spike and Gaussian noise.
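
    One plausible reading of such a filter, sketched here under our own assumptions rather than as the authors' implementation, is the classical local Wiener estimator with the window mean replaced by the window median:

```python
import numpy as np

def mmwf(img, r=1, noise_var=None):
    """Local Wiener filtering with the window mean replaced by the median:
    out = median + gain * (img - median), gain = max(var - noise_var, 0)/var."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, r, mode='reflect')
    h, w = img.shape
    # stack all (2r+1)^2 shifted copies so median/var are per-pixel window stats
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * r + 1) for dx in range(2 * r + 1)])
    med = np.median(win, axis=0)
    var = win.var(axis=0)
    if noise_var is None:
        noise_var = var.mean()              # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return med + gain * (img - med)
```

    Using the median as the local baseline is what resists spikes and spot-edge erosion, while the Wiener gain still adapts the amount of smoothing to the local signal variance.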

  2. In-treatment 4D cone-beam CT with image-based respiratory phase recognition.

    PubMed

    Kida, Satoshi; Masutani, Yoshitaka; Yamashita, Hideomi; Imae, Toshikazu; Matsuura, Taeko; Saotome, Naoya; Ohtomo, Kuni; Nakagawa, Keiichi; Haga, Akihiro

    2012-07-01

    The use of respiration-correlated cone-beam computed tomography (4D-CBCT) appears to be crucial for implementing precise radiation therapy of lung cancer patients. The reconstruction of 4D-CBCT images requires a respiratory phase. In this paper, we propose a novel method based on an image-based phase recognition technique using normalized cross correlation (NCC). We constructed the respiratory phase by searching for a region in an adjacent projection that achieves the maximum correlation with a region in a reference projection along the cranio-caudal direction. The data on 12 lung cancer patients acquired just prior to treatment and on 3 lung cancer patients acquired during volumetric modulated arc therapy treatment were analyzed in the search for the effective area of cone-beam projection images for performing NCC with 12 combinations of registration area and segment size. The evaluation was done by a "recognition rate" defined as the ratio of the number of peak inhales detected with our method to that detected by eye (manual tracking). The average recognition rate of peak inhale with the most efficient area in the present method was 96.4%. The present method was feasible even when the diaphragm was outside the field of view. With the most efficient area, we reconstructed in-treatment 4D-CBCT by dividing the breathing signal into four phase bins; peak exhale, peak inhale, and two intermediate phases. With in-treatment 4D-CBCT images, it was possible to identify the tumor position and the tumor size in moments of inspiration and expiration, in contrast to in-treatment CBCT reconstructed with all projections.
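
    The core NCC matching step can be sketched as follows; the region coordinates and search range are illustrative, not the paper's 12 tested registration-area/segment-size combinations:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized regions."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def best_shift(ref_region, proj, row0, col0, search=10):
    """Cranio-caudal (row) shift of `proj` maximizing NCC with `ref_region`,
    which was taken at (row0, col0) in the previous projection."""
    rows, cols = ref_region.shape
    scores = {dy: ncc(ref_region, proj[row0 + dy:row0 + dy + rows,
                                       col0:col0 + cols])
              for dy in range(-search, search + 1)}
    return max(scores, key=scores.get)
```

    Tracking this shift from one projection to the next yields a breathing trace whose local extrema mark peak inhale/exhale, even when the diaphragm is outside the field of view.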

  3. Four-dimensional magnetic resonance imaging (4D-MRI) using image-based respiratory surrogate: A feasibility study

    PubMed Central

    Cai, Jing; Chang, Zheng; Wang, Zhiheng; Paul Segars, William; Yin, Fang-Fang

    2011-01-01

    Purpose: Four-dimensional computed tomography (4D-CT) has been widely used in radiation therapy to assess patient-specific breathing motion for determining individual safety margins. However, it has two major drawbacks: low soft-tissue contrast and an excessive imaging dose to the patient. This research aimed to develop a clinically feasible four-dimensional magnetic resonance imaging (4D-MRI) technique to overcome these limitations. Methods: The proposed 4D-MRI technique was achieved by continuously acquiring axial images throughout the breathing cycle using fast 2D cine-MR imaging, and then retrospectively sorting the images by respiratory phase. The key component of the technique was the use of body area (BA) of the axial MR images as an internal respiratory surrogate to extract the breathing signal. The validation of the BA surrogate was performed using 4D-CT images of 12 cancer patients by comparing the respiratory phases determined using the BA method to those determined clinically using the Real-time position management (RPM) system. The feasibility of the 4D-MRI technique was tested on a dynamic motion phantom, the 4D extended Cardiac Torso (XCAT) digital phantom, and two healthy human subjects. Results: Respiratory phases determined from the BA matched closely to those determined from the RPM: mean (±SD) difference in phase: −3.9% (±6.4%); mean (±SD) absolute difference in phase: 10.40% (±3.3%); mean (±SD) correlation coefficient: 0.93 (±0.04). In the motion phantom study, 4D-MRI clearly showed the sinusoidal motion of the phantom; image artifacts observed were minimal to none. Motion trajectories measured from 4D-MRI and 2D cine-MRI (used as a reference) matched excellently: the mean (±SD) absolute difference in motion amplitude: −0.3 (±0.5) mm. In the 4D-XCAT phantom study, the simulated “4D-MRI” images showed good consistency with the original 4D-XCAT phantom images. The motion trajectory of the hypothesized “tumor” matched
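
    The body-area (BA) surrogate reduces each axial image to a single number that rises and falls with respiration. A minimal sketch, with the threshold and the peak-picking rule as our own simplifying assumptions:

```python
import numpy as np

def body_area(axial_slice, threshold):
    """Body area surrogate: count of pixels brighter than a body/background
    intensity threshold."""
    return int((np.asarray(axial_slice) > threshold).sum())

def peak_inhale_indices(ba_series):
    """Local maxima of the BA signal, used to anchor respiratory phases."""
    ba = np.asarray(ba_series, dtype=float)
    return [i for i in range(1, len(ba) - 1)
            if ba[i] >= ba[i - 1] and ba[i] > ba[i + 1]]
```

    Sorting each acquired 2D cine frame into the phase bin given by its position between consecutive BA peaks is what turns the continuous acquisition into a 4D (3D + respiratory phase) dataset.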

  4. A general strategy for anisotropic diffusion in MR image denoising and enhancement.

    PubMed

    Tong, Chenchen; Sun, Ying; Payet, Nicolas; Ong, Sim-Heng

    2012-12-01

    Anisotropic diffusion (AD) has proven to be very effective in the denoising of magnetic resonance (MR) images. The result of AD filtering is highly dependent on several parameters, especially the conductance parameter. However, there is no automatic method to select the optimal parameter values. This paper presents a general strategy for AD filtering of MR images using an automatic parameter selection method. The basic idea is to estimate the parameters through an optimization step on a synthetic image model, which is different from traditional analytical methods. This approach can be easily applied to more sophisticated diffusion models for better denoising results. We conducted a systematic study of parameter selection for the AD filter, including the dynamic parameter decreasing rate, the parameter selection range for different noise levels and the influence of the image contrast on parameter selection. The proposed approach was validated using both simulated and real MR images. The model image generated using our approach was shown to be highly suitable for the purpose of parameter optimization. The results confirm that our method outperforms most state-of-the-art methods in both quantitative measurement and visual evaluation. By testing on real images with different noise levels, we demonstrated that our method is sufficiently general to be applied to a variety of MR images.
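
    The classical Perona-Malik scheme underlying such AD filters, with the conductance parameter kappa whose selection the paper automates, can be sketched as (parameter values here are illustrative):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, dt=0.2):
    """Perona-Malik diffusion with exponential conductance
    g(d) = exp(-(d/kappa)^2); small kappa preserves strong edges."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbours (wrap-around borders)
        diffs = [np.roll(u, s, axis=ax) - u for ax in (0, 1) for s in (1, -1)]
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u
```

    The result depends strongly on kappa, dt, and n_iter, which is precisely the sensitivity that motivates the automatic parameter-selection strategy described above.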

  5. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    Four dimensional image is 3D volume data that varies with time. It is used to express deforming or moving object in virtual surgery of 4D ultrasound. It is difficult to render 4D image by conventional ray-casting or shear-warp factorization methods because of their time-consuming rendering time or pre-processing stage whenever the volume data are changed. Even 3D texture mapping is used, repeated volume loading is also time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time using coherence between currently loaded volume and previously loaded volume in order to achieve real time rendering based on 3D texture mapping. Volume data are divided into small bricks and each brick being loaded is tested for similarity to one which was already loaded in memory. If the brick passed the test, it is defined as 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped into polygons and blended by OpenGL blending functions. All bricks undergo this test. Continuously deforming fifty volumes are rendered in interactive time with SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PC.

  6. 4-D Cardiac MR Image Analysis: Left and Right Ventricular Morphology and Function

    PubMed Central

    Wahle, Andreas; Johnson, Ryan K.; Scholz, Thomas D.; Sonka, Milan

    2010-01-01

    In this study, a combination of active shape model (ASM) and active appearance model (AAM) was used to segment the left and right ventricles of normal and Tetralogy of Fallot (TOF) hearts on 4-D (3-D+time) MR images. For each ventricle, a 4-D model was first used to achieve robust preliminary segmentation on all cardiac phases simultaneously and a 3-D model was then applied to each phase to improve local accuracy while maintaining the overall robustness of the 4-D segmentation. On 25 normal and 25 TOF hearts, in comparison to the expert traced independent standard, our comprehensive performance assessment showed subvoxel segmentation accuracy, high overlap ratios, good ventricular volume correlations, and small percent volume differences. Following 4-D segmentation, novel quantitative shape and motion features were extracted using shape information, volume-time and dV/dt curves, analyzed and used for disease status classification. Automated discrimination between normal/TOF subjects achieved 90%–100% sensitivity and specificity. The features obtained from TOF hearts show higher variability compared to normal subjects, suggesting their potential use as disease progression indicators. The abnormal shape and motion variations of the TOF hearts were accurately captured by both the segmentation and feature characterization. PMID:19709962

  7. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by three different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phases (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
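
    The principle of normalized convolution, shown here in a minimal 1D form (the paper uses a 2D Gaussian kernel over rotation angle and cardiac phase), is to filter both the certainty-weighted data and the certainty map with the same kernel and then divide:

```python
import numpy as np

def normalized_convolution(values, certainty, sigma):
    """NC interpolation: Gaussian-filter data*certainty and certainty alike,
    then take their ratio. certainty is 1 at samples and 0 at gaps."""
    radius = int(3 * sigma)
    g = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    num = np.convolve(values * certainty, g, mode='same')
    den = np.convolve(certainty, g, mode='same')
    return num / np.maximum(den, 1e-12)
```

    Because the denominator tracks how much evidence each output location actually received, NC handles irregular sampling gracefully, which is why it beats plain nearest-neighbor and binning on these sparse datasets.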

  8. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts, introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models has yet to be further improved. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as the state-of-the-art denoising algorithms.

  10. Image Denoising via Bandwise Adaptive Modeling and Regularization Exploiting Nonlocal Similarity.

    PubMed

    Xiong, Ruiqin; Liu, Hangfan; Zhang, Xinfeng; Zhang, Jian; Ma, Siwei; Wu, Feng; Gao, Wen

    2016-09-27

    This paper proposes a new image denoising algorithm based on adaptive signal modeling and regularization. It improves the quality of images by regularizing each image patch using bandwise distribution modeling in the transform domain. Instead of using a global model for all the patches in an image, it employs content-dependent adaptive models to address the non-stationarity of image signals and the diversity among different transform bands. The distribution model is adaptively estimated for each patch individually; it varies from one patch location to another and also varies across bands. In particular, we consider the estimated distribution to have a non-zero expectation. To estimate the expectation and variance parameters for every band of a particular patch, we exploit the nonlocal correlation in the image to collect a set of highly similar patches as the data samples that form the distribution. Irrelevant patches are excluded, so such an adaptively learned model is more accurate than a global one. The image is ultimately restored via bandwise adaptive soft-thresholding, based on a Laplacian approximation of the distribution of similar-patch group transform coefficients. Experimental results demonstrate that the proposed scheme outperforms several state-of-the-art denoising methods in both objective and perceptual quality.
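
    The final shrinkage step can be illustrated with the standard MAP estimator for a Laplacian prior under Gaussian noise; the non-zero expectation simply shifts the soft-threshold toward the band mean. This is a generic sketch, with parameter names of our own choosing:

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def bandwise_shrink(coeffs, band_mean, band_scale, noise_var):
    """MAP estimate for a Laplacian prior with (possibly nonzero) expectation:
    shrink toward band_mean with threshold noise_var / band_scale, where
    band_scale is the Laplacian scale estimated from the similar-patch group."""
    t = noise_var / max(band_scale, 1e-12)
    return band_mean + soft_threshold(np.asarray(coeffs, dtype=float) - band_mean, t)
```

    For example, `bandwise_shrink([5.0, 2.1, 1.9], band_mean=2.0, band_scale=1.0, noise_var=0.5)` leaves small fluctuations around the band mean at exactly 2.0 while only shrinking the outlying coefficient.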

  11. 3D and 4D Seismic Imaging in the Oilfield; the state of the art

    NASA Astrophysics Data System (ADS)

    Strudley, A.

    2005-05-01

    Seismic imaging in the oilfield context has seen enormous changes over the last 20 years, driven by a combination of improved subsurface illumination (2D to 3D), increased computational power and improved physical understanding. Today, Kirchhoff pre-stack migration (in time or depth) is the norm, with anisotropic parameterisation and finite difference methods being increasingly employed. In the production context, time-lapse (4D) seismic is of growing importance as a tool for monitoring reservoir changes to facilitate increased productivity and recovery. In this paper we present an overview of state-of-the-art technology in 3D and 4D seismic and look at future trends. Pre-stack Kirchhoff migration in time or depth is the imaging tool of choice for the majority of contemporary 3D datasets. Recent developments in 3D pre-stack imaging have been focussed around finite difference solutions to the acoustic wave equation, the so-called Wave Equation Migration (WEM) methods. Application of finite difference solutions to imaging is certainly not new; however, 3D pre-stack migration using these schemes is a relatively recent development, driven by the need for imaging complex geologic structures such as sub-salt, and facilitated by increased computational resources. Finally, there is a class of imaging methods referred to as beam migration. These methods may be based on either the wave equation or rays, but all operate on a localised (in space and direction) part of the wavefield. These methods offer a bridge between the computational efficiency of Kirchhoff schemes and the improved image quality of WEM methods. Just as 3D seismic has had a radical impact on the quality of the static model of the reservoir, 4D seismic is having a dramatic impact on the dynamic model. Repeat shooting of seismic surveys after a period of production (typically one to several years) reveals changes in pressure and saturation through changes in the seismic response.
The growth in interest in 4D seismic

  12. Retinal Image Denoising via Bilateral Filter with a Spatial Kernel of Optimally Oriented Line Spread Function

    PubMed Central

    He, Yunlong; Zhao, Yanna; Ren, Yanju; Gee, James

    2017-01-01

    Filtering is among the most fundamental operations of retinal image processing: the value of the filtered image at a given location is a function of the values in a local window centered at that location. However, preserving thin retinal vessels during the filtering process is challenging because of the vessels' small area and weak contrast against the background, a consequence of the limited resolution of imaging and the low blood flow in the vessels. In this paper, we present a novel retinal image denoising approach which is able to preserve the details of retinal vessels while effectively eliminating image noise. Specifically, our approach is carried out by determining an optimal spatial kernel for the bilateral filter, represented by a line spread function with an orientation and scale adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter. PMID:28261320
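    The kernel construction described above can be sketched as a bilateral filter whose spatial kernel is an anisotropic Gaussian elongated along a chosen orientation, standing in for the paper's optimally oriented line spread function. All parameter values and names below are illustrative, not taken from the paper.

```python
import numpy as np

def oriented_bilateral(img, theta, sigma_along=3.0, sigma_across=0.8,
                       sigma_r=0.1, radius=4):
    """Bilateral filter with an oriented (anisotropic Gaussian) spatial
    kernel -- a simple stand-in for a line spread function aligned with a
    vessel at angle theta. sigma_r controls the range (intensity) kernel."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # rotate coordinates: u runs along the assumed vessel direction
    u = xs * np.cos(theta) + ys * np.sin(theta)
    v = -xs * np.sin(theta) + ys * np.cos(theta)
    spatial = np.exp(-(u**2 / (2 * sigma_along**2) +
                       v**2 / (2 * sigma_across**2)))
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            w = spatial * rng                      # spatial x range weights
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

In practice the orientation theta would be estimated per pixel from the local structure; here it is passed in directly.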

  13. Application of the discrete torus wavelet transform to the denoising of magnetic resonance images of uterine and ovarian masses

    NASA Astrophysics Data System (ADS)

    Sarty, Gordon E.; Atkins, M. Stella; Olatunbosun, Femi; Chizen, Donna; Loewy, John; Kendall, Edward J.; Pierson, Roger A.

    1999-10-01

    A new numerical wavelet transform, the discrete torus wavelet transform, is described and an application is given to the denoising of abdominal magnetic resonance imaging (MRI) data. The discrete torus wavelet transform is an undecimated wavelet transform which is computed using a discrete Fourier transform and multiplication instead of by direct convolution in the image domain. This approach leads to a decomposition of the image onto frames in the space of square summable functions on the discrete torus, l2(T2). The new transform was compared to the traditional decimated wavelet transform in its ability to denoise MRI data. By using denoised images as the basis for the computation of a nuclear magnetic resonance spin-spin relaxation-time map through least squares curve fitting, an error map was generated that was used to assess the performance of the denoising algorithms. The discrete torus wavelet transform outperformed the traditional wavelet transform in 88% of the T2 error map denoising tests with phantoms and gynecologic MRI images.
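    The core mechanism of the torus transform — filtering by pointwise multiplication of discrete Fourier transforms, i.e., circular convolution on the discrete torus, instead of direct convolution — can be sketched as follows. The 3x3 low-pass kernel and single-level split are illustrative stand-ins for the paper's filter bank.

```python
import numpy as np

def circular_convolve2d(img, kernel):
    """Filter an image on the discrete torus: convolution with periodic
    boundary conditions, done as pointwise multiplication of 2-D DFTs."""
    H, W = img.shape
    K = np.zeros((H, W))
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    # centre the kernel at the origin so the filter output is not shifted
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(K)))

# one undecimated low-pass/high-pass split, in the spirit of an
# a trous scheme: approximation band plus a perfectly complementary detail band
lowpass = np.ones((3, 3)) / 9.0
rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
approx = circular_convolve2d(img, lowpass)
detail = img - approx        # approx + detail reconstructs img exactly
```

Because the transform is undecimated, every band has the full image size and reconstruction is a simple sum.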

  14. Imaging rotational dynamics of nanoparticles in liquid by 4D electron microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Xuewen; Chen, Bin; Tang, Jau; Hassan, Mohammed Th.; Zewail, Ahmed H.

    2017-02-01

    In real time and space, four-dimensional electron microscopy (4D EM) has enabled observation of transient structures and morphologies of inorganic and organic materials. We have extended 4D EM to include liquid cells without the time resolution being limited by the response of the detector. Our approach permits the imaging of the motion and morphological dynamics of a single, same particle on nanometer and ultrashort time scales. As a first application, we studied the rotational dynamics of gold nanoparticles in aqueous solution. A full transition from the conventional diffusive rotation to superdiffusive rotation and further to a ballistic rotation was observed with increasing asymmetry of the nanoparticle morphology. We explored the underlying physics both experimentally and theoretically according to the morphological asymmetry of the nanoparticles.

  15. Application of adaptive kinetic modelling for bias propagation reduction in direct 4D image reconstruction.

    PubMed

    Kotasidis, F A; Matthews, J C; Reader, A J; Angelis, G I; Zaidi, H

    2014-10-21

    Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to the limited counting statistics, leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and, in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary, more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals is then adaptively included back into the image, whilst preserving the primary model characteristics in other well modelled regions, using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [(15)O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters.
Using the adaptive 4D image reconstruction improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating

  16. Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.

    PubMed

    Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz

    2011-02-01

    Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. The statistical respiratory

  17. Application of curvelet transform for denoising of CT images

    NASA Astrophysics Data System (ADS)

    Ławicki, Tomasz; Zhirnova, Oxana

    2015-09-01

    The paper presents a method of noise reduction in CT images by the curvelet transform. Noise affects the ability to visualize pathologic features and the structure of living tissues in CT. Noise in CT images depends on the number of discrete x-ray photons reaching the detector, and it reduces the visibility of low-contrast areas and objects. A noisy image may not be properly interpreted by a physician, especially in the detection of pathological changes in tissues. The tests were performed with the Shepp-Logan test image with additive Gaussian noise.

  18. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research

    PubMed Central

    Paul Segars, W.; Tsui, Benjamin M. W.

    2012-01-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal “hybrid” models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. Like stylized models, they can easily be

  19. Denoising techniques combined to Monte Carlo simulations for the prediction of high-resolution portal images in radiotherapy treatment verification

    NASA Astrophysics Data System (ADS)

    Lazaro, D.; Barat, E.; Le Loirec, C.; Dautremer, T.; Montagu, T.; Guérin, L.; Batalla, A.

    2013-05-01

    This work investigates the possibility of combining Monte Carlo (MC) simulations with a denoising algorithm for the accurate prediction of images acquired using amorphous silicon (a-Si) electronic portal imaging devices (EPIDs). An accurate MC model of the Siemens OptiVue1000 EPID was first developed using the penelope code, integrating a non-uniform backscatter modelling. Two already existing denoising algorithms were then applied on simulated portal images, namely the iterative reduction of noise (IRON) method and the locally adaptive Savitzky-Golay (LASG) method. A third denoising method, based on a nonparametric Bayesian framework and called DPGLM (for Dirichlet process generalized linear model) was also developed. Performances of the IRON, LASG and DPGLM methods, in terms of smoothing capabilities and computation time, were compared for portal images computed for different values of the RMS pixel noise (up to 10%) in three different configurations: a heterogeneous phantom irradiated by a non-conformal 15 × 15 cm2 field, a conformal beam from a pelvis treatment plan, and an IMRT beam from a prostate treatment plan. For all configurations, DPGLM outperforms both IRON and LASG by providing better smoothing performances and demonstrating a better robustness with respect to noise. Additionally, no parameter tuning is required by DPGLM, which makes the denoising step very generic and easy to handle for any portal image. Concerning the computation time, the denoising of 1024 × 1024 images takes about 1 h 30 min, 2 h and 5 min using DPGLM, IRON, and LASG, respectively. This paper shows the feasibility of predicting accurate portal images, with the same resolution as real images, within a few hours by combining MC simulations with the DPGLM denoising algorithm.
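    The Savitzky-Golay idea underlying the LASG method (without its local adaptivity) is to fit a low-order polynomial by least squares in each sliding window and keep the fitted centre value. A minimal 1-D sketch, with illustrative window length and polynomial order:

```python
import numpy as np

def savgol_1d(y, window=7, order=2):
    """Minimal Savitzky-Golay smoother: the pseudoinverse row that
    evaluates the least-squares polynomial fit at the window centre gives
    fixed convolution coefficients."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
    coeffs = np.linalg.pinv(A)[0]                  # fit evaluated at x = 0
    ypad = np.pad(y, half, mode='edge')
    return np.convolve(ypad, coeffs[::-1], mode='valid')
```

A key property (and the reason the filter preserves peak shapes better than a moving average) is that it reproduces polynomials up to the chosen order exactly.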

  20. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image.

    PubMed

    Wang, Fei; Wang, Yibin; Yang, Meng; Zhang, Xuetao; Zheng, Nanning

    2017-01-26

    An Intensified Charge-Coupled Device (ICCD) image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics. (a) Different from the independent identically distributed (i.i.d.) noise in natural images, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) The pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapped patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in the pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserved sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process by changing relevant parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the reconstructed image by merging all the generated images into one. Experiments are conducted on an ICCD sensing image dataset, which verifies its subjective performance in removing the randomly clustered noise and preserving the real structure information in the ICCD sensing image.
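    The first step of the scheme — splitting the image into non-overlapping patches and labelling each one flat or structure — can be sketched as below. The paper's criterion is whether real structure information is present; a local-variance threshold is used here as a simple illustrative stand-in.

```python
import numpy as np

def classify_patches(img, patch=8, var_thresh=1e-3):
    """Decompose img into non-overlapping patch x patch blocks and label
    each block 'flat' or 'structure' by thresholding its variance.
    Returns {(row, col): label} keyed by the patch's top-left corner."""
    H, W = img.shape
    labels = {}
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            labels[(i, j)] = 'structure' if p.var() > var_thresh else 'flat'
    return labels
```

Each class would then be routed to its own denoiser (pseudo-time averaging for flat patches, structure-preserving sparse coding for structure patches).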

  1. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    PubMed Central

    Wang, Fei; Wang, Yibin; Yang, Meng; Zhang, Xuetao; Zheng, Nanning

    2017-01-01

    An Intensified Charge-Coupled Device (ICCD) image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics. (a) Different from the independent identically distributed (i.i.d.) noise in natural images, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) The pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapped patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in the pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserved sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process by changing relevant parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the reconstructed image by merging all the generated images into one. Experiments are conducted on an ICCD sensing image dataset, which verifies its subjective performance in removing the randomly clustered noise and preserving the real structure information in the ICCD sensing image. PMID:28134759

  2. Recursive Gauss-Seidel median filter for CT lung image denoising

    NASA Astrophysics Data System (ADS)

    Dewi, Dyah Ekashanti Octorina; Faudzi, Ahmad Athif Mohd.; Mengko, Tati Latifah; Suzumori, Koichi

    2017-02-01

    Poisson and Gaussian noise are known to affect Computed Tomography (CT) image quality during reconstruction. The standard median (SM) filter has been widely used to reduce unwanted impulsive noise; however, it cannot perform satisfactorily once the noise density is high. The recursive median (RM) filter has been proposed to optimize denoising, but at the cost of degraded image quality. In this paper, we propose a hybrid recursive Gauss-Seidel median (RGSM) filtering technique that uses Gauss-Seidel relaxation to enhance denoising while preserving image quality in the RM filter. First, SM filtering is performed, followed by Gauss-Seidel relaxation, and the two are combined to generate a secondary approximation solution. This scheme is iterated by feeding the secondary approximation solution into successive iterations, with progressive noise reduction accomplished at every stage; the last stage generates the final solution. Experiments on CT lung images show that the proposed technique achieves greater noise reduction than conventional RM filtering. The results also confirm better preservation of anatomical quality. The proposed technique may improve lung nodule segmentation and characterization performance.
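    The recursive median building block differs from the standard median filter in that each output value immediately replaces the input, so later windows already contain filtered pixels — the same in-place sweep pattern as Gauss-Seidel iteration. A sketch of that building block (the paper's RGSM adds a relaxation/combination stage on top, which is omitted here):

```python
import numpy as np

def recursive_median(img, radius=1):
    """Recursive median filter: sweep the image in raster order and write
    each median back into the working array immediately, so subsequent
    windows see already-filtered values (Gauss-Seidel-style update)."""
    out = np.pad(img.astype(float), radius, mode='edge')
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = out[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i + radius, j + radius] = np.median(win)  # in-place update
    return out[radius:-radius, radius:-radius]
```

Because filtered values propagate forward, the recursive variant suppresses dense impulsive noise more aggressively than the standard filter, at the cost of some detail loss — the trade-off the RGSM scheme targets.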

  3. Manifold learning for image-based breathing gating with application to 4D ultrasound.

    PubMed

    Wachinger, Christian; Yigitsoy, Mehmet; Navab, Nassir

    2010-01-01

    Breathing motion leads to a significant displacement and deformation of organs in the abdominal region. This makes the detection of the breathing phase for numerous applications necessary. We propose a new, purely image-based respiratory gating method for ultrasound. Further, we use this technique to provide a solution for breathing affected 4D ultrasound acquisitions with a wobbler probe. We achieve the gating with Laplacian eigenmaps, a manifold learning technique, to determine the low-dimensional manifold embedded in the high-dimensional image space. Since Laplacian eigenmaps assign each ultrasound frame a coordinate in low-dimensional space by respecting the neighborhood relationship, they are well suited for analyzing the breathing cycle. For the 4D application, we perform the manifold learning for each angle, and consecutively, align all the local curves and perform a curve fitting to achieve a globally consistent breathing signal. We performed the image-based gating on several 2D and 3D ultrasound datasets over time, and quantified its very good performance by comparing it to measurements from an external gating system.
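    The manifold-learning step above can be illustrated with a toy Laplacian-eigenmap embedding: build a k-nearest-neighbour similarity graph on the frames, form the graph Laplacian, and take the eigenvector of the smallest non-zero eigenvalue as the one-dimensional (here: breathing) coordinate. The graph construction and weights below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def laplacian_eigenmap(frames, n_neighbors=4):
    """1-D Laplacian-eigenmap embedding of a stack of image frames."""
    X = frames.reshape(len(frames), -1)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:n_neighbors + 1]        # skip self
        w = np.exp(-d2[i, nn] / d2[i, nn].mean())        # Gaussian weights
        W[i, nn] = w
        W[nn, i] = w                                     # keep W symmetric
    L = np.diag(W.sum(1)) - W                            # graph Laplacian
    vecs = np.linalg.eigh(L)[1]
    return vecs[:, 1]        # Fiedler vector = 1-D embedding coordinate
```

For frames lying on a one-dimensional manifold (one breathing cycle parameter), the Fiedler vector varies smoothly with that parameter, which is what makes it usable as a gating signal.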

  4. [Possibilities of 4D ultrasonography in imaging of the pelvic floor structures].

    PubMed

    Dlouhá, K; Krofta, L

    2011-12-01

    The technological boom of the last decades has brought urogynaecologists and other specialists new possibilities in imaging of the pelvic floor structures, which may substantially aid the search for the etiology of pelvic floor dysfunction. Magnetic resonance imaging (MRI) is an expensive, less accessible method and may pose certain discomfort to the patient. 3D/4D ultrasonography overcomes these disadvantages and brings new possibilities, especially in dynamic, real-time imaging, and consequently enables a focus on the functional anatomy of the complex of muscles and fascial structures of the pelvic floor. With 3D/4D ultrasound we can visualise the urethra and surrounding structures, the levator ani and the urogenital hiatus, and its changes during muscle contraction and the Valsalva maneuver. This method has great potential in the diagnostics of pelvic organ prolapse, and it may bring new knowledge of factors contributing to the loss of integrity of pelvic floor structures resulting in prolapse and incontinence. Studies exist which describe changes in the urogenital hiatus after vaginal delivery; however, further studies of large numbers of patients over longer periods of time are necessary before conclusions can be drawn for clinical practice.

  5. Modeling diffusion-weighted MRI as a spatially variant Gaussian mixture: Application to image denoising

    PubMed Central

    Gonzalez, Juan Eugenio Iglesias; Thompson, Paul M.; Zhao, Aishan; Tu, Zhuowen

    2011-01-01

    Purpose: This work describes a spatially variant mixture model constrained by a Markov random field to model high angular resolution diffusion imaging (HARDI) data. Mixture models suit HARDI well because the attenuation by diffusion is inherently a mixture. The goal is to create a general model that can be used in different applications. This study focuses on image denoising and segmentation (primarily the former). Methods: HARDI signal attenuation data are used to train a Gaussian mixture model in which the mean vectors and covariance matrices are assumed to be independent of spatial locations, whereas the mixture weights are allowed to vary at different lattice positions. Spatial smoothness of the data is ensured by imposing a Markov random field prior on the mixture weights. The model is trained in an unsupervised fashion using the expectation maximization algorithm. The number of mixture components is determined using the minimum message length criterion from information theory. Once the model has been trained, it can be fitted to a noisy diffusion MRI volume by maximizing the posterior probability of the underlying noiseless data in a Bayesian framework, recovering a denoised version of the image. Moreover, the fitted probability maps of the mixture components can be used as features for posterior image segmentation. Results: The model-based denoising algorithm proposed here was compared on real data with three other approaches that are commonly used in the literature: Gaussian filtering, anisotropic diffusion, and Rician-adapted nonlocal means. The comparison shows that, at low signal-to-noise ratio, when these methods falter, our algorithm considerably outperforms them. When tractography is performed on the model-fitted data rather than on the noisy measurements, the quality of the output improves substantially. Finally, ventricle and caudate nucleus segmentation experiments also show the potential usefulness of the mixture probability maps for

  6. Normalized iterative denoising ghost imaging based on the adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, Gaoliang; Yang, Zhaohua; Zhao, Yan; Yan, Ruitao; Liu, Xia; Liu, Baolei

    2017-02-01

    An approach for improving ghost imaging (GI) quality is proposed. In this paper, an iteration model based on normalized GI is built through theoretical analysis. An adaptive threshold value is selected in the iteration model. The initial value of the iteration model is estimated as a step to remove the correlated noise. The simulation and experimental results reveal that the proposed strategy reconstructs a better image than traditional and normalized GI, without adding complexity. The NIDGI-AT scheme does not require prior information regarding the object, and can also choose the threshold adaptively. More importantly, the signal-to-noise ratio (SNR) of the reconstructed image is greatly improved. Therefore, this methodology represents another step towards practical real-world applications.

  7. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-less signal values using the observed MR data samples within local neighborhoods. This is not an efficient procedure to deal with this issue, since the 3D MR data intrinsically include many similar samples that can be used to improve the estimation results. To overcome this problem, we model MR data as random fields and establish a principled way which is capable of choosing the samples not only from a local neighborhood but also from a large portion of the given data. To locate similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are automatically chosen from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing the noise and preserving the anatomical structures of MR images.
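    The conventional local-neighborhood LMMSE estimator that the paper improves upon can be sketched as follows: each pixel is shrunk toward its neighbourhood mean by the estimated local signal fraction. This is a Gaussian-noise simplification for illustration; the paper works with the Rician model and chooses samples non-locally.

```python
import numpy as np

def lmmse_denoise(img, noise_var, radius=2):
    """Local LMMSE estimator: x_hat = mu + gain * (y - mu), where
    gain = max(local_var - noise_var, 0) / local_var. In flat regions the
    gain is near 0 (output ~ local mean); near edges it is near 1
    (output ~ observation), which preserves structure."""
    pad = np.pad(img, radius, mode='reflect')
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            mu, var = win.mean(), win.var()
            gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out
```

The paper's contribution is essentially to replace the fixed local window with similar samples gathered from a large portion of the volume, and to estimate the noise level locally rather than assume it known.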

  8. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, focused on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.
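    The reference (single-threaded) non-local means filter that such parallel implementations accelerate replaces each pixel by a weighted average over a search window, with weights given by patch similarity. A 2-D sketch with illustrative patch/search sizes; its nested loops are exactly what the paper maps onto hybrid CPU/GPU hardware.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Plain non-local means: weight each candidate pixel in the search
    window by exp(-patch_distance / h^2) and average. O(N * search^2)
    per image -- the cost that motivates parallelisation."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode='reflect')
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    wsum += w
                    vsum += w * pad[ni, nj]
            out[i, j] = vsum / wsum
    return out
```

Each output pixel is independent of the others, which is why the filter parallelises so well across threads, GPU blocks, or cluster nodes.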

  9. Denoising time-resolved microscopy image sequences with singular value thresholding.

    PubMed

    Furnival, Tom; Leary, Rowan K; Midgley, Paul A

    2016-05-10

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second.
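    The low-rank recovery step described above can be sketched as singular value thresholding on a Casorati (frames x pixels) matrix: take the SVD, soft-threshold the singular values, and rebuild the sequence. The paper selects the threshold automatically with an unbiased risk estimator; here it is user-supplied for simplicity.

```python
import numpy as np

def svt_denoise(stack, tau):
    """Singular value thresholding of an image sequence.
    stack: (T, H, W) array of frames; tau: soft-threshold level."""
    T, H, W = stack.shape
    M = stack.reshape(T, H * W)                 # Casorati matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                # soft-threshold spectrum
    return ((U * s) @ Vt).reshape(T, H, W)
```

Temporally correlated structure concentrates in a few large singular values, while i.i.d. noise spreads across all of them, so thresholding removes noise while keeping the dynamics.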

  10. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet as the one whose segmentation yields the largest area in the cell. We study different wavelet families and conclude that the db1 wavelet is the best, and it can serve for subsequent work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
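    The wavelet-thresholding denoising step with the db1 (Haar) wavelet can be sketched as a one-level 2-D Haar decomposition, soft-thresholding of the three detail bands, and reconstruction. Threshold value and level count are illustrative; the paper's full pipeline adds the morphological segmentation afterwards.

```python
import numpy as np

def haar_denoise(img, thresh):
    """One-level 2-D Haar (db1) transform of an even-sized image,
    soft-thresholding of the LH/HL/HH detail bands, and exact inverse."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0          # approximation
    LH = (a - b + c - d) / 2.0          # horizontal detail
    HL = (a + b - c - d) / 2.0          # vertical detail
    HH = (a - b - c + d) / 2.0          # diagonal detail
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2.0
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2.0
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return out
```

With thresh = 0 the transform pair reconstructs the input exactly; a positive threshold shrinks the noise-dominated detail coefficients while the approximation band is left untouched.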

  11. SU-E-T-300: Dosimetric Comparision of 4D Radiation Therapy and 3D Radiation Therapy for the Liver Tumor Based On 4D Medical Image

    SciTech Connect

    Ma, C; Yin, Y

    2015-06-15

    Purpose: The purpose of this work was to determine the dosimetric benefit to normal tissues of tracking the liver tumor dose in four-dimensional radiation therapy (4DRT) on ten phases of four-dimensional computed tomography (4DCT) images. Methods: For ten liver cancer patients, target-tracking plans for each phase with the beam aperture were converted to a cumulative plan and compared to the 3D plan with a merged target volume based on the 4DCT image in the radiation treatment planning system (TPS). The change in normal tissue dose was evaluated in the plan by using the parameters V5, V10, V15, V20, V25, V30, V35 and V40 (volumes receiving 5, 10, 15, 20, 25, 30, 35 and 40 Gy, respectively) in the dose-volume histogram for the liver; the mean dose for the liver, left kidney and right kidney; and the maximum dose for the bowel, duodenum, esophagus, stomach and heart. Results: There was a significant difference between the 4D PTV (average 115.71 cm3) and the ITV (169.86 cm3). When the planning objective is 95% volume of the PTV covered by the prescription dose, the mean doses for the liver, left kidney and right kidney have an average decrease of 23.13%, 49.51%, and 54.38%, respectively. The maximum doses for the bowel, duodenum, esophagus, stomach and heart have an average decrease of 16.77%, 28.07%, 24.28%, 4.89%, and 4.45%, respectively. Compared to 3D RT, the irradiated liver volumes V5, V10, V15, V20, V25, V30, V35 and V40 in the 4D plans have a significant decrease (P≤0.05). Conclusion: The 4D planning method creates plans that permit better sparing of the normal structures than the commonly used ITV method, while delivering the same dosimetric effects to the target.
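    The V5..V40 dose-volume metrics quoted above have a simple definition: V_x is the percentage of the structure's volume receiving at least x Gy, computed from a voxelwise dose array. A small hypothetical helper (name and interface are illustrative):

```python
import numpy as np

def v_dose(dose, levels):
    """Dose-volume metrics: for each threshold x in levels (Gy), return
    the percentage of voxels with dose >= x, i.e. V_x from the DVH."""
    dose = np.asarray(dose, dtype=float).ravel()
    return {x: 100.0 * (dose >= x).mean() for x in levels}
```

Evaluating this on the planned dose grid restricted to the liver mask yields exactly the V5..V40 numbers compared between the 3D and 4D plans.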

  12. Image denoising using a directional adaptive diffusion filter

    NASA Astrophysics Data System (ADS)

    Zhao, Cuifang; Shi, Caicheng; He, Peikun

    2006-11-01

    Partial differential equation (PDE) methods are well known for their good processing results: they can not only smooth noise but also preserve edges. However, their shortcomings have also been noticed. In some sense, a PDE filter is called a "cartoon model" because it produces an approximation of the input image, applying the same diffusion model and parameters to noise and signal since it cannot differentiate between them; the image is therefore naturally modified toward piecewise constant functions. A new method called a directional adaptive diffusion filter is proposed in this paper, which combines the PDE model with the wavelet transform. The undecimated discrete wavelet transform (UDWT) is carried out to obtain different frequency bands, which have clear directional selectivity and more redundant detail. Experimental results show that the proposed method better preserves textures, small details and global information.
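    The baseline PDE ("cartoon model") behaviour discussed above is exemplified by classic Perona-Malik anisotropic diffusion, where an edge-stopping function g suppresses smoothing across strong gradients. This sketch (with illustrative kappa and step size, and periodic borders via np.roll) is a generic example of that class, not the paper's directional adaptive filter.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik diffusion: u += dt * sum_k g(d_k) * d_k over the four
    neighbour differences d_k, with g(d) = exp(-(d/kappa)^2) so that
    large gradients (edges) block the diffusion flux."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u; ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u; dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With dt <= 0.25 the explicit scheme is stable; noise in flat regions is smoothed while a strong step edge (difference >> kappa) is essentially untouched, which is the "piecewise constant" tendency the paper's wavelet subbands are meant to counteract for textures.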

  13. Infrared image denoising applied in infrared sound field measurement

    NASA Astrophysics Data System (ADS)

    Su, Zhiqiang; Shen, Guofeng

    2017-03-01

    This research exploited the heating property of focused ultrasound to explore the distribution of the focused ultrasound field. In our experiments, we measured the distribution of heat sources and then calculated the distribution of the focused ultrasound field via a linear relation. The experiments produced a series of noisy infrared images, so a way to remove the noise is needed to obtain an accurate field distribution. The investigation therefore focuses on finding a filter that removes most of the noise in the infrared images without distorting the ultrasound field distribution. Experiments compared the effects of different filters using the -6 dB width of the temperature-rise images as the index; with this index we can identify the filter that best preserves the distribution of the focused ultrasound field. All experiments, including simulations, semi-simulations and actual verification experiments, applied six filters to the raw data and evaluated the -6 dB width and signal-to-noise ratio. From the results, we conclude that the Gaussian filter best preserves the distribution of the focused ultrasound field.
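
    Since -6 dB corresponds to half the peak amplitude, the width index above can be measured directly on a smoothed profile. A minimal numpy-only sketch (the synthetic profile and filter width are hypothetical, not the paper's data):

```python
import numpy as np

def gaussian_smooth(y, sigma):
    """Smooth a 1D profile with a truncated, normalised Gaussian kernel."""
    r = int(4 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    return np.convolve(np.pad(y, r, mode="edge"), k, mode="valid")

def width_6db(profile, dx=1.0):
    """Width of the region where the profile stays above half (-6 dB) of its peak."""
    above = np.flatnonzero(profile >= profile.max() / 2.0)
    return (above[-1] - above[0]) * dx

x = np.linspace(-10, 10, 401)               # mm
rng = np.random.default_rng(1)
clean = np.exp(-x ** 2 / 4.0)               # idealised focal temperature rise
noisy = clean + 0.02 * rng.standard_normal(x.size)
smoothed = gaussian_smooth(noisy, sigma=3)  # sigma in samples
dx = x[1] - x[0]
```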

  14. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
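
    The Lucy–Richardson update used above is multiplicative. A minimal 1D sketch of the deconvolution step alone (without the wavelet denoising or the OSEM integration described in the paper; the PSF and signal are hypothetical):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """RL iteration: u <- u * (psf_flipped (*) (observed / (psf (*) u)))."""
    u = np.maximum(observed.astype(float), 1e-6)
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(u, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        u = u * np.convolve(ratio, psf_flip, mode="same")
    return u

# two point sources blurred by a normalised Gaussian PSF
psf = np.exp(-np.arange(-4, 5) ** 2 / 2.0); psf /= psf.sum()
clean = np.zeros(64); clean[30] = 1.0; clean[36] = 1.0
observed = np.convolve(clean, psf, mode="same")
restored = richardson_lucy(observed, psf)
```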

  15. The development of a population of 4D pediatric XCAT phantoms for imaging research and optimization

    SciTech Connect

    Segars, W. P. Norris, Hannah; Sturgeon, Gregory M.; Zhang, Yakun; Bond, Jason; Samei, E.; Minhas, Anum; Frush, D.; Tward, Daniel J.; Ratnanather, J. T.; Miller, M. I.

    2015-08-15

    Purpose: We previously developed a set of highly detailed 4D reference pediatric extended cardiac-torso (XCAT) phantoms at ages of newborn, 1, 5, 10, and 15 yr with organ and tissue masses matched to ICRP Publication 89 values. In this work, we extended this reference set to a series of 64 pediatric phantoms of varying age, height, and body mass percentiles, representative of the public at large. The models will provide a library of pediatric phantoms for optimizing pediatric imaging protocols. Methods: High resolution positron emission tomography-computed tomography data obtained from the Duke University database were reviewed by an experienced practicing radiologist for anatomic regularity. The CT portion of the data was then segmented with manual and semiautomatic methods to form a target model defined using nonuniform rational B-spline surfaces. A multichannel large deformation diffeomorphic metric mapping algorithm was used to calculate the transform from the best age-matching pediatric XCAT reference phantom to the patient target. The transform was used to complete the target, filling in the nonsegmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. The mass of each major tissue was calculated and compared to linearly interpolated ICRP values for different ages. Results: Sixty-four new pediatric phantoms were created in this manner. Each model contains the same level of detail as the original XCAT reference phantoms and also includes parameterized models for the cardiac and respiratory motions. For the phantoms that were 10 yr old and younger, we included both sets of reproductive organs, giving them the capability to simulate both male and female anatomy. With this, the population can be expanded to 92. Wide anatomical variation was clearly seen amongst the phantom models, both in organ shape and size, even for

  16. Long-Term Live Cell Imaging and Automated 4D Analysis of Drosophila Neuroblast Lineages

    PubMed Central

    Berger, Christian; Lendl, Thomas; Knoblich, Juergen A.

    2013-01-01

    The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain. PMID:24260257

  17. Adaptive kernel-based image denoising employing semi-parametric regularization.

    PubMed

    Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios

    2010-06-01

    The main contribution of this paper is the development of a novel approach, based on the theory of Reproducing Kernel Hilbert Spaces (RKHS), to the problem of noise removal in the spatial domain. The proposed methodology has the advantage that it can remove any kind of additive noise (impulse, Gaussian, uniform, etc.) from any digital image, in contrast to the most commonly used denoising techniques, which are noise dependent. The problem is cast as an optimization task in an RKHS by taking advantage of the celebrated Representer Theorem in its semi-parametric formulation. The semi-parametric formulation, although known in theory, has to our knowledge so far found limited application. In the image denoising problem, however, its use is dictated by the nature of the problem itself: the need for edge preservation naturally leads to such a modeling. Examples verify that in the presence of Gaussian noise the proposed methodology performs well compared to wavelet-based techniques and outperforms them significantly in the presence of impulse or mixed noise.

  18. Research on infrared-image denoising algorithm based on the noise analysis of the detector

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Zhou, Xiaodong; Shen, Tongsheng; Han, Yanli

    2005-01-01

    Since conventional denoising algorithms do not consider the characteristics of the specific detector, they are not very effective at removing the various noises contained in low signal-to-noise ratio infrared images. In this paper, a new approach to infrared image denoising is proposed, based on a noise analysis of the detector, using an L-model infrared multi-element detector as an example. According to the noise analysis of this detector, the emphasis is placed on how to filter white noise and fractal noise in the preprocessing phase. Wavelet analysis is a good tool for analyzing 1/f processes: a 1/f process can be viewed approximately as white noise in the wavelet domain, since its wavelet coefficients are stationary and uncorrelated. So if the wavelet transform is adopted, the problem of removing both white noise and fractal noise reduces to the single problem of removing white noise. To address this problem, a new wavelet-domain adaptive Wiener filtering algorithm is presented. From the viewpoint of quantitative and qualitative analysis, the filtering effect of our method is compared in detail with those of the traditional median filter, mean filter and wavelet thresholding algorithm. The results show that our method can reduce various noises effectively and noticeably raise the signal-to-noise ratio.
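
    A hedged, one-level Haar sketch of wavelet-domain Wiener-style shrinkage (a simplified stand-in for the paper's adaptive algorithm; the noise level is assumed known, and only the detail band is attenuated):

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return a, d

def haar_idwt(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wiener_shrink(x, noise_sigma):
    """Empirical Wiener gain on detail coefficients: d * s2 / (s2 + sigma^2)."""
    a, d = haar_dwt(x)
    s2 = np.maximum(d ** 2 - noise_sigma ** 2, 0.0)  # crude signal-power estimate
    return haar_idwt(a, d * s2 / (s2 + noise_sigma ** 2))

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 256)
clean = np.sin(2 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)
den = wiener_shrink(noisy, noise_sigma=0.1)
```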

  19. Using 4D Cardiovascular Magnetic Resonance Imaging to Validate Computational Fluid Dynamics: A Case Study.

    PubMed

    Biglino, Giovanni; Cosentino, Daria; Steeden, Jennifer A; De Nova, Lorenzo; Castelli, Matteo; Ntsinjana, Hopewell; Pennati, Giancarlo; Taylor, Andrew M; Schievano, Silvia

    2015-01-01

    Computational fluid dynamics (CFD) can have a complementary predictive role alongside the exquisite visualization capabilities of 4D cardiovascular magnetic resonance (CMR) imaging. In order to exploit these capabilities (e.g., for decision-making), it is necessary to validate computational models against real world data. In this study, we sought to acquire 4D CMR flow data in a controllable, experimental setup and use these data to validate a corresponding computational model. We applied this paradigm to a case of congenital heart disease, namely, transposition of the great arteries (TGA) repaired with arterial switch operation. For this purpose, a mock circulatory loop compatible with the CMR environment was constructed and two detailed aortic 3D models (i.e., one TGA case and one normal aortic anatomy) were tested under realistic hemodynamic conditions, acquiring 4D CMR flow. The same 3D domains were used for multi-scale CFD simulations, whereby the remainder of the mock circulatory system was appropriately summarized with a lumped parameter network. Boundary conditions of the simulations mirrored those measured in vitro. Results showed a very good quantitative agreement between experimental and computational models in terms of pressure (overall maximum % error = 4.4% aortic pressure in the control anatomy) and flow distribution data (overall maximum % error = 3.6% at the subclavian artery outlet of the TGA model). Very good qualitative agreement could also be appreciated in terms of streamlines, throughout the cardiac cycle. Additionally, velocity vectors in the ascending aorta revealed less symmetrical flow in the TGA model, which also exhibited higher wall shear stress in the anterior ascending aorta.

  1. Denoising of PET images by context modelling using local neighbourhood correlation

    NASA Astrophysics Data System (ADS)

    Huerga, Carlos; Castro, Pablo; Corredoira, Eva; Coronado, Monica; Delgado, Victor; Guibelalde, Eduardo

    2017-01-01

    Positron emission tomography (PET) images are characterised by low signal-to-noise ratio and blurred edges when compared with other image modalities. It is therefore advisable to use noise reduction methods for qualitative and quantitative analyses. Given the importance of the maximum and mean uptake values, it is necessary to avoid signal loss, which could modify the clinical significance. This paper proposes a method of non-linear image denoising for PET. It is based on spatially adaptive wavelet-shrinkage and uses context modelling, which explicitly considers the correlation between neighbouring pixels. This context modelling is able to maintain the uptake values and preserve the edges in significant regions. The algorithm is proposed as an alternative to the usual filtering that is performed after reconstruction.

  2. Enhancing a diffusion algorithm for 4D image segmentation using local information

    NASA Astrophysics Data System (ADS)

    Lösel, Philipp; Heuveline, Vincent

    2016-03-01

    Inspired by the diffusion of a particle, we present a novel approach for performing semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements at a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a specific position in the dataset at a certain time, or determine the probability approximately by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. When a great number of random walks is started in each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label from which the random walks most likely started. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted, slice-by-slice active contours method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is more accurate than the diffusion algorithm but requires many more tedious manual processing steps. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
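
    A minimal Monte-Carlo version of the idea (intensity-blind, i.e. without the locally weighted steps the paper uses; grid size, seed positions and walk counts are hypothetical):

```python
import numpy as np

def random_walk_labels(shape, seeds, n_walks=100, n_steps=200, seed=0):
    """Launch unweighted random walks from each labelled seed pixel and assign
    every pixel to the label whose walkers visited it most often."""
    rng = np.random.default_rng(seed)
    h, w = shape
    labels = sorted({lab for _, _, lab in seeds})
    counts = np.zeros((len(labels), h, w))
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for y0, x0, lab in seeds:
        li = labels.index(lab)
        for _ in range(n_walks):
            y, x = y0, x0
            for step in rng.integers(4, size=n_steps):
                dy, dx = moves[step]
                y = min(max(y + dy, 0), h - 1)   # clamp at the border
                x = min(max(x + dx, 0), w - 1)
                counts[li, y, x] += 1
    return np.array(labels)[np.argmax(counts, axis=0)]

# two seeds, two labels: pixels near each seed inherit its label
seg = random_walk_labels((32, 32), [(8, 8, 1), (24, 24, 2)])
```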

  3. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: A digital phantom study

    SciTech Connect

    Bernatowicz, K. Knopf, A.; Lomax, A.; Keall, P.; Kipritidis, J.; Mishra, P.

    2015-01-15

    Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results

  4. Anisotropic Nonlocal Means Denoising

    DTIC Science & Technology

    2011-11-26

    match the nuanced edges and textures of real-world images remains open, since we have considered only brutal binary images here. Finally, while NLM...computer vision. Denoising algorithms have evolved from the classical linear and median filters to more modern schemes like total variation denoising...underlying image gradients outperforms NLM by a significant margin.

  5. Enhancing ejection fraction measurement through 4D respiratory motion compensation in cardiac PET imaging.

    PubMed

    Tang, Jing; Wang, Xinhui; Gao, Xiangzhen; Segars, Paul; Lodge, Martin; Rahmim, Arman

    2017-03-02

    ECG-gated cardiac PET imaging measures functional parameters such as left ventricle (LV) ejection fraction (EF), providing diagnostic and prognostic information for management of patients with coronary artery disease (CAD). Respiratory motion degrades spatial resolution and affects the accuracy of measuring the LV volumes for EF calculation. The goal of this study is to systematically investigate the effect of respiratory motion correction on the estimation of end-diastolic volume (EDV), end-systolic volume (ESV), and EF, especially on the separation of normal and abnormal EFs. We developed a respiratory motion incorporated 4D PET image reconstruction technique which uses all gated-frame data to acquire a motion-suppressed image. Using the standard XCAT phantom and two individual-specific volunteer XCAT phantoms, we simulated dual-gated myocardial perfusion imaging data for normally and abnormally beating hearts. With and without respiratory motion correction, we measured the EDV, ESV, and EF from the cardiac gated reconstructed images. For all the phantoms, the estimated volumes increased and the biases were significantly reduced with motion correction compared to without. Furthermore, the improvement in ESV measurement in the abnormally beating heart led to better separation of normal and abnormal EFs. The simulation study demonstrated the significant effect of respiratory motion correction on cardiac imaging data with motion amplitudes as small as 0.7 cm; the larger the motion amplitude, the greater the improvement motion correction brought to the EF measurement. Using data-driven respiratory gating, we also demonstrated the effect of respiratory motion correction on the estimation of the above functional parameters from list-mode patient data. Respiratory motion correction is shown to improve the accuracy of EF measurement in clinical cardiac PET imaging.
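
    The EF itself is a simple ratio of the two gated volumes; as a quick reference (the example values are hypothetical):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# e.g. EDV = 120 mL, ESV = 50 mL  ->  EF ~ 58.3%
```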

  6. Gaussian mixture model-based gradient field reconstruction for infrared image detail enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng

    2016-05-01

    Infrared images are characterized by low signal-to-noise ratio and low contrast, so edge details are easily submerged in the background and noise, making it difficult to enhance edge details while denoising. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is adopted to fit the distribution of the gradient histogram, dividing the image information into three parts corresponding to faint details, noise, and the edges of clear targets, respectively. Then, a piecewise function is constructed, based on the characteristics of the image, to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added while reconstructing the enhanced image from the transformed gradient field, to further suppress noise. The experimental results show that, compared with existing methods, the proposed method effectively enhances infrared image edge details while suppressing noise. In addition, it can be used to enhance other types of images, such as visible and medical images.
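
    The piecewise gradient mapping can be sketched as below; the band boundaries `t1`, `t2` and the gains are hypothetical placeholders for values the paper derives from the Gaussian mixture fit, and the assignment of bands (weakest as noise, middle as faint detail) is purely illustrative:

```python
import numpy as np

def remap_gradient(grad_mag, t1=0.02, t2=0.10, suppress=0.3, boost=2.0):
    """Piecewise gain on gradient magnitudes: damp the weakest band (treated
    as noise here), amplify the middle band (faint details), and keep strong
    edges unchanged."""
    out = grad_mag.astype(float)
    out[grad_mag < t1] *= suppress
    out[(grad_mag >= t1) & (grad_mag < t2)] *= boost
    return out
```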

  7. brainR: Interactive 3 and 4D Images of High Resolution Neuroimage Data

    PubMed Central

    Muschelli, John; Sweeney, Elizabeth; Crainiceanu, Ciprian

    2016-01-01

    We provide software tools for displaying and publishing interactive 3-dimensional (3D) and 4-dimensional (4D) figures to html webpages, with examples of high-resolution brain imaging. Our framework is based on the R statistical software using the rgl package, a 3D graphics library. We build on this package to allow manipulation of figures, including rotation and translation, zooming, coloring of brain substructures, adjusting transparency levels, and addition or removal of brain structures. The need for better visualization tools for ultra-high-dimensional data is ever present; we provide a clean, simple, web-based option. We also provide a package (brainR) for users to readily implement these tools. PMID:27330829

  8. A patient specific 4D MRI liver motion model based on sparse imaging and registration

    NASA Astrophysics Data System (ADS)

    Noorda, Y. H.; Bartels, L. W.; van Stralen, Marijn; Pluim, J. P. W.

    2013-03-01

    Introduction: Image-guided minimally invasive procedures are becoming increasingly popular. Currently, High-Intensity Focused Ultrasound (HIFU) treatment of lesions in mobile organs, such as the liver, is in development. A requirement for such treatment is automatic motion tracking, such that the position of the lesion can be followed in real time. We propose a 4D liver motion model, which can be used during planning of this procedure. During treatment, the model can serve as a motion predictor. In a similar fashion, this model could be used for radiotherapy treatment of the liver. Method: The model is built by acquiring 2D dynamic sagittal MRI data at six locations in the liver. By registering these dynamics to a 3D MRI liver image, 2D deformation fields are obtained at every location. The 2D fields are ordered according to the position of the liver at that specific time point, such that liver motion during an average breathing period can be simulated. This way, a sparse deformation field is created over time. This deformation field is finally interpolated over the entire volume, yielding a 4D motion model. Results: The accuracy of the model is evaluated by comparing unseen slices to the slice predicted by the model at that specific location and phase in the breathing cycle. The mean Dice coefficient of the liver regions was 0.90. The mean misalignment of the vessels was 1.9 mm. Conclusion: The model is able to predict patient specific deformations of the liver and can predict regular motion accurately.

  9. SU-C-9A-06: The Impact of CT Image Used for Attenuation Correction in 4D-PET

    SciTech Connect

    Cui, Y; Bowsher, J; Yan, S; Cai, J; Das, S; Yin, F

    2014-06-01

    Purpose: To evaluate the appropriateness of using 3D non-gated CT images for attenuation correction (AC) in a 4D-PET (gated PET) imaging protocol used in radiotherapy treatment planning simulation. Methods: The 4D-PET imaging protocol in a Siemens PET/CT simulator (Biograph mCT, Siemens Medical Solutions, Hoffman Estates, IL) was evaluated. A CIRS Dynamic Thorax Phantom (CIRS Inc., Norfolk, VA) with a moving glass sphere (8 mL) in the middle of its thorax portion was used in the experiments. The glass sphere was filled with ¹⁸F-FDG and underwent longitudinal motion derived from a real patient breathing pattern. The Varian RPM system (Varian Medical Systems, Palo Alto, CA) was used for respiratory gating. Both phase-gating and amplitude-gating methods were tested. The clinical imaging protocol was modified to use three different CT images for AC in 4D-PET reconstruction: the first uses a single-phase CT image to mimic the actual clinical protocol (single-CT-PET); the second uses the average intensity projection CT (AveIP-CT) derived from 4D-CT scanning (AveIP-CT-PET); the third uses the 4D-CT image to perform phase-matched AC (phase-matching-PET). The maximum SUV (SUVmax) and the volume of the moving target (glass sphere) at a threshold of 40% SUVmax were calculated for comparison between 4D-PET images derived with the different AC methods. Results: The SUVmax varied 7.3%±6.9% over the breathing cycle in single-CT-PET, compared to 2.5%±2.8% in AveIP-CT-PET and 1.3%±1.2% in phase-matching PET. The SUVmax in single-CT-PET differed by up to 15% from that in phase-matching-PET. The target volumes measured from single-CT-PET images also showed variations of up to 10% among the different phases of 4D-PET in both phase-gating and amplitude-gating experiments. Conclusion: Attenuation correction using non-gated CT in 4D-PET imaging is not an optimal process for quantitative analysis. Clinical 4D-PET imaging protocols should use a phase-matched 4D-CT image, if available, to achieve better accuracy.
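
    The SUVmax and 40%-of-SUVmax target volume used for comparison are direct image statistics; a hedged sketch (voxel size and uptake values hypothetical):

```python
import numpy as np

def suv_metrics(suv, voxel_ml, threshold_frac=0.40):
    """SUVmax and the volume enclosed by a fixed fraction-of-max threshold."""
    suv_max = float(suv.max())
    volume_ml = float(np.count_nonzero(suv >= threshold_frac * suv_max) * voxel_ml)
    return suv_max, volume_ml

# hypothetical 3D uptake map: background 0.5, hot 4x4x4-voxel block at 8.0
img = np.full((20, 20, 20), 0.5)
img[8:12, 8:12, 8:12] = 8.0
smax, vol = suv_metrics(img, voxel_ml=0.064)   # e.g. 4x4x4 mm voxels
```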

  10. A global approach for solving evolutive heat transfer for image denoising and inpainting.

    PubMed

    Auclair-Fortier, Marie-Flavie; Ziou, Djemel

    2006-09-01

    This paper proposes an alternative to partial differential equations (PDEs) for solving problems in computer vision based on evolutive heat transfer. Traditionally, the method for solving such physics-based problems is to discretize and solve a PDE by a purely mathematical process. Instead of using the PDE, we propose to use the global heat principle and to decompose it into basic laws. We show that some of these laws admit an exact global version, since they arise from conservative principles. We also show that the assumptions made about the other basic laws can be made wisely, taking into account knowledge about the problem and the domain. The numerical scheme is derived in a straightforward way from the modeled problem, thus providing a physical explanation for each step in the solution. The advantage of such an approach is that it minimizes the approximations made during the whole process and modularizes it, allowing the method to be adapted to a great number of problems. We apply the scheme to two applications modeled with heat transfer: image denoising and inpainting. For denoising, we propose a new approximation of the conductivity coefficient, and we add thin lines to the features in order to block diffusion.
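
    For the inpainting application, the heat-transfer view amounts to relaxing unknown pixels toward the average of their neighbours while known pixels act as boundary data. A minimal sketch (a plain Jacobi relaxation, not the paper's global conservative scheme):

```python
import numpy as np

def diffuse_inpaint(img, mask, n_iter=500):
    """Fill masked pixels by iterating the discrete heat equation: each hole
    pixel relaxes toward the average of its 4 neighbours, while known pixels
    stay fixed (Dirichlet data)."""
    u = img.astype(float)
    u[mask] = img[~mask].mean()          # crude initialisation of the hole
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        u[mask] = avg[mask]              # update only the unknown pixels
    return u
```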

  11. 3D MR image denoising using rough set and kernel PCA method.

    PubMed

    Phophalia, Ashish; Mitra, Suman K

    2017-02-01

    In this paper, we present a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique is used for voxel-based processing. The method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is represented by a basis vector. These vectors are then projected into a kernel space, and PCA is performed in the feature space. This work is motivated by the idea that, under Rician noise, MRI data may be non-linear, and kernel mapping helps to define a linear separator between these clusters/basis vectors, which is then used for image denoising. We further investigated various kernels for Rician noise at different noise levels. The best kernel is selected on the basis of performance in terms of PSNR and structural similarity (SSIM) measures. The work has been compared with state-of-the-art methods under various measures for synthetic and real databases.

  12. Astronomical image denoising by means of improved adaptive backtracking-based matching pursuit algorithm.

    PubMed

    Liu, Qianshun; Bai, Jian; Yu, Feihong

    2014-11-10

    In an effort to improve compressive sensing and sparse signal reconstruction over the backtracking-based adaptive orthogonal matching pursuit (BAOMP), a new sparse coding algorithm called improved adaptive backtracking-based OMP (IABOMP) is proposed in this study. Many aspects are improved compared to the original BAOMP method, including replacing the fixed threshold with an adaptive one and adding residual feedback and support-set verification, among others. Because of these improvements, the proposed algorithm chooses atoms more precisely. By adding an adaptive step-size mechanism, it requires far fewer iterations and thus executes more efficiently. Additionally, a simple but effective contrast enhancement method is adopted to further improve the denoising results and visual effect. By combining the IABOMP algorithm with the state-of-the-art dictionary learning algorithm K-SVD, the proposed algorithm achieves better denoising effects for astronomical images. Numerous experimental results show that the proposed algorithm performs successfully and effectively on Gaussian and Poisson noise removal.
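
    The greedy atom-selection core that BAOMP and the proposed IABOMP build on is plain orthogonal matching pursuit. A minimal sketch (using an orthonormal dictionary so the 2-sparse recovery in the demo is guaranteed; the adaptive thresholding, backtracking and residual feedback of the paper are omitted):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: repeatedly pick the atom most correlated
    with the residual, then re-fit all chosen atoms jointly by least squares."""
    residual = y.astype(float)
    support = []
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))   # orthonormal dictionary
x_true = np.zeros(32); x_true[5] = 1.5; x_true[20] = -2.0
y = Q @ x_true
x_hat = omp(Q, y, n_nonzero=2)
```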

  13. Ultrasound imaging and characterization of biofilms based on wavelet de-noised radiofrequency data.

    PubMed

    Vaidya, Kunal; Osgood, Robert; Ren, Dabin; Pichichero, Michael E; Helguera, María

    2014-03-01

    The ability to non-invasively image and characterize bacterial biofilms in children during nasopharyngeal colonization with potential otopathogens and during acute otitis media would represent a significant advance. We sought to determine if quantitative high-frequency ultrasound techniques could be used to achieve that goal. Systematic time studies of bacterial biofilm formation were performed on three preparations: an isolated Haemophilus influenzae (NTHi) strain, a Streptococcus pneumoniae (Sp) strain and a combination of H. influenzae and S. pneumoniae (NTHi + Sp), in an in vitro environment. The process of characterization included conditioning of the radiofrequency data acquired with a 15-MHz focused piston transducer by using a seven-level wavelet decomposition scheme to de-noise the individual A-lines. All subsequent spectral parameter estimations were done on the wavelet de-noised radiofrequency data. Various spectral parameters (peak frequency shift, bandwidth reduction and integrated backscatter coefficient) were recorded. These parameters were successfully used to map the progression of the biofilms in time and to differentiate between single- and multiple-species biofilms. Results were compared with those for confocal microscopy and a theoretical evaluation of the form factor. We conclude that high-frequency ultrasound may prove a useful modality to detect and characterize bacterial biofilms in humans as they form on tissues and plastic materials.
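The per-A-line wavelet de-noising step can be illustrated with a simple multi-level Haar decomposition plus soft thresholding (a minimal sketch, assuming numpy; the paper uses a seven-level decomposition with a different wavelet, so the level count, threshold and wavelet here are illustrative):

```python
import numpy as np

def haar_denoise(signal, threshold, levels=3):
    """Multi-level orthonormal Haar wavelet decomposition of a 1-D line,
    soft thresholding of the detail coefficients, and reconstruction.
    Signal length must be divisible by 2**levels."""
    x = signal.astype(float)
    details = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        # soft-threshold the detail coefficients
        detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
        details.append(detail)
        x = approx
    for detail in reversed(details):
        out = np.empty(2 * x.size)
        out[0::2] = (x + detail) / np.sqrt(2.0)
        out[1::2] = (x - detail) / np.sqrt(2.0)
        x = out
    return x
```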

  14. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property underlying T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative deconvolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from each 3D phase volume, performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
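The role of TV regularization can be illustrated in miniature with plain gradient descent on a smoothed TV denoising objective (a sketch under simplifying assumptions; the patent's CIMRI scheme solves a dipole deconvolution problem with the faster split Bregman iteration, neither of which is shown here, and all parameter values are illustrative):

```python
import numpy as np

def tv_denoise(img, lam=0.2, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on 0.5*||u - img||^2 + lam*TV(u), with the TV
    term smoothed by eps to keep it differentiable. Periodic boundary
    handling via np.roll, for brevity."""
    u = img.astype(float).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the normalized gradient field
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u
```

TV regularization favors piecewise-constant solutions, which is why it suppresses noise while keeping edges.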

  15. Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting

    NASA Astrophysics Data System (ADS)

    Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang

    2016-12-01

    Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method; furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Secondly, the stick tensor voting algorithm is employed to estimate the correct direction along the vessel. The estimated direction along the vessel is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located at the vessel wall consistent with the vessel directions and enhances the tubular features of vessels. Finally, vessels can be extracted from the enhanced image by applying a level-set segmentation method. A number of experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.
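The Hessian-based vesselness filtering stage can be sketched in 2-D as follows (assuming numpy; the parameter values beta and c are illustrative, and the paper's stick-tensor-voting refinement of the vessel direction is not included):

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=0.5):
    """Frangi-style vesselness from the eigenvalues of the 2x2 Hessian:
    high where one eigenvalue is strongly negative (bright ridge) and
    the other is near zero (elongated structure)."""
    iy, ix = np.gradient(img)
    iyy, iyx = np.gradient(iy)
    ixy, ixx = np.gradient(ix)
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
    l1 = 0.5 * (ixx + iyy + tmp)
    l2 = 0.5 * (ixx + iyy - tmp)
    # sort so that |l1| <= |l2|
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blobness (low on lines)
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structure
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)          # keep bright vessels only
```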

  16. Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.

    PubMed

    Nguyen, Minh Phuong; Chun, Se Young

    2017-04-01

    A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. NLM filters have been shown to deliver powerful denoising performance and excellent detail preservation by averaging many noisy pixels with appropriately chosen weights. The NLM weights between two different pixels are determined by the similarity between the two patches that surround these pixels and a smoothing parameter. Another important factor influencing the denoising performance is the self-weight value for the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods in determining the contribution of the center pixels in the NLM filter. However, the LJS method may result in excessively large self-weight estimates since no upper bound is assumed, and it uses a relatively large local area for estimating the self-weights, which may lead to a strong bias. In this paper, we investigate these issues in the LJS method and then propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP), based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image, together with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts when compared with the results of the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that can yield PSNR values close to the optimal ones.
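A minimal pixelwise NLM with a capped self-weight looks as follows (a numpy sketch; the fixed cap stands in for the paper's minimax LMM-DB/LMM-RP estimators, and all parameter values are illustrative):

```python
import numpy as np

def nlm_denoise(img, h=0.15, patch=3, search=7, center_cap=1.0):
    """Non-local means: each output pixel is a weighted average over a
    search window, with weights from patch similarity. The self-weight
    (zero-distance patch) is explicitly bounded by center_cap."""
    pad = patch // 2
    spad = search // 2
    padded = np.pad(img, pad + spad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + spad, j + spad  # patch top-left in padded image
            ref = padded[ci:ci + patch, cj:cj + patch]
            num = den = 0.0
            for di in range(-spad, spad + 1):
                for dj in range(-spad, spad + 1):
                    cand = padded[ci + di:ci + di + patch,
                                  cj + dj:cj + dj + patch]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h ** 2)
                    if di == 0 and dj == 0:
                        w = min(w, center_cap)  # bounded self-weight
                    num += w * padded[ci + pad + di, cj + pad + dj]
                    den += w
            out[i, j] = num / den
    return out
```

Without a bound, the self-weight (always the maximal patch similarity) can dominate the average and leave the pixel essentially unfiltered, which is the issue the paper addresses.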

  17. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric (E-mail: cedric.messaoudi@curie.fr); Marco, Sergio

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from collisions between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation for a long exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure time, which gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different SNR values) and equipped with gold beads to assist the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most suitable parameter. For the bilateral filter, many tests were done in order to determine the proper filter parameters, represented by the size of the filter, the range parameter and the
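The soft and hard wavelet thresholding rules referred to above are, in a minimal numpy form:

```python
import numpy as np

def soft_threshold(w, t):
    """Soft thresholding: shrink every coefficient toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    """Hard thresholding: zero coefficients below t, keep the rest."""
    return np.where(np.abs(w) >= t, w, 0.0)
```

Soft thresholding also shrinks the surviving coefficients, giving smoother results; hard thresholding keeps them intact, preserving contrast at the cost of occasional artifacts.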

  18. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    NASA Astrophysics Data System (ADS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from collisions between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation for a long exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure time, which gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different SNR values) and equipped with gold beads to assist the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most suitable parameter. For the bilateral filter, many tests were done in order to determine the proper filter parameters, represented by the size of the filter, the range parameter and the

  19. Adapting non-local means of de-noising in intraoperative magnetic resonance imaging for brain tumor surgery.

    PubMed

    Mizukuchi, Takashi; Fujii, Masazumi; Hayashi, Yuichiro; Tsuzaka, Masatoshi

    2014-01-01

    In image-guided brain tumor surgery, intraoperative magnetic resonance imaging (iMRI) is a powerful tool for updating navigational information after brain shift, controlling the resection of brain tumors, and evaluating intraoperative complications. Low-field iMRI scans occasionally generate a lot of noise, the reason for which is yet to be determined, and this noise adversely affects the neurosurgeons' interpretations. In this study, in order to improve the image quality of iMR images, we optimized and adapted an unbiased non-local means (UNLM) filter to iMR images. This noise appears to occur at a specific frequency-encoding band. In order to adapt the UNLM filter to the noise, we improved the UNLM so that de-noising can be performed at the different noise levels that occur at different frequency-encoding bands. As a result, clinical iMR images can be de-noised adequately while preserving crucial information, such as edges. The UNLM filter preserved the edges more clearly than other classical filters, including an anisotropic diffusion filter. In addition, UNLM de-noising can improve the signal-to-noise ratio of clinical iMR images by more than 2 times (p < 0.01). Although the computational time of UNLM processing is very long, post-processing of UNLM filter images, for which the parameters were optimized, can be performed during other MRI scans. Therefore, the UNLM filter was more effective than increasing the number of signal averages: the iMR image quality was improved without extending the MR scanning time. UNLM de-noising in post-processing is expected to improve the diagnosability of low-field iMR images.

  20. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained.

  1. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. PMID:23117804

  2. Automatic landmark generation for deformable image registration evaluation for 4D CT images of lung

    NASA Astrophysics Data System (ADS)

    Vickress, J.; Battista, J.; Barnett, R.; Morgan, J.; Yartsev, S.

    2016-10-01

    Deformable image registration (DIR) has become a common tool in medical imaging across both diagnostic and treatment specialties, but the methods used offer varying levels of accuracy. Evaluation of DIR is commonly performed using manually selected landmarks, which is subjective, tedious and time consuming. We propose a semi-automated method that saves time and provides accuracy comparable to manual selection. Three landmarking methods, including manual (with two independent observers), scale invariant feature transform (SIFT), and SIFT with manual editing (SIFT-M), were tested on 10 thoracic 4DCT image studies corresponding to the 0% and 50% phases of respiration. Results of each method were evaluated against a gold standard (GS) landmark set comparing both mean and proximal landmark displacements. The proximal method compares the local deformation magnitude between a test landmark pair and the closest GS pair. Statistical analysis was done using an intraclass correlation (ICC) between test and GS displacement values. The creation time per landmark pair was 22, 34, 2.3, and 4.3 s for observers 1 and 2, SIFT, and SIFT-M methods respectively. Across 20 lungs from the 10 CT studies, the ICC values between the GS and observer 1 and 2, SIFT, and SIFT-M methods were 0.85, 0.85, 0.84, and 0.82 for mean lung deformation, and 0.97, 0.98, 0.91, and 0.96 for proximal landmark deformation, respectively. SIFT and SIFT-M methods have an accuracy that is comparable to manual methods when tested against a GS landmark set while saving 90% of the time. The number and distribution of landmarks significantly affected the analysis, as manifested by the different results for the mean deformation and proximal landmark deformation methods. Automatic landmark methods offer a promising alternative to manual landmarking if the quantity, quality and distribution of landmarks can be optimized for the intended application.

  3. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T, along with corresponding EPT simulations on finite-difference time-domain models, and evaluated the EPT images by comparing them with those obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T.
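The central-difference Laplacian applied to the denoised maps can be sketched as follows (assuming numpy, a 2-D map, and unit grid spacing by default):

```python
import numpy as np

def laplacian_central(f, dx=1.0):
    """Discrete Laplacian of a 2-D field by central differences,
    computed at interior points only (boundary left at zero)."""
    lap = np.zeros_like(f, dtype=float)
    lap[1:-1, 1:-1] = (
        f[2:, 1:-1] + f[:-2, 1:-1] +
        f[1:-1, 2:] + f[1:-1, :-2] -
        4.0 * f[1:-1, 1:-1]
    ) / dx ** 2
    return lap
```

Because this stencil differences neighboring samples, pixel noise is amplified roughly fourfold, which is exactly why the maps are denoised first.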

  4. A deformable phantom for 4D radiotherapy verification: Design and image registration evaluation

    SciTech Connect

    Serban, Monica; Heath, Emily; Stroian, Gabriela; Collins, D. Louis; Seuntjens, Jan

    2008-03-15

    peak inhale. The SI displacement of the landmarks varied between 94% and 3% of the piston excursion for positions closer and farther away from the piston, respectively. The reproducibility of the phantom deformation was within the image resolution (0.7 × 0.7 × 1.25 mm³). Vector average registration accuracy based on point landmarks was found to be 0.5 (0.4 SD) mm. The tumor and lung mean 3D DTA obtained from triangulated surfaces were 0.4 (0.1 SD) mm and 1.0 (0.8 SD) mm, respectively. This phantom is capable of reproducibly emulating physically realistic lung features and deformations and has a wide range of potential applications, including four-dimensional (4D) imaging, evaluation of deformable registration accuracy, 4D planning and dose delivery.

  5. A deformable phantom for 4D radiotherapy verification: design and image registration evaluation.

    PubMed

    Serban, Monica; Heath, Emily; Stroian, Gabriela; Collins, D Louis; Seuntjens, Jan

    2008-03-01

    The SI displacement of the landmarks varied between 94% and 3% of the piston excursion for positions closer and farther away from the piston, respectively. The reproducibility of the phantom deformation was within the image resolution (0.7 × 0.7 × 1.25 mm³). Vector average registration accuracy based on point landmarks was found to be 0.5 (0.4 SD) mm. The tumor and lung mean 3D DTA obtained from triangulated surfaces were 0.4 (0.1 SD) mm and 1.0 (0.8 SD) mm, respectively. This phantom is capable of reproducibly emulating physically realistic lung features and deformations and has a wide range of potential applications, including four-dimensional (4D) imaging, evaluation of deformable registration accuracy, 4D planning and dose delivery.

  6. A Comparison of PDE-based Non-Linear Anisotropic Diffusion Techniques for Image Denoising

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2003-01-06

    PDE-based, non-linear diffusion techniques are an effective way to denoise images. In a previous study, we investigated the effects of different parameters in the implementation of isotropic, non-linear diffusion. Using synthetic and real images, we showed that for images corrupted with additive Gaussian noise, such methods are quite effective, leading to lower mean-squared-error values in comparison with spatial filters and wavelet-based approaches. In this paper, we extend this work to include anisotropic diffusion, where the diffusivity is a tensor-valued function which can be adapted to local edge orientation. This allows smoothing along the edges, but not perpendicular to them. We consider several anisotropic diffusivity functions as well as approaches for discretizing the diffusion operator that minimize mesh orientation effects. We investigate how these tensor-valued diffusivity functions compare in image quality, ease of use, and computational costs relative to simple spatial filters, the more complex bilateral filters, wavelet-based methods, and isotropic non-linear diffusion based techniques.
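The isotropic non-linear diffusion baseline that this work extends can be sketched with the classic Perona-Malik update (assuming numpy; the parameter values are illustrative, and the tensor-valued anisotropic variants the paper studies are not shown):

```python
import numpy as np

def perona_malik(img, iters=20, kappa=0.2, step=0.2):
    """Explicit Perona-Malik diffusion: smooth where gradients are small
    (noise), stop diffusing across large gradients (edges)."""
    u = img.astype(float).copy()

    def g(d):
        # edge-stopping diffusivity g(|grad u|) = exp(-(|d|/kappa)^2)
        return np.exp(-(d / kappa) ** 2)

    for _ in range(iters):
        # forward differences to the four neighbours (periodic via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

kappa acts as the contrast threshold separating "noise" gradients from "edge" gradients; step must stay at or below 0.25 for stability of the explicit scheme.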

  7. Comparison of PDE-based non-linear anisotropic diffusion techniques for image denoising

    NASA Astrophysics Data System (ADS)

    Weeratunga, Sisira K.; Kamath, Chandrika

    2003-05-01

    PDE-based, non-linear diffusion techniques are an effective way to denoise images. In a previous study, we investigated the effects of different parameters in the implementation of isotropic, non-linear diffusion. Using synthetic and real images, we showed that for images corrupted with additive Gaussian noise, such methods are quite effective, leading to lower mean-squared-error values in comparison with spatial filters and wavelet-based approaches. In this paper, we extend this work to include anisotropic diffusion, where the diffusivity is a tensor-valued function which can be adapted to local edge orientation. This allows smoothing along the edges, but not perpendicular to them. We consider several anisotropic diffusivity functions as well as approaches for discretizing the diffusion operator that minimize mesh orientation effects. We investigate how these tensor-valued diffusivity functions compare in image quality, ease of use, and computational costs relative to simple spatial filters, the more complex bilateral filters, wavelet-based methods, and isotropic non-linear diffusion based techniques.

  8. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
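Step (i), adaptive Wiener filtering, can be sketched as a local-statistics filter (assuming numpy; the window size and the way the noise variance is supplied are illustrative assumptions, not the authors' exact filter):

```python
import numpy as np

def adaptive_wiener(img, noise_var, win=3):
    """Locally adaptive Wiener filter: shrink each pixel toward the
    local mean, with a gain that grows with the estimated local
    signal variance relative to the noise variance."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    # local mean and variance via sliding windows
    windows = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    mu = windows.mean(axis=(-2, -1))
    var = windows.var(axis=(-2, -1))
    # gain in [0, 1): 0 in flat (pure-noise) areas, near 1 on structure
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, noise_var)
    return mu + gain * (img - mu)
```

In flat regions the gain collapses to zero and the filter averages aggressively; near edges the local variance exceeds the noise variance and the original detail is passed through.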

  9. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry.

    PubMed

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-21

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.

  10. Application of 4D resistivity image profiling to detect DNAPLs plume.

    NASA Astrophysics Data System (ADS)

    Liu, H.; Yang, C.; Tsai, Y.

    2008-12-01

    In July 1993, the soil and groundwater at a factory in Miaoli, Taiwan, were found to be contaminated by dichloroethane, chlorobenzene and other hazardous solvents. The contaminants were dense non-aqueous phase liquids (DNAPLs). The contaminated site was neglected for the following years until May 1998, when the Environment Protection Agency of Miaoli ordered the company to take immediate action to treat the contaminated site. The contaminated soil at the former waste-DNAPL dumping area was excavated and exposed. In addition, more than 53 wells were drilled around the pool, with a maximum depth of 12 m, where a clayey layer was found. Continuous pumping of the groundwater and monitoring of the concentration of residual DNAPL in the well water samples were carried out at different stages of remediation. However, it is suspected that because the DNAPL existed for a long time, the contaminants may have been diluted, while remnants of a DNAPL plume that are toxic to humans still remain in the soil and migrate to deeper aquifers. The former contaminated site was investigated using 2D, 3D and 4D resistivity imaging techniques, with the aim of determining the buried contaminant geometry. This paper emphasizes the use of the resistivity image profiling (RIP) method to map the limits of this DNAPL waste disposal site, for which operational records are not available. A significant change in resistivity values was detected between known polluted and non-polluted subsurface; a high resistivity value implies that the subsurface was contaminated by the DNAPL plume. The results of the survey provide insight into the sensitivity of the RIP method for detecting DNAPL plumes within the shallow subsurface, and help to provide valuable information for monitoring the possible past migration path of the DNAPL plume. According to former studies at this site, remediation by excavation combined with groundwater pumping took a very long time; therefore this research was used

  11. Parametric surface denoising

    NASA Astrophysics Data System (ADS)

    Kakadiaris, Ioannis A.; Konstantinidis, Ioannis; Papadakis, Manos; Ding, Wei; Shen, Lixin

    2005-08-01

    Three dimensional (3D) surfaces can be sampled parametrically in the form of range image data. Smoothing/denoising of such raw data is usually accomplished by adapting techniques developed for intensity image processing, since both range and intensity images comprise parametrically sampled geometry and appearance measurements, respectively. We present a transform-based algorithm for surface denoising, motivated by our previous work on intensity image denoising, which utilizes a non-separable Parseval frame and an ensemble thresholding scheme. The frame is constructed from separable (tensor) products of a piecewise linear spline tight frame and incorporates the weighted average operator and the Sobel operators in directions that are integer multiples of 45°. We compare the performance of this algorithm with other transform-based methods from the recent literature. Our results indicate that such transform methods are suited to the task of smoothing range images.

  12. A novel method for image denoising of fluorescence molecular imaging based on fuzzy C-Means clustering

    NASA Astrophysics Data System (ADS)

    An, Yu; Liu, Jie; Ye, Jinzuo; Mao, Yamin; Yang, Xin; Jiang, Shixin; Chi, Chongwei; Tian, Jie

    2015-03-01

    As an important molecular imaging modality, fluorescence molecular imaging (FMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with a fluorophore, FMI can noninvasively obtain the distribution of the fluorophore in vivo. However, because the fluorescence spectrum lies within the visible light range, there is substantial autofluorescence on the surface of the bio-tissues, which is a major disturbing factor in FMI. Meanwhile, the high level of dark current of charge-coupled device (CCD) cameras and other influencing factors can also produce a lot of background noise. In this paper, a novel method for image denoising of FMI based on fuzzy C-means clustering (FCM) is proposed, exploiting the fact that the fluorescent signal is the major component of the fluorescence images, while the intensity of autofluorescence and other background signals is relatively lower. First, the fluorescence image is smoothed by sliding-neighborhood operations to initially eliminate the noise. Then, the wavelet transform (WLT) is performed on the fluorescence images to obtain the major component of the fluorescent signals. After that, the FCM method is adopted to separate the major component and the background of the fluorescence images. Finally, the proposed method was validated using the original data obtained from an in vivo implanted-fluorophore experiment, and the results show that the proposed method can effectively retain the fluorescence signal while eliminating the background noise, which increases the quality of fluorescence images.
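The FCM separation step can be sketched for scalar intensities as follows (assuming numpy; the quantile-based initialization, fuzziness m and iteration count are illustrative choices, not the paper's settings):

```python
import numpy as np

def fuzzy_cmeans_1d(x, k=2, m=2.0, iters=50):
    """Fuzzy C-means on scalar intensities: returns cluster centers and
    the (k x n) membership matrix, so bright fluorescence pixels can be
    separated from dimmer background pixels."""
    # initialize centers at evenly spaced quantiles of the data
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # standard FCM membership update: u_ij ~ d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=0)
        # center update: membership-weighted means
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    return centers, u
```

Thresholding the membership of the brightest cluster then yields the signal/background split used to suppress the background.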

  13. 4D laser camera for accurate patient positioning, collision avoidance, image fusion and adaptive approaches during diagnostic and therapeutic procedures.

    PubMed

    Brahme, Anders; Nyman, Peter; Skatt, Björn

    2008-05-01

    A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan shaped laser beam with the surface of the patient and allows real time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D +time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid body repositioning accuracy is about 0.5 mm for displacements below 20 mm, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow

  14. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials.

    PubMed

    Ithapul, Vamsi K; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C

    2014-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently, several authors investigated how clinical trials for AD can be made more efficient (i.e., require a smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime--the default situation in medical imaging. This result is of independent interest.
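
    As a rough, self-contained illustration of the denoising-autoencoder building block that the rDA model is assembled from (not the paper's randomized ensemble, its regression on labels, or its SVM comparison), a single-hidden-layer autoencoder can be trained in NumPy to reconstruct clean vectors from corrupted inputs:

```python
import numpy as np

# Toy denoising autoencoder: learn to map noise-corrupted vectors back
# to their clean versions with one tanh hidden layer (NumPy only).
rng = np.random.default_rng(0)
X = rng.random((200, 16))                    # clean training vectors
Xn = X + rng.normal(0, 0.3, X.shape)         # corrupted inputs

d, h = X.shape[1], 8
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)

def forward(x):
    a = np.tanh(x @ W1 + b1)                 # encoder
    return a, a @ W2 + b2                    # decoder (linear output)

lr = 0.05
loss_before = np.mean((forward(Xn)[1] - X) ** 2)
for _ in range(500):
    a, out = forward(Xn)
    g = 2.0 * (out - X) / X.shape[0]         # gradient of squared error
    gW2, gb2 = a.T @ g, g.sum(0)
    ga = (g @ W2.T) * (1 - a ** 2)           # backprop through tanh
    gW1, gb1 = Xn.T @ ga, ga.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
loss_after = np.mean((forward(Xn)[1] - X) ** 2)
```

    Plain gradient descent suffices here; the reconstruction error against the clean targets drops as the hidden layer learns a denoising mapping.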

  15. 4D cone-beam CT imaging for guidance in radiation therapy: setup verification by use of implanted fiducial markers

    NASA Astrophysics Data System (ADS)

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C. C. M.; Bel, Arjan; Alderliesten, Tanja

    2016-03-01

    The use of 4D cone-beam computed tomography (CBCT) and fiducial markers for guidance during radiation therapy of mobile tumors is challenging due to the trade-off between image quality, imaging dose, and scanning time. We aimed to investigate the visibility of markers and the feasibility of marker-based 4D registration and manual respiration-induced marker motion quantification for different CBCT acquisition settings. A dynamic thorax phantom and a patient with implanted gold markers were included. For both the phantom and the patient, the peak-to-peak amplitude of marker motion in the cranial-caudal direction ranged from 5.3 to 14.0 mm, which did not affect the marker visibility or the associated feasibility of marker-based registration. While using a medium field of view (FOV) and the same total imaging dose as is applied for 3D CBCT scanning in our clinic, it was feasible to attain improved marker visibility by reducing the imaging dose per projection and increasing the number of projection images. For a small FOV with a shorter rotation arc but similar total imaging dose, streak artifacts were reduced owing to the smaller sampling angle. Additionally, the use of a small FOV allowed reducing the total imaging dose and scanning time (~2.5 min) without loss of marker visibility. In conclusion, by using 4D CBCT with identical or lower imaging dose and a reduced gantry speed, it is feasible to attain sufficient marker visibility for marker-based 4D setup verification. Moreover, regardless of the settings, manual marker motion quantification can achieve high accuracy, with errors <1.2 mm.

  16. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
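
    The edge definition used above (zero-crossings of the second derivative in the gradient direction) reduces in 1-D to a simple recipe: smooth with a Gaussian, differentiate twice, and report sign changes. The sketch below is a toy 1-D analogue of that idea, not the authors' 4-D operator:

```python
import numpy as np

def zero_crossing_edges(signal, sigma=2.0):
    """Edges as zero-crossings of the 2nd derivative of a Gaussian-smoothed
    1-D signal (a toy 1-D analogue of the multiscale operator)."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    smoothed = np.convolve(signal, g / g.sum(), mode="same")
    d2 = np.diff(smoothed, n=2)                    # discrete 2nd derivative
    s = np.sign(d2)
    return np.where(s[:-1] * s[1:] < 0)[0] + 1     # indices of sign changes

step = np.concatenate([np.zeros(50), np.ones(50)])  # ideal edge at index 50
edges = zero_crossing_edges(step)
```

    On an ideal step the detector reports a single zero-crossing at the edge location; the smoothing scale `sigma` plays the role of one slice through the scale-space fingerprint.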

  17. Quantifying the image quality and dose reduction of respiratory triggered 4D cone-beam computed tomography with patient-measured breathing

    NASA Astrophysics Data System (ADS)

    Cooper, Benjamin J.; O'Brien, Ricky T.; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J.

    2015-12-01

    Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient’s respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations. Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique. A set containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 pairs of RT 4D CBCT and conventional 4D CBCT image sets from realistic simulations of a 4D CBCT system using a Rando phantom and the digital phantom, XCAT. Each of these image sets was compared to a ground truth dataset, from which a mean absolute pixel difference (MAPD) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and used as a surrogate for imaging dose. Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT, the average image quality was reduced by 7.6% (Rando study) and 11.1% (XCAT study). However, the average imaging dose reduction was 53%, since fewer projections were needed (617 on average) than for conventional 4D CBCT (1320 projections). These simulation studies, using a wide range of patient-measured breathing traces, demonstrate that the RT 4D CBCT method can potentially offer an average imaging dose saving of 53% compared to conventional 4D CBCT, with a minimal impact on image quality.
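
    The two quantities compared in this study are straightforward to compute; the sketch below (illustrative NumPy with made-up image data) shows the mean absolute pixel difference metric and the projection-count dose saving implied by the reported numbers:

```python
import numpy as np

def mapd(image, ground_truth):
    """Mean absolute pixel difference against a ground-truth image."""
    return float(np.mean(np.abs(image - ground_truth)))

# Projection counts reported above; fewer projections ~ lower imaging dose.
conventional_projs, triggered_projs = 1320, 617
dose_saving_pct = 100.0 * (conventional_projs - triggered_projs) / conventional_projs

# Toy images: a noisy reconstruction versus its ground truth.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
noisy = truth + rng.normal(0, 0.05, truth.shape)
```

    With the study's projection counts, `dose_saving_pct` comes out at roughly 53%, matching the reported average dose reduction.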

  18. SU-E-J-183: Quantifying the Image Quality and Dose Reduction of Respiratory Triggered 4D Cone-Beam Computed Tomography with Patient-Measured Breathing

    SciTech Connect

    Cooper, B; OBrien, R; Kipritidis, J; Keall, P

    2014-06-01

    Purpose: Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient's respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations instead of the synthetic sinusoidal signals used in previous work. Methods: Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique from a database of oversampled Rando phantom CBCT projections. A database containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 RT 4D CBCT and 111 conventional 4D CBCT image datasets from realistic simulations of an RT 4D CBCT system. Each of these image datasets was compared to a ground truth dataset, from which a root mean square error (RMSE) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and used as a surrogate for imaging dose. Results: Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT, the average image quality was reduced by 7.6%. However, the average imaging dose reduction was 53%, since fewer projections were needed (617 on average) than for conventional 4D CBCT (1320 projections). Conclusion: The simulation studies, using a wide range of patient breathing traces, have demonstrated that the RT 4D CBCT method can potentially offer a substantial average imaging dose saving of 53% compared to conventional 4D CBCT, with a minimal impact on image quality. A patent application (PCT/US2012/048693) has been filed which is related to this work.

  19. Domain adaptation based on deep denoising auto-encoders for classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Riz, Emanuele; Demir, Begüm; Bruzzone, Lorenzo

    2016-10-01

    This paper investigates the effectiveness of deep learning (DL) for domain adaptation (DA) problems in the classification of remote sensing images to generate land-cover maps. To this end, we introduce two different DL architectures: 1) the single-stage domain adaptation (SS-DA) architecture; and 2) the hierarchical domain adaptation (H-DA) architecture. Both architectures require that a reliable training set is available only for one of the images (i.e., the source domain) from a previous analysis, whereas none is available for the other image to be classified (i.e., the target domain). To classify the target domain image, the proposed architectures aim to learn a shared feature representation that is invariant across the source and target domains in a completely unsupervised fashion. Both architectures are defined on the basis of stacked denoising auto-encoders (SDAEs), owing to their high capability to learn high-level feature representations. The SS-DA architecture leads to a common feature space by: 1) initially unifying the samples in the source and target domains; and 2) then feeding them simultaneously into the SDAE. To further increase the robustness of the shared representations, the H-DA employs: 1) two SDAEs for independently learning the high-level representations of the source and target domains; and 2) a consensus SDAE to learn the domain-invariant high-level features. After obtaining the domain-invariant features through the proposed architectures, the classifier is trained on the domain-invariant labeled samples of the source domain, and the domain-invariant samples of the target domain are then classified to generate the related classification map. Experimental results obtained for the classification of very high resolution images confirm the effectiveness of the proposed DL architectures.

  20. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A fit of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized, color coded, dynamically over time. The dynamic visualizations computed using the curve-fitting method for the estimation of the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and supports better understanding during the visual evaluation of cerebral vascular diseases.
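
    The bolus-arrival-time estimation can be illustrated with a toy version of the curve-fitting idea: slide a reference curve over a voxel's temporal intensity curve and keep the delay giving the best fit. The paper fits patient-individual reference curves; all names and data below are invented for illustration:

```python
import numpy as np

def bolus_arrival_shift(curve, reference, shifts):
    """Estimate a voxel's bolus arrival delay: slide the reference curve
    over the voxel's temporal intensity curve and keep the shift with
    the smallest squared error."""
    t = np.arange(reference.size, dtype=float)
    best, best_err = shifts[0], np.inf
    for s in shifts:
        shifted = np.interp(t - s, t, reference)
        err = np.sum((curve - shifted) ** 2)
        if err < best_err:
            best, best_err = s, err
    return int(best)

t = np.arange(60, dtype=float)
reference = np.exp(-0.5 * ((t - 20.0) / 4.0) ** 2)  # reference bolus curve
voxel = np.interp(t - 7, t, reference)              # same bolus, 7 frames later
shift = bolus_arrival_shift(voxel, reference, range(15))
```

    The recovered shift is the voxel's bolus arrival time relative to the reference; mapping such shifts onto the 3D surface model gives the color-coded dynamic visualization.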

  1. An adaptive non-local means filter for denoising live-cell images and improving particle detection.

    PubMed

    Yang, Lei; Parton, Richard; Ball, Graeme; Qiu, Zhen; Greenaway, Alan H; Davis, Ilan; Lu, Weiping

    2010-12-01

    Fluorescence imaging of dynamical processes in live cells often results in a low signal-to-noise ratio. We present a novel feature-preserving non-local means approach to denoise such images, improving feature recovery and particle detection. The commonly used non-local means filter is not optimal for noisy biological images containing small features of interest because image noise prevents accurate determination of the correct coefficients for averaging, leading to over-smoothing and other artifacts. Our adaptive method addresses this problem by constructing a particle feature probability image, which is based on Haar-like feature extraction. The particle probability image is then used to improve the estimation of the correct coefficients for averaging. We show that this filter achieves a higher peak signal-to-noise ratio in denoised images and a greater capability of identifying weak particles when applied to synthetic data. We have applied this approach to live-cell images, resulting in enhanced detection of end-binding-protein 1 foci on dynamically extending microtubules in photo-sensitive Drosophila tissues. We show that our feature-preserving non-local means filter can reduce the threshold of imaging conditions required to obtain meaningful data.
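
    For context, the baseline non-local means filter that the adaptive method builds on can be sketched in a few lines of NumPy (plain NLM with a Gaussian patch-similarity kernel; the paper's Haar-feature probability weighting is not included, and all parameter values are illustrative):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Plain non-local means: each pixel becomes a weighted average of
    search-window pixels, weighted by patch similarity."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            w_sum = v_sum = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    q = pad[ci + di - p:ci + di + p + 1,
                            cj + dj - p:cj + dj + p + 1]
                    w = np.exp(-np.mean((ref - q) ** 2) / h ** 2)
                    w_sum += w
                    v_sum += w * pad[ci + di, cj + dj]
            out[i, j] = v_sum / w_sum
    return out

rng = np.random.default_rng(0)
clean = np.zeros((24, 24)); clean[8:16, 8:16] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = nlm_denoise(noisy)
```

    The filtering parameter `h` plays the role of the averaging coefficients the paper refines: too small and noise survives, too large and small features are over-smoothed.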

  2. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    SciTech Connect

    Kida, S; Bal, M; Kabus, S; Loo, B; Keall, P; Yamamoto, T

    2014-06-15

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly-functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4DCT ventilation imaging have been focused on comparison with other imaging modalities including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics have been evaluated and tested by a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation imagebased plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1±9.15 (Gy), which was comparable to 25.2±8.60 (Gy) of the SPECT plans (p = 0.89). For other critical organs and PTV, nonsignificant differences were found as well. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation image

  3. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    NASA Astrophysics Data System (ADS)

    Pierce, Greg; Wang, Kevin; Battista, Jerry; Lee, Ting-Yim

    2012-06-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans can be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
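
    The matching criterion is standard normalized cross correlation; a minimal sketch of phase matching with NCC, with synthetic images standing in for the respiratory-sorted CT slices, looks like this:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized images."""
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_phase_match(slice_img, candidates):
    """Index of the candidate slice most similar to slice_img under NCC."""
    scores = [ncc(slice_img, c) for c in candidates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
base = rng.random((32, 32))
# Five synthetic "respiratory phases": shifted copies of one texture.
candidates = [np.roll(base, 2 * k, axis=0) for k in range(5)]
reference = candidates[3] + rng.normal(0, 0.05, base.shape)  # noisy repeat of phase 3
idx, scores = best_phase_match(reference, candidates)
```

    Each NCC match carries a residual displacement error, which is why the paper finds the uncertainty growing in quadrature with the number of matches chained together.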

   4. Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Xie, Yuan; Gu, Shuhang; Liu, Yan; Zuo, Wangmeng; Zhang, Wensheng; Zhang, Lei

    2016-10-01

    Low rank matrix approximation (LRMA), which aims to recover the underlying low rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely Weighted Schatten p-Norm Minimization (WSNM), which generalizes NNM to Schatten p-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under a certain weights permutation, WSNM can be equivalently transformed into independent non-convex l_p-norm subproblems, whose global optima can be efficiently solved by the generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM can more effectively remove noise and model complex and dynamic scenes compared with state-of-the-art methods.
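
    For the special case p = 1 with fixed weights, one WSNM proximal step reduces to weighted singular-value thresholding, which is easy to sketch. This is an illustrative special case on synthetic data, not the paper's full non-convex solver:

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: shrink each singular value
    of Y by its own weight (the p = 1 instance of weighted Schatten-p)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - weights, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.random((40, 3)) @ rng.random((3, 40))   # rank-3 ground truth
Y = L + rng.normal(0, 0.1, L.shape)             # noisy observation
# Small weights on leading singular values (keep signal), large weights
# on the tail (kill noise) -- the "importance of rank components" idea.
w = np.concatenate([np.full(3, 0.1), np.full(37, 5.0)])
X = weighted_svt(Y, w)
```

    Unequal weights are exactly what NNM lacks: with a flat threshold, suppressing the noise tail would also over-shrink the strong rank components.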

  5. PDE-based nonlinear diffusion techniques for denoising scientific and industrial images: an empirical study

    NASA Astrophysics Data System (ADS)

    Weeratunga, Sisira K.; Kamath, Chandrika

    2002-05-01

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, we focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. We complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. We explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. We also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. Our empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competing techniques.
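
    The Perona-Malik scheme discussed above is compact enough to sketch directly. Below is an explicit-time-step version with the exponential diffusivity, using periodic borders for brevity; the parameter values are illustrative, not from the paper:

```python
import numpy as np

def perona_malik(img, iters=20, kappa=0.15, dt=0.2):
    """Explicit-scheme Perona-Malik diffusion with the exponential
    diffusivity g(d) = exp(-(d/kappa)^2); periodic borders for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(iters):
        dn = np.roll(u, 1, axis=0) - u    # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0   # vertical step edge
noisy = clean + rng.normal(0, 0.1, clean.shape)
smoothed = perona_malik(noisy)
```

    The diffusivity collapses to ~0 across the unit step, so the edge survives while small noise gradients are diffused away; the choice of `kappa` relative to the noise level is exactly the kind of parameter sensitivity the paper studies empirically.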

  6. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods. PMID:27796013

  7. SU-D-17A-04: The Impact of Audiovisual Biofeedback On Image Quality During 4D Functional and Anatomic Imaging: Results of a Prospective Clinical Trial

    SciTech Connect

    Keall, P; Pollock, S; Yang, J; Diehn, M; Berger, J; Graves, E; Loo, B; Yamamoto, T

    2014-06-01

    Purpose: The ability of audiovisual (AV) biofeedback to improve breathing regularity has not previously been investigated for functional imaging studies. The purpose of this study was to investigate the impact of AV biofeedback on 4D-PET and 4D-CT image quality in a prospective clinical trial. We hypothesized that motion blurring in 4D-PET images and the number of artifacts in 4D-CT images are reduced using AV biofeedback. Methods: AV biofeedback is a real-time, interactive and personalized system designed to help a patient self-regulate his/her breathing using a patient-specific representative waveform and musical guides. In an IRB-approved prospective clinical trial, 4D-PET and 4D-CT images of 10 lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images in 6 respiratory bins were analyzed for motion blurring by: (1) the decrease of GTVPET and (2) the increase of SUVmax in 4D-PET compared to 3D-PET. The 4D-CT images were analyzed for artifacts by: (1) comparing normalized cross correlation-based scores (NCCS); and (2) quantifying a visual assessment score (VAS). A two-tailed paired t-test was used to test the hypotheses. Results: The impact of AV biofeedback on 4D-PET and 4D-CT images varied widely between patients, suggesting inconsistent patient comprehension and capability. Overall, the 4D-PET decrease of GTVPET was 2.0±3.0 cm³ with AV and 2.3±3.9 cm³ with FB (p=0.61). The 4D-PET increase of SUVmax was 1.6±1.0 with AV and 1.1±0.8 with FB (p=0.002). The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 with FB (p=0.32). The 4D-CT VAS was 0.0±2.7 (p=ns). Conclusion: This 10-patient study demonstrated a statistically significant reduction of motion blurring with AV over FB for one of the two functional 4D-PET imaging metrics. No difference between AV and FB was found for the two anatomic 4D-CT imaging metrics. Future studies will focus on optimizing the human-computer interface and including patient training sessions for improved

  8. SU-E-T-428: Feasibility Study of 4D Image Reconstruction by Organ Motion Vector Extension Based On Portal Images

    SciTech Connect

    Yoon, J; Jung, J; Yeo, I; Kim, J; Yi, B

    2015-06-15

    Purpose: To develop and test a method for generating new 4D CT images for the treatment day from the old 4D CT and the portal images of the day, for cases where the motion extent exceeds that represented by the planning CTs. Methods: A motion vector of a moving tumor in a patient may be extended to reconstruct the tumor position when the motion extent exceeds that represented by the planning CTs. To test this: 1. a phantom consisting of a polystyrene cylinder (tumor) embedded in cork (lung) was placed on a moving platform with 4 sec/cycle and amplitudes of 1 cm and 2 cm, and was 4D-scanned. 2. A 6 MV photon beam was irradiated on the moving phantoms and cine EPID images were obtained. 3. A motion vector of the tumor was acquired from 4D CT images of the phantom with 1 cm amplitude. 4. From cine EPID images of the phantom with the 2 cm amplitude, various motion extents (0.3 cm, 0.5 cm, etc.) were acquired and programmed into the motion vector, producing CT images at each position. 5. The reconstructed CT images were then compared with pre-acquired "reference" 4D CT images at each position (i.e. phase). Results: The CT image was reconstructed and compared with the reference image, showing a slight mismatch in the translation direction, limited by the voxel size (slice thickness) of the CT image. Owing to the rigid nature of the phantom studied, modeling the displacement of the object's center was sufficient. When deformable tumors are to be modeled, a more complex scheme that utilizes cine EPID and 4D CT images is necessary. Conclusion: The new idea of CT image reconstruction was demonstrated. Deformable tumor movements need to be considered in the future.

  9. [Spatio-temporal image correlation (STIC) and tomographic ultrasound imaging (TUI)--combined clinical implementation in 3D/4D fetal echocardiography].

    PubMed

    Markov, D

    2010-01-01

    Two new forms of volume-data image processing by three-dimensional (3D) and four-dimensional (4D) ultrasound, named Spatio-Temporal Image Correlation (STIC) and Tomographic Ultrasound Imaging (TUI), are presented. The advantages and disadvantages of the combined clinical implementation of both modalities in fetal echocardiography are discussed.

  10. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  11. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
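
    The sPatlak analysis that this framework builds on reduces, voxel-wise, to a linear least-squares fit of the tissue curve against the integrated and instantaneous plasma input. A minimal sketch on synthetic curves (not the authors' nested 4D EM implementation; all values here are toy data):

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Fit the standard Patlak model C(t) = Ki * int_0^t Cp + V * Cp(t)
    by linear least squares; returns the influx rate Ki and intercept V."""
    # cumulative trapezoidal integral of the plasma input function
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    A = np.column_stack([icp, cp])            # design matrix
    ki, v = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ki, v

t = np.linspace(0.0, 60.0, 61)                # minutes
cp = np.exp(-0.1 * t) + 0.05                  # toy plasma input function
icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = 0.02 * icp + 0.3 * cp                    # synthetic tissue curve: Ki=0.02, V=0.3
ki, v = patlak_fit(t, cp, ct)
print(round(ki, 4), round(v, 4))              # recovers 0.02 and 0.3
```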

  12. Super-Resolution Reconstruction of Diffusion-Weighted Images using 4D Low-Rank and Total Variation

    PubMed Central

    Shi, Feng; Cheng, Jian; Wang, Li; Yap, Pew-Thian; Shen, Dinggang

    2016-01-01

    Diffusion-weighted imaging (DWI) provides invaluable information about white matter microstructure and is widely applied in neurological applications. However, DWI is largely limited by its relatively low spatial resolution. In this paper, we propose an image post-processing method, referred to as super-resolution reconstruction, to estimate a high spatial resolution DWI from the input low-resolution DWI, e.g., at a factor of 2. Instead of requiring specially designed DWI acquisition of multiple shifted or orthogonal scans, our method needs only a single DWI scan. To do that, we propose to model both the blurring and downsampling effects in the image degradation process, where the low-resolution image is observed from the latent high-resolution image, and recover the latent high-resolution image with the help of two regularizations. The first regularization is 4-dimensional (4D) low-rank, proposed to gather self-similarity information from both the spatial domain and the diffusion domain of 4D DWI. The second regularization is total variation, proposed to suppress noise and preserve local structures such as edges in the image recovery process. Extensive experiments were performed on 20 subjects, and results show that the proposed method is able to recover the fine details of white matter structures and outperforms other approaches such as interpolation methods, non-local means based upsampling, and total variation based upsampling. PMID:27845833
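
    Low-rank regularization of the kind used here is typically enforced through singular value thresholding, the proximal operator of the nuclear norm. A toy sketch on a synthetic low-rank matrix standing in for the unfolded 4D DWI data (threshold and sizes are illustrative, and the TV term is omitted):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau and zero
    the rest -- the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 "data"
noisy = L + 0.1 * rng.standard_normal(L.shape)
den = svt(noisy, tau=1.5)                 # tau chosen just above the noise level
err_before = np.linalg.norm(noisy - L)
err_after = np.linalg.norm(den - L)       # thresholding moves us closer to L
```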

  13. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    SciTech Connect

    Kim, D; Kang, S; Kim, T; Suh, T; Kim, S

    2014-06-01

    Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired, assuming a cone-beam computed tomography system on a linear accelerator, by Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, together with the total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than that based upon the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for verifying intra-fraction motion during radiation therapy. In addition, it may prove advantageous for real-time imaging in adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP).
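
    The SART-plus-TV alternation described above can be sketched in 1D (illustrative step sizes and a toy system matrix; the real method operates on cone-beam projections of a 4D volume):

```python
import numpy as np

def sart_tv(A, b, n_iter=100, lam=1.0, tv_step=0.001, eps=1e-8):
    """Alternate a SART update (row/column-normalized backprojection of the
    residual) with a gradient step on smoothed total variation."""
    row = A.sum(axis=1)                       # per-ray normalization
    col = A.sum(axis=0)                       # per-pixel normalization
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * (A.T @ ((b - A @ x) / row)) / col      # SART step
        g = np.gradient(x)
        tv_grad = -np.gradient(g / np.sqrt(g**2 + eps))   # d/dx of smoothed TV
        x -= tv_step * tv_grad                            # TV descent step
        x = np.clip(x, 0.0, None)                         # nonnegativity
    return x

rng = np.random.default_rng(3)
A = rng.random((80, 40))                      # nonnegative system matrix
x_true = np.repeat([1.0, 0.2, 0.7, 0.4], 10)  # piecewise-constant "object"
b = A @ x_true                                # noiseless projections
x = sart_tv(A, b)
```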

  14. Evaluation of non-local means based denoising filters for diffusion kurtosis imaging using a new phantom.

    PubMed

    Zhou, Min-Xiong; Yan, Xu; Xie, Hai-Bin; Zheng, Hui; Xu, Dongrong; Yang, Guang

    2015-01-01

    Image denoising has a profound impact on the precision of estimated parameters in diffusion kurtosis imaging (DKI). This work first proposes an approach to constructing a DKI phantom that can be used to evaluate the performance of denoising algorithms with regard to their ability to improve the reliability of DKI parameter estimation. The phantom was constructed from a real DKI dataset of a human brain, and the pipeline used to construct the phantom consists of diffusion-weighted (DW) image filtering, diffusion and kurtosis tensor regularization, and DW image reconstruction. The phantom preserves the image structure while minimizing image noise, and thus can be used as ground truth in the evaluation. Second, we used the phantom to evaluate three representative algorithms of non-local means (NLM). Results showed that one scheme of vector-based NLM, which uses DWI data with redundant information acquired at different b-values, produced the most reliable estimation of DKI parameters in terms of mean square error (MSE), bias and standard deviation (Std). The results of the comparison based on the phantom were consistent with those based on real datasets.
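
    Non-local means, the family of filters evaluated here, replaces each sample by a weighted average of samples whose surrounding patches look similar. A deliberately small 1D sketch (real DKI denoising runs on 3D/4D DW volumes; patch size, search window and bandwidth h are illustrative):

```python
import numpy as np

def nlm_1d(y, patch=3, search=10, h=0.5):
    """Minimal non-local means for a 1D signal: weights decay with the
    sum-of-squares distance between patch neighborhoods."""
    n = len(y)
    pad = np.pad(y, patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        pi = pad[i:i + 2 * patch + 1]                   # patch around sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w = np.array([np.exp(-np.sum((pad[j:j + 2 * patch + 1] - pi) ** 2) / h**2)
                      for j in range(lo, hi)])
        out[i] = np.sum(w * y[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 40)        # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
den = nlm_1d(noisy)                           # averages within, not across, edges
```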

  15. SU-E-J-157: Improving the Quality of T2-Weighted 4D Magnetic Resonance Imaging for Clinical Evaluation

    SciTech Connect

    Du, D; Mutic, S; Hu, Y; Caruthers, S; Glide-Hurst, C; Low, D

    2014-06-01

    Purpose: To develop an imaging technique that enables us to acquire T2-weighted 4D Magnetic Resonance Imaging (4DMRI) with sufficient spatial coverage, temporal resolution and spatial resolution for clinical evaluation. Methods: T2-weighted 4DMRI images were acquired from a healthy volunteer using a respiratory amplitude triggered T2-weighted Turbo Spin Echo sequence. 10 respiratory states were used to equally sample the respiratory range based on amplitude (0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e and 20%e). To avoid frequent scanning halts, a methodology was devised that split the 10 respiratory states into two packages in an interleaved manner; the packages were acquired separately. Sixty 3mm sagittal slices at 1.5mm in-plane spatial resolution were acquired to offer good spatial coverage and reasonable spatial resolution. The in-plane field of view was 375mm × 260mm with a nominal scan time of 3 minutes 42 seconds. Acquired 2D images at the same respiratory state were combined to form the 3D image set corresponding to that respiratory state and reconstructed in the coronal view to evaluate whether all slices were at the same respiratory state. The 3D image sets of the 10 respiratory states represented a complete 4DMRI image set. Results: The T2-weighted 4DMRI image set was acquired in 10 minutes, which is within the clinically acceptable range. Qualitatively, the acquired MRI images had good image quality for delineation purposes. There were no abrupt position changes in the reconstructed coronal images, which confirmed that all sagittal slices were in the same respiratory state. Conclusion: We demonstrated that it is feasible to acquire a T2-weighted 4DMRI image set within a practical amount of time (10 minutes) with good temporal resolution (10 respiratory states), spatial resolution (1.5mm × 1.5mm × 3.0mm) and spatial coverage (60 slices) for future clinical evaluation.
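
    The amplitude binning above (0%, 20%i, ..., 100%, ..., 20%e) amounts to a small lookup. A sketch assuming a breathing amplitude normalized to [0, 1] and the sign of its change to separate inhale from exhale (the function name is illustrative, not from the paper):

```python
def respiratory_state(amplitude, velocity):
    """Map a normalized breathing amplitude and its direction of change to
    one of the 10 amplitude states: 0%, 20%i ... 100% ... 20%e."""
    level = int(round(amplitude * 5)) * 20      # nearest of 0, 20, ..., 100
    if level in (0, 100):
        return f"{level}%"                      # extremes need no i/e label
    return f"{level}%{'i' if velocity >= 0 else 'e'}"

print(respiratory_state(0.41, +1))   # '40%i'
print(respiratory_state(0.62, -1))   # '60%e'
print(respiratory_state(1.00, -1))   # '100%'
```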

  16. Development and Application of a Suite of 4-D Virtual Breast Phantoms for Optimization and Evaluation of Breast Imaging Systems

    PubMed Central

    Lin, Yuan; Ikejimba, Lynda C.; Ghate, Sujata V.; Dobbins, James T.; Segars, William P.

    2014-01-01

    Mammography is currently the most widely utilized tool for detection and diagnosis of breast cancer. However, in women with dense breast tissue, tissue overlap may obscure lesions. Digital breast tomosynthesis can reduce tissue overlap. Furthermore, imaging with contrast enhancement can provide additional functional information about lesions, such as morphology and kinetics, which in turn may improve lesion identification and characterization. The performance of these imaging techniques is strongly dependent on the structural composition of the breast, which varies significantly among patients. Therefore, imaging system and imaging technique optimization should take patient variability into consideration. Furthermore, optimization of imaging techniques that employ contrast agents should include the temporally varying breast composition with respect to the contrast agent uptake kinetics. To these ends, we have developed a suite of 4-D virtual breast phantoms, which are incorporated with the kinetics of contrast agent propagation in different tissues and can realistically model normal breast parenchyma as well as benign and malignant lesions. This development presents a new approach in performing simulation studies using truly anthropomorphic models. To demonstrate the utility of the proposed 4-D phantoms, we present a simplified example study to compare the performance of 14 imaging paradigms qualitatively and quantitatively. PMID:24691118

  17. Toward time resolved 4D cardiac CT imaging with patient dose reduction: estimating the global heart motion

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Segars, W. Paul; Fung, George S. K.; Tsui, Benjamin M. W.

    2006-03-01

    Coronary artery imaging with multi-slice helical computed tomography is a promising noninvasive imaging technique. The current major issues include insufficient temporal resolution and large patient dose. We propose an image reconstruction method which provides a solution to both problems. The method uses an iterative approach repeating the following four steps until the difference between the two projection data sets falls below a certain criterion in step 4: 1) estimating or updating the cardiac motion vectors, 2) reconstructing the time-resolved 4D dynamic volume images using the motion vectors, 3) calculating the projection data from the current 4D images, 4) comparing them with the measured ones. In this study, we obtain the first estimate of the motion vector. We use the 4D NCAT phantom, a realistic computer model for the human anatomy and cardiac motions, to generate the dynamic fan-beam projection data sets as well as to provide a known truth for the motion. Then, halfscan reconstruction with the sliding time-window technique is used to generate cine images: f(t, r). Here, we use one heart beat for each position r so that the time information is retained. Next, the magnitude of the first derivative of f(t, r) with respect to time, i.e., |df/dt|, is calculated and summed over a region-of-interest (ROI), which is called the mean-absolute difference (MAD). The initial estimate of the vector field is obtained using the MAD for each ROI. Results of the preliminary study are presented.
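
    The MAD motion surrogate, |df/dt| summed over an ROI, can be sketched as follows (toy sinusoidal frames, not the NCAT data; the time derivative is approximated by finite differences):

```python
import numpy as np

def mad_over_time(frames, roi):
    """Mean-absolute difference motion surrogate: |df/dt| summed over a
    region of interest, one value per pair of adjacent time points."""
    sl = (slice(None),) + roi                       # keep time axis, crop ROI
    d = np.abs(np.diff(frames[sl], axis=0))         # |df/dt| via differences
    return d.sum(axis=tuple(range(1, d.ndim)))      # sum over the ROI

t = np.linspace(0.0, 1.0, 21)                       # one "cardiac cycle"
frames = np.sin(2 * np.pi * t)[:, None, None] * np.ones((21, 8, 8))
roi = (slice(2, 6), slice(2, 6))                    # 4x4 region of interest
mad = mad_over_time(frames, roi)                    # small near motion minima
```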

  18. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves the binning of projection data to each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given the measurements of breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no requirement of binning. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. The convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy, since it requires no binning and the use of the 5D model reduces the number of unknowns.
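
    The 5D model's parametrization, a reference position displaced by an amplitude-weighted vector field plus a flow-weighted (hysteresis) vector field, can be sketched as follows (all arrays are toy values, not fitted fields):

```python
import numpy as np

def motion_5d(ref_points, alpha, beta, v, f):
    """5D respiratory-motion model: each reference point moves by breathing
    amplitude v times one vector field (alpha) plus airflow f = dv/dt times a
    second field (beta); only the scalars v(t) and f(t) vary in time."""
    return ref_points + v * alpha + f * beta

pts = np.zeros((3, 2))                                 # reference positions (2D toy)
alpha = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 0.5]])  # amplitude field
beta = np.array([[1.0, 0.0], [0.5, 0.0], [2.0, 0.0]])   # flow (hysteresis) field
moved = motion_5d(pts, alpha, beta, v=0.5, f=0.1)
print(moved)   # [[0.1 0.5] [0.05 1.0] [0.2 0.25]]
```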

  19. 4-D photoacoustic tomography.

    PubMed

    Xiang, Liangzhong; Wang, Bo; Ji, Lijun; Jiang, Huabei

    2013-01-01

    Photoacoustic tomography (PAT) offers three-dimensional (3D) structural and functional imaging of living biological tissue with label-free, optical absorption contrast. These attributes lend PAT imaging to a wide variety of applications in clinical medicine and preclinical research. Despite advances in live animal imaging with PAT, there is still a need for 3D imaging at centimeter depths in real-time. We report the development of four dimensional (4D) PAT, which integrates time resolutions with 3D spatial resolution, obtained using spherical arrays of ultrasonic detectors. The 4D PAT technique generates motion pictures of imaged tissue, enabling real time tracking of dynamic physiological and pathological processes at hundred micrometer-millisecond resolutions. The 4D PAT technique is used here to image needle-based drug delivery and pharmacokinetics. We also use this technique to monitor 1) fast hemodynamic changes during inter-ictal epileptic seizures and 2) temperature variations during tumor thermal therapy.

  1. The effect of different adaptation strengths on image quality and radiation dose using Siemens Care Dose 4D.

    PubMed

    Söderberg, Marcus; Gunnarsson, Mikael

    2010-01-01

    The purpose of this study was to evaluate the effect of different choices of adaptation strength on image quality and radiation exposure to the patient with the Siemens automatic exposure control system CARE Dose 4D. An anthropomorphic chest phantom was used to simulate the patient, and computed tomography scans were performed with a Siemens SOMATOM Sensation 16 and 64. Owing to the adaptation strengths, a considerable reduction (26.6-51.5 % and 27.5-49.5 % for the Sensation 16 and Sensation 64, respectively) in the radiation dose was found when compared with using a fixed tube current. There was a substantial difference in the image quality (image noise) between the adaptation strengths. Independent of the selected adaptation strength, the level of image noise throughout the chest phantom increased when CARE Dose 4D was used (p < 0.0001). We conclude that the adaptation strengths can be used to obtain user-specified modifications to image quality or radiation exposure to the patient.

  2. 4D ultrafast ultrasound flow imaging: in vivo quantification of arterial volumetric flow rate in a single heartbeat

    NASA Astrophysics Data System (ADS)

    Correia, Mafalda; Provost, Jean; Tanter, Mickael; Pernot, Mathieu

    2016-12-01

    We present herein 4D ultrafast ultrasound flow imaging, a novel ultrasound-based volumetric imaging technique for the quantitative mapping of blood flow. Complete volumetric blood flow distribution imaging was achieved through 2D tilted plane-wave insonification, 2D multi-angle cross-beam beamforming, and 3D vector Doppler velocity components estimation by least-squares fitting. 4D ultrafast ultrasound flow imaging was performed in large volumetric fields of view at very high volume rate (>4000 volumes s-1) using a 1024-channel 4D ultrafast ultrasound scanner and a 2D matrix-array transducer. The precision of the technique was evaluated in vitro by using 3D velocity vector maps to estimate volumetric flow rates in a vessel phantom. Volumetric flow rate errors of less than 5% were found when volumetric flow rates and peak velocities were less than 360 ml min-1 and 100 cm s-1, respectively. The average volumetric flow rate error increased to 18.3% when volumetric flow rates and peak velocities were up to 490 ml min-1 and 1.3 m s-1, respectively. The in vivo feasibility of the technique was shown in the carotid arteries of two healthy volunteers. The 3D blood flow velocity distribution was assessed during one cardiac cycle in a full volume and it was used to quantify volumetric flow rates (375  ±  57 ml min-1 and 275  ±  43 ml min-1). Finally, the formation of 3D vortices at the carotid artery bifurcation was imaged at high volume rates.
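
    The vector Doppler step, recovering a velocity vector from its projections onto several beam directions by least squares, can be sketched as follows (toy beam geometry and velocity, not the scanner's actual angles):

```python
import numpy as np

def vector_doppler(directions, axial_velocities):
    """Least-squares recovery of a velocity vector from Doppler velocities
    measured along several beam directions: each measurement is the
    projection of the true velocity onto a unit beam direction."""
    D = np.array(directions, dtype=float)              # copy, then normalize rows
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(D, np.asarray(axial_velocities, float), rcond=None)
    return v

true_v = np.array([0.1, -0.2, 0.9])                    # m/s, mostly axial
dirs = np.array([[0, 0, 1], [0.2, 0, 1], [-0.2, 0, 1], [0, 0.2, 1], [0, -0.2, 1]])
meas = (dirs / np.linalg.norm(dirs, axis=1, keepdims=True)) @ true_v
v_est = vector_doppler(dirs, meas)                     # recovers true_v exactly
```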

  4. Diagnostic algorithm: how to make use of new 2D, 3D and 4D ultrasound technologies in breast imaging.

    PubMed

    Weismann, C F; Datz, L

    2007-11-01

    The aim of this publication is to present a time saving diagnostic algorithm consisting of two-dimensional (2D), three-dimensional (3D) and four-dimensional (4D) ultrasound (US) technologies. This algorithm of eight steps combines different imaging modalities and render modes which allow a step by step analysis of 2D, 3D and 4D diagnostic criteria. Advanced breast US systems with broadband high frequency linear transducers, full digital data management and high resolution are the actual basis for two-dimensional breast US studies in order to detect early breast cancer (step 1). The continuous developments of 2D US technologies including contrast resolution imaging (CRI) and speckle reduction imaging (SRI) have a direct influence on the high quality of three-dimensional and four-dimensional presentation of anatomical breast structures and pathological details. The diagnostic options provided by static 3D volume datasets according to US BI-RADS analogue assessment, concerning lesion shape, orientation, margin, echogenic rim sign, lesion echogenicity, acoustic transmission, associated calcifications, 3D criteria of the coronal plane, surrounding tissue composition (step 2) and lesion vascularity (step 6) are discussed. Static 3D datasets offer the combination of long axes distance measurements and volume calculations, which are the basis for an accurate follow-up in BI-RADS II and BI-RADS III lesions (step 3). Real time 4D volume contrast imaging (VCI) is able to demonstrate tissue elasticity (step 5). Glass body rendering is a static 3D tool which presents greyscale and colour information to study the vascularity and the vascular architecture of a lesion (step 6). Tomographic ultrasound imaging (TUI) is used for a slice by slice documentation in different investigation planes (A-,B- or C-plane) (steps 4 and 7). The final step 8 uses the panoramic view technique (XTD-View) to document the localisation within the breast and to make the position of a lesion simply

  5. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    NASA Astrophysics Data System (ADS)

    Fan, W. J.; Lu, Y.

    2006-10-01

    Wavelet denoising is studied to improve the extraction of Fourier information from optical aperture synthesis (OAS) objects. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove the pseudo-Gibbs artifacts that appear in soft-thresholded wavelet images. Extraction of OAS object information based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of object information extracted from interferograms, and that information extraction with translation-invariant wavelet denoising is better than with plain soft-threshold wavelet denoising.
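
    Donoho-style soft thresholding and the translation-invariant (cycle-spinning) variant studied here can be sketched with a one-level Haar transform (a simplification of the paper's wavelet setup; the threshold value is illustrative):

```python
import numpy as np

def haar_soft_denoise(y, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert (Donoho-style soft-threshold denoising)."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)                   # approximation
    d = (y[0::2] - y[1::2]) / np.sqrt(2)                   # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)                       # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def cycle_spin_denoise(y, thresh):
    """Translation-invariant denoising: average the results over all circular
    shifts, which suppresses pseudo-Gibbs artifacts near edges."""
    acc = np.zeros_like(y)
    for s in range(len(y)):
        acc += np.roll(haar_soft_denoise(np.roll(y, s), thresh), -s)
    return acc / len(y)

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0], 32)                 # step edge
noisy = clean + 0.1 * rng.standard_normal(64)
den = cycle_spin_denoise(noisy, thresh=0.25)
```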

  6. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Tench, Christopher; Gottlob, Irene; Proudlock, Frank; Bai, Li

    2015-11-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation.

  8. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    SciTech Connect

    Nyflot, Matthew J.; Lee, Tzu-Cheng; Alessio, Adam M.; Kinahan, Paul E.; Wollenweber, Scott D.; Stearns, Charles W.; Bowen, Stephen R.

    2015-01-15

    Purpose: Respiratory-correlated (4D) PET/CT is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in the ground truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUVmax, SUVmean, SUVpeak, and segmented tumor volume was evaluated as RCmax, RCmean, RCpeak, and RCvol, representing percent difference relative to the static ground truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RCmax, RCpeak, RCmean, and RCvol. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RCmax, RCpeak, RCmean, and RCvol. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by
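
    The recovery coefficients (RCmax, RCmean, RCpeak, RCvol) are percent differences relative to the static ground truth. A sketch with made-up SUVmax values:

```python
def recovery_coefficient(measured, static_truth):
    """Percent difference of a measurement under motion relative to the
    static ground-truth value (the RC metrics used in the study)."""
    return 100.0 * (measured - static_truth) / static_truth

# hypothetical SUVmax: static ground truth 8.0 vs 4D reconstruction 7.3
print(round(recovery_coefficient(7.3, 8.0), 2))   # -8.75
```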

  9. Adaptive Denoising Technique for Robust Analysis of Functional Magnetic Resonance Imaging Data

    DTIC Science & Technology

    2007-11-02

    This work was supported in part by the Center for Advanced Software and Biomedical Engineering Consultations (CASBEC), Cairo University, and IBE Technologies, Egypt.

  10. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels; a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  11. 4D optical coherence tomography of the embryonic heart using gated imaging

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rothenberg, Florence; Roy, Debashish; Nikolski, Vladimir P.; Wilson, David L.; Efimov, Igor R.; Rollins, Andrew M.

    2005-04-01

    Computed tomography (CT), ultrasound, and magnetic resonance imaging have been used to image and diagnose diseases of the human heart. By gating the acquisition of the images to the heart cycle (gated imaging), these modalities enable one to produce 3D images of the heart without significant motion artifact and to more accurately calculate various parameters such as ejection fractions [1-3]. Unfortunately, these imaging modalities give inadequate resolution when investigating embryonic development in animal models. Defects in developmental mechanisms during embryogenesis have long been thought to result in congenital cardiac anomalies. Our understanding of normal mechanisms of heart development, and of how abnormalities can lead to defects, has been hampered by our inability to detect anatomic and physiologic changes in these small (<2 mm) organs. Optical coherence tomography (OCT) has made it possible to visualize internal structures of the living embryonic heart with high resolution in two and three dimensions. OCT offers higher resolution than ultrasound (30 um axial, 90 um lateral) and magnetic resonance microscopy (25 um axial, 31 um lateral) [4, 5], with greater depth penetration than confocal microscopy (200 um). OCT uses back-reflected light from a sample to create an image with axial resolutions ranging from 2-15 um, while penetrating 1-2 mm in depth [6]. In the past, OCT groups have estimated ejection fractions using 2D images in Xenopus laevis [7], created 3D renderings of chick embryo hearts [8], and used a gated reconstruction technique to produce a 2D Doppler OCT image of an in vivo Xenopus laevis heart [9]. In this paper we present a gated imaging system that allowed us to produce a 16-frame 3D movie of a beating chick embryo heart. The heart was excised from a day-two (stage 13) chicken embryo and electrically paced at 1 Hz. We acquired 2D images (B-scans) in 62.5 ms, which provides enough temporal resolution to distinguish end
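    A retrospective gated sort consistent with the numbers quoted in the abstract (1 Hz pacing, 62.5 ms B-scans, 16 frames) might look like the following sketch; the function name and data layout are assumptions.

```python
def sort_gated(bscan_times, period_s=1.0, n_phases=16):
    """Assign each B-scan timestamp to one of n_phases cardiac-phase bins.

    With 1 Hz pacing, a 62.5 ms B-scan spans exactly one of 16 bins
    (1.0 s / 16 = 62.5 ms), so each bin collects the B-scans acquired at
    the same point of successive heartbeats; stacking the bins over slice
    positions yields one 3D volume per phase, i.e. a 16-frame 3D movie.
    """
    bins = [[] for _ in range(n_phases)]
    for i, t in enumerate(bscan_times):
        phase = int((t % period_s) / period_s * n_phases) % n_phases
        bins[phase].append(i)
    return bins
```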

  12. WE-G-BRF-09: Force- and Image-Adaptive Strategies for Robotised Placement of 4D Ultrasound Probes

    SciTech Connect

    Kuhlemann, I; Bruder, R; Ernst, F; Schweikard, A

    2014-06-15

    Purpose: To allow continuous acquisition of high-quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a 6-axis industrial robot (Adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact pressure regulation by a PD controller. The basis for live target tracking is a continuous minimum contact pressure, which ensures good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib and lung shadows are a common source of interference and can disrupt tracking. To ensure continuous tracking, these artefacts must be detected so that the probe can be realigned automatically. Detection is realised by multiple algorithms based on entropy calculations as well as a determination of image quality. Results: Through active contact pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The image processing results clearly demonstrate the feasibility of detecting image artefacts such as rib shadows in real time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact pressure control and automatic detection of image artefacts. This makes it possible to compensate for such interferences by realigning the probe, continuously optimising the ultrasound images. This is a large step towards fully automated transducer positioning and opens the possibility of stable target tracking in
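    The PD-based contact pressure regulation can be illustrated with a toy closed-loop simulation; the controller gains, the 500 N/m skin-stiffness plant and the 5 N force target are invented for illustration and are not values from the abstract.

```python
import math

def pd_contact_control(f_target, f_meas, f_prev, dt, kp=0.0004, kd=1e-6):
    """One PD step: returns the probe displacement (m) along the contact
    normal. The derivative term acts on the measurement, not the error,
    to avoid setpoint kick."""
    err = f_target - f_meas
    derr = (f_meas - f_prev) / dt
    return kp * err - kd * derr

# Toy plant (an assumption, not from the abstract): contact force grows
# ~500 N/m with indentation, while the chest wall moves sinusoidally
# with breathing at 0.25 Hz and 2 mm amplitude.
stiffness, dt = 500.0, 0.01
depth = force = prev = 0.0
for k in range(400):
    chest = 0.002 * math.sin(2.0 * math.pi * 0.25 * k * dt)
    depth += pd_contact_control(5.0, force, prev, dt)
    prev = force
    force = max(0.0, stiffness * (depth - chest))  # probe cannot pull on skin
```

    Despite the moving chest wall, the loop settles near the target force, which is the behaviour behind the reported variance reduction.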

  13. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the
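    The alternating static/moving update at the heart of c-MGIR can be illustrated on a toy 1D problem. The sketch below replaces the projection-domain sub-problems and TV regularization of the actual algorithm with a direct least-squares data term and an l1 penalty on the moving part; `lam`, `decompose` and the 50-iteration budget are illustrative choices, not from the paper.

```python
def soft(x, lam):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return (x - lam) if x > lam else (x + lam) if x < -lam else 0.0

def decompose(phases, lam=0.3, n_iter=50):
    """Split per-phase images into one static image s and sparse per-phase
    moving parts m_p by alternating minimization of
        sum_p ( 0.5*||y_p - s - m_p||^2 + lam*||m_p||_1 )."""
    n, npix = len(phases), len(phases[0])
    m = [[0.0] * npix for _ in range(n)]
    s = [0.0] * npix
    for _ in range(n_iter):
        # static update pools ALL phases (the 'global' step in c-MGIR)
        s = [sum(phases[p][j] - m[p][j] for p in range(n)) / n
             for j in range(npix)]
        # moving update is per phase and sparsity-regularized
        m = [[soft(phases[p][j] - s[j], lam) for j in range(npix)]
             for p in range(n)]
    return s, m
```

    Pixels that never move end up entirely in the static image, which is what lets the static part be estimated from all projections jointly.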

  14. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    PubMed

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the 'well' solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the algorithm

  15. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    PubMed

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy to implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [(11)C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, whereas it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability.
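    Iterative least-squares deconvolution with a regularization penalty can be sketched as an unweighted 1D analogue of the approach described above; the PSF, step size `alpha`, penalty weight `beta`, and iteration count are illustrative assumptions, and the temporal penalty is reduced to a single spatial smoothness term.

```python
def convolve(x, psf):
    """'Same'-size 1D convolution with a symmetric PSF (zero padding)."""
    h, n = len(psf) // 2, len(x)
    return [sum(psf[j] * x[i + j - h]
                for j in range(len(psf)) if 0 <= i + j - h < n)
            for i in range(n)]

def deconvolve(y, psf, beta=0.01, alpha=0.8, n_iter=200):
    """Gradient-descent deconvolution: minimize ||K x - y||^2 + beta ||D x||^2,
    where K blurs with the PSF and D is the first-difference operator."""
    n = len(y)
    x = list(y)                       # initialize with the blurred data
    for _ in range(n_iter):
        r = [a - b for a, b in zip(convolve(x, psf), y)]   # residual K x - y
        g = convolve(r, psf)          # K^T r (K symmetric for symmetric PSF)
        for i in range(n):
            # (D^T D x)_i, the gradient of the smoothness penalty
            lap = 2 * x[i] - x[max(i - 1, 0)] - x[min(i + 1, n - 1)]
            x[i] -= alpha * (g[i] + beta * lap)
    return x
```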

  16. 4D motion modeling of the coronary arteries from CT images for robotic assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Dong Ping; Edwards, Eddie; Mei, Lin; Rueckert, Daniel

    2009-02-01

    In this paper, we present a novel approach for coronary artery motion modeling from cardiac computed tomography (CT) images. The aim of this work is to develop a 4D motion model of the coronaries for image guidance in robotic-assisted totally endoscopic coronary artery bypass (TECAB) surgery. To utilize the pre-operative cardiac images to guide the minimally invasive surgery, it is essential to have a 4D cardiac motion model that can be registered with the stereo endoscopic images acquired intraoperatively using the da Vinci robotic system. In particular, we investigate the extraction of the coronary arteries and the modelling of their motion from a dynamic sequence of cardiac CT. We use a multi-scale vesselness filter to enhance vessels in the cardiac CT images. The centerlines of the arteries are extracted using a ridge traversal algorithm. Using this method the coronaries can be extracted in near real-time, as only local information is used in vessel tracking. To compute the deformation of the coronaries due to cardiac motion, the motion is extracted from a dynamic sequence of cardiac CT. Each time frame in this sequence is registered to the end-diastole time frame using a non-rigid registration algorithm based on free-form deformations. Once the images have been registered, a dynamic motion model of the coronaries can be obtained by applying the computed free-form deformations to the extracted coronary arteries. To validate the accuracy of the motion model, we compare the actual position of the coronaries in each time frame with the predicted position estimated from the non-rigid registration. We expect that this motion model of the coronaries can facilitate the planning of TECAB surgery and, through registration with real-time endoscopic video images, reduce the conversion rate from TECAB to conventional procedures.
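    A minimal version of the ridge-traversal idea, which is what keeps the centerline extraction local and near real-time, is greedy tracking of the brightest above-threshold neighbour on a vesselness map. The function and threshold below are illustrative, not the authors' exact algorithm.

```python
def ridge_traversal(vesselness, seed, threshold=0.5):
    """Greedy centerline tracker: from a seed on a 2D vesselness map,
    repeatedly step to the brightest unvisited 8-neighbour whose
    vesselness stays above threshold. Only local information is used,
    so the cost is proportional to the centerline length, not the image."""
    rows, cols = len(vesselness), len(vesselness[0])
    path, visited = [seed], {seed}
    r, c = seed
    while True:
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
                and (r + dr, c + dc) not in visited]
        nbrs = [p for p in nbrs if vesselness[p[0]][p[1]] >= threshold]
        if not nbrs:
            return path
        r, c = max(nbrs, key=lambda p: vesselness[p[0]][p[1]])
        visited.add((r, c))
        path.append((r, c))
```

    In the full pipeline the map would come from a multi-scale vesselness filter (e.g. Frangi-style eigenvalue analysis of the Hessian) applied in 3D.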

  17. Dynamic Multiscale Boundary Conditions for 4D CT Images of Healthy and Emphysematous Rat

    SciTech Connect

    Jacob, Rick E.; Carson, James P.; Thomas, Mathew; Einstein, Daniel R.

    2013-06-14

    Changes in the shape of the lung during breathing determine the movement of airways and alveoli, and thus impact airflow dynamics. Modeling airflow dynamics in health and disease is a key goal for predictive multiscale models of respiration. Past efforts to model changes in lung shape during breathing have measured shape at multiple breath-holds. However, breath-holds do not capture hysteretic differences between inspiration and expiration resulting from the additional energy required for inspiration. Alternatively, imaging dynamically, without breath-holds, allows measurement of hysteretic differences. In this study, we acquire multiple micro-CT images per breath (4DCT) in live rats, and from these images we develop, for the first time, dynamic volume maps. These maps show changes in local volume across the entire lung throughout the breathing cycle and accurately predict the global pressure-volume (PV) hysteresis.
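    The PV hysteresis that the dynamic maps predict is the area enclosed by the pressure-volume loop over one breath; for sampled data this area can be computed with the shoelace formula (a generic sketch, not the paper's code).

```python
def pv_hysteresis_area(pressure, volume):
    """Area enclosed by a sampled, closed pressure-volume loop (shoelace
    formula). A nonzero area is the hysteresis: inspiratory work that is
    not recovered during expiration."""
    n = len(pressure)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n          # wrap around to close the loop
        s += pressure[i] * volume[j] - pressure[j] * volume[i]
    return abs(s) / 2.0
```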

  18. A novel non-registration based segmentation approach of 4D dynamic upper airway MR images: minimally interactive fuzzy connectedness

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Sin, Sanghun; Wagshul, Mark E.; Arens, Raanan

    2014-03-01

    There are several disease conditions that lead to upper airway restrictive disorders. In the study of these conditions, it is important to take into account the dynamic nature of the upper airway. Currently, dynamic MRI is the modality of choice for studying these diseases. Unfortunately, the contrast resolution obtainable in the images poses many challenges for an effective segmentation of the upper airway structures. No viable methods have been developed to date to solve this problem. In this paper, we demonstrate the adaptation of the iterative relative fuzzy connectedness (IRFC) algorithm for this application as a potential practical tool. After preprocessing to correct for background image non-uniformities and the non-standardness of MRI intensities, seeds are specified for the airway and its crucial background tissue components in only the 3D image corresponding to the first time instance of the 4D volume. Subsequently, the process runs without human interaction and segments the whole 4D volume in 10 s. Our evaluations indicate that the segmentations are of very good quality, achieving true positive and false positive volume fractions and boundary distance with respect to reference manual segmentations of about 93%, 0.1%, and 0.5 mm, respectively.
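    The core of (iterative relative) fuzzy connectedness is a connectivity map: each pixel's value is the strength of its best path to a seed, where a path is only as strong as its weakest affinity link. A Dijkstra-style sketch on a 2D grid, with a deliberately simplified per-pixel affinity model, is given below; real IRFC derives affinities from intensity homogeneity and competes multiple seed sets.

```python
import heapq

def fuzzy_connectedness(affinity, seed):
    """Max-min path-strength map from a seed over a 2D grid.

    affinity holds a per-pixel affinity in [0, 1]; a link between two
    4-neighbours is taken as the min of their node affinities (a
    simplification of the usual pairwise affinity)."""
    rows, cols = len(affinity), len(affinity[0])
    conn = [[0.0] * cols for _ in range(rows)]
    sr, sc = seed
    conn[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]
    while heap:
        neg, r, c = heapq.heappop(heap)
        if -neg < conn[r][c]:
            continue                  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                strength = min(-neg, affinity[r][c], affinity[nr][nc])
                if strength > conn[nr][nc]:
                    conn[nr][nc] = strength
                    heapq.heappush(heap, (-strength, nr, nc))
    return conn
```

    Thresholding the map, or comparing maps grown from object and background seeds as IRFC does, then yields the segmentation.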

  19. MO-F-CAMPUS-J-03: Sorting 2D Dynamic MR Images Using Internal Respiratory Signal for 4D MRI

    SciTech Connect

    Wen, Z; Hui, C; Beddar, S; Stemkens, B; Tijssen, R; Berg, C van den

    2015-06-15

    Purpose: To develop a novel algorithm to extract an internal respiratory signal (IRS) for sorting dynamic magnetic resonance (MR) images in order to achieve four-dimensional (4D) MR imaging. Methods: Dynamic MR images were obtained with balanced steady-state free precession by acquiring each two-dimensional sagittal slice repeatedly for more than one breathing cycle. To generate a robust IRS, we used 5 different representative internal respiratory surrogates in both the image space (body area) and the Fourier space (the first two low-frequency phase components in the anterior-posterior direction, and the first two low-frequency phase components in the superior-inferior direction). A clustering algorithm was then used to search for a group of similar individual internal signals, which was then used to formulate the final IRS. A phantom study and a volunteer study were performed to demonstrate the effectiveness of this algorithm. The IRS was compared to the signal from the respiratory bellows. Results: The IRS computed by our algorithm matched well with the bellows signal in both the phantom and the volunteer studies. On average, the normalized cross correlation between the IRS and the bellows signal was 0.97 in the phantom study and 0.87 in the volunteer study, respectively. The average difference between the end-inspiration times in the IRS and bellows signal was 0.18 s in the phantom study and 0.14 s in the volunteer study, respectively. 4D images sorted based on the IRS showed minimal mismatch artifacts, and the motion of the anatomy was coherent with the respiratory phases. Conclusion: A novel algorithm was developed to generate an IRS from dynamic MR images to achieve 4D MR imaging. The performance of the IRS was comparable to that of the bellows signal. It can be easily implemented in the clinic and could potentially replace the use of external respiratory surrogates. This research was partially funded by the Center for Radiation Oncology Research from
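    Two of the ingredients above, the body-area image-space surrogate and the normalized cross correlation used to compare the IRS against the bellows signal, can be sketched as follows; the intensity threshold and the frame layout are assumptions.

```python
import math

def body_area(frame, threshold=0.5):
    """Image-space surrogate: number of above-threshold pixels in one
    2D frame; the body cross-section grows and shrinks with breathing."""
    return sum(1 for row in frame for v in row if v > threshold)

def ncc(a, b):
    """Normalized cross-correlation between two equal-length signals,
    e.g. the extracted IRS and the bellows recording."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den
```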

  20. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.
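    Wavelet shrinkage of the kind used for the denoising step can be illustrated with a one-level Haar transform; the actual method uses the dual-tree complex wavelet transform, which adds the near shift-invariance and directional selectivity this stand-in lacks.

```python
def haar_denoise(x, lam):
    """One-level Haar wavelet shrinkage on an even-length 1D signal:
    transform, soft-threshold the detail coefficients, invert."""
    s2 = 2 ** 0.5
    approx = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    soft = lambda d: max(abs(d) - lam, 0.0) * (1.0 if d >= 0 else -1.0)
    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])   # inverse Haar step
    return out
```

    With a zero threshold the transform reconstructs the input exactly; with a positive threshold, small (presumably noise) details are suppressed.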

  1. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, in this paper an improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were respectively tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation criteria, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
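    The two evaluation criteria named above are straightforward to state precisely; this is a generic sketch with an assumed 8-bit peak value, not the authors' evaluation code.

```python
import math

def mse(ref, img):
    """Mean-square error between a reference image and a reconstruction,
    both given as 2D lists of equal shape."""
    n = sum(len(row) for row in ref)
    return sum((a - b) ** 2
               for ra, rb in zip(ref, img)
               for a, b in zip(ra, rb)) / n

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better
    reconstruction, and identical images give +inf."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```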

  2. TU-G-BRA-02: Can We Extract Lung Function Directly From 4D-CT Without Deformable Image Registration?

    SciTech Connect

    Kipritidis, J; Woodruff, H; Counter, W; Keall, P; Hofman, M; Siva, S; Callahan, J; Le Roux, P; Hardcastle, N

    2015-06-15

    Purpose: Dynamic CT ventilation imaging (CT-VI) visualizes air volume changes in the lung by evaluating breathing-induced lung motion using deformable image registration (DIR). Dynamic CT-VI could enable functionally adaptive lung cancer radiation therapy, but its sensitivity to DIR parameters poses challenges for validation. We hypothesize that a direct metric using CT parameters derived from Hounsfield units (HU) alone can provide similar ventilation images without DIR. We compare the accuracy of Direct and Dynamic CT-VIs versus positron emission tomography (PET) images of inhaled 68Ga-labelled nanoparticles (‘Galligas’). Methods: 25 patients with lung cancer underwent Galligas 4D-PET/CT scans prior to radiation therapy. For each patient we produced three CT-VIs. (i) Our novel method, Direct CT-VI, models blood-gas exchange as the product of air and tissue density at each lung voxel based on time-averaged 4D-CT HU values. Dynamic CT-VIs were produced by evaluating: (ii) regional HU changes, and (iii) regional volume changes between the exhale and inhale 4D-CT phase images using a validated B-spline DIR method. We assessed the accuracy of each CT-VI by computing the voxel-wise Spearman correlation with free-breathing Galligas PET, and also performed a visual analysis. Results: Surprisingly, Direct CT-VIs exhibited better global correlation with Galligas PET than either of the dynamic CT-VIs. The (mean ± SD) correlations were (0.55 ± 0.16), (0.41 ± 0.22) and (0.29 ± 0.27) for Direct, Dynamic HU-based and Dynamic volume-based CT-VIs respectively. Visual comparison of Direct CT-VI to PET demonstrated similarity for emphysema defects and ventral-to-dorsal gradients, but inability to identify decreased ventilation distal to tumor obstruction. Conclusion: Our data support the hypothesis that Direct CT-VIs are as accurate as Dynamic CT-VIs in terms of global correlation with Galligas PET. Visual analysis, however, demonstrated that different CT
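    The Direct CT-VI idea, a product of fractional air and tissue content at each voxel, follows from the standard linear HU mixture model (air = -1000 HU, tissue ≈ 0 HU). The exact scaling used in the authors' method may differ; this is the generic HU decomposition only.

```python
def direct_ctvi(hu):
    """Direct CT ventilation surrogate for one lung voxel.

    Under the linear mixture model,
        F_air    = -HU / 1000       (clamped to [0, 1])
        F_tissue = 1 - F_air
    so the product F_air * F_tissue peaks for voxels that are roughly
    half air and half tissue, and vanishes for pure air or pure tissue.
    """
    f_air = min(max(-hu / 1000.0, 0.0), 1.0)
    f_tissue = 1.0 - f_air
    return f_air * f_tissue
```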

  3. Online 4d Reconstruction Using Multi-Images Available Under Open Access

    NASA Astrophysics Data System (ADS)

    Ioannides, M.; Hadjiprocopi, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E.; Makantasis, K.; Santos, P.; Fellner, D.; Stork, A.; Balet, O.; Julien, M.; Weinlinger, G.; Johnson, P. S.; Klein, M.; Fritsch, D.

    2013-07-01

    The advent of technology in digital cameras and their incorporation into virtually any smart mobile device has led to an explosion in the number of photographs taken every day. Today, the number of images stored online and available freely has reached unprecedented levels. It is estimated that in 2011, there were over 100 billion photographs stored in just one of the major social media sites, and this number is growing exponentially. Moreover, advances in the fields of Photogrammetry and Computer Vision have led to significant breakthroughs such as the Structure from Motion algorithm, which creates 3D models of objects from their two-dimensional photographs. The existence of powerful and affordable computational machinery enables not only the reconstruction of complex structures but also of entire cities. This paper illustrates an overview of our methodology for producing 3D models of Cultural Heritage structures such as monuments and artefacts from 2D data (pictures, video) available on Internet repositories, social media, Google Maps, Bing, etc. We also present new approaches to semantic enrichment of the end results and their subsequent export to Europeana, the European digital library, for integrated, interactive 3D visualisation within regular web browsers using WebGL and X3D. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical structures from millions of images floating around the web and interact with them.

  4. A radiobiological analysis of the effect of 3D versus 4D image-based planning in lung cancer radiotherapy.

    PubMed

    Roland, Teboh; Mavroidis, Panayiotis; Gutierrez, Alonso; Goytia, Virginia; Papanikolaou, Niko

    2009-09-21

    Dose distributions generated on a static anatomy may differ significantly from those delivered to temporally varying anatomy such as for abdominal and thoracic tumors, due largely in part to the unavoidable organ motion and deformation effects stemming from respiration. In this work, the degree of such variation for three treatment techniques, namely static conventional, gating and target tracking radiotherapy, was investigated. The actual delivered dose was approximated by planning all the phases of a 4DCT image set. Data from six (n = 6) previously treated lung cancer patients were used for this study with tumor motion ranging from 2 to 10 mm. Complete radiobiological analyses were performed to assess the clinical significance of the observed discrepancies between the 3D and 4DCT image-based dose distributions. Using the complication-free tumor control probability (P+) objective, we observed small differences in P+ between the 3D and 4DCT image-based plans (<2.0% difference on average) for the gating and static conventional regimens and higher differences in P+ (4.0% on average) for the tracking regimen. Furthermore, we observed, as a general trend, that the 3D plan underestimated the P+ values. While it is not possible to draw any general conclusions from a small patient cohort, our results suggest that there exists a patient population in which 4D planning does not provide any additional benefits beyond that afforded by 3D planning for static conventional or gated radiotherapy. This statement is consistent with previous studies based on physical dosimetric evaluations only. The higher differences observed with the tracking technique suggest that individual patient plans should be evaluated on a case-by-case basis to assess if 3D or 4D imaging is appropriate for the tracking technique.
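    The P+ objective combines tumour control and complication probabilities. One common parameterization is the Poisson dose-response model with D50 and normalized slope gamma; the combination below assumes statistical independence of benefit and injury, which is a generic sketch and not necessarily the exact model used in the study.

```python
import math

def poisson_response(d, d50, gamma):
    """Poisson dose-response probability at uniform dose d (Gy):
        P(d) = 2 ** ( -exp( e * gamma * (1 - d / d50) ) )
    so that P(D50) = 0.5 and gamma is the normalized slope at D50.
    Used for both tumour control (P_B) and complications (P_I)."""
    return 2.0 ** (-math.exp(math.e * gamma * (1.0 - d / d50)))

def p_plus(p_benefit, p_injury):
    """Complication-free tumour control, assuming response and
    complication are statistically independent events."""
    return p_benefit * (1.0 - p_injury)
```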

  5. Assessment of regional ventilation and deformation using 4D-CT imaging for healthy human lungs during tidal breathing.

    PubMed

    Jahani, Nariman; Choi, Sanghun; Choi, Jiwoong; Iyer, Krishna; Hoffman, Eric A; Lin, Ching-Long

    2015-11-15

    This study aims to assess regional ventilation, nonlinearity, and hysteresis of human lungs during dynamic breathing via image registration of four-dimensional computed tomography (4D-CT) scans. Six healthy adult humans were studied by spiral multidetector-row CT during controlled tidal breathing as well as during total lung capacity and functional residual capacity breath holds. Static images were utilized to contrast static vs. dynamic (deep vs. tidal) breathing. A rolling-seal piston system was employed to maintain consistent tidal breathing during 4D-CT spiral image acquisition, providing required between-breath consistency for physiologically meaningful reconstructed respiratory motion. Registration-derived variables including local air volume and anisotropic deformation index (ADI, an indicator of preferential deformation in response to local force) were employed to assess regional ventilation and lung deformation. Lobar distributions of air volume change during tidal breathing were correlated with those of deep breathing (R(2) ≈ 0.84). Small discrepancies between tidal and deep breathing were shown to be likely due to different distributions of air volume change in the left and the right lungs. We also demonstrated an asymmetric characteristic of flow rate between inhalation and exhalation. With ADI, we were able to quantify nonlinearity and hysteresis of lung deformation that can only be captured in dynamic images. Nonlinearity quantified by ADI is greater during inhalation, and it is stronger in the lower lobes (P < 0.05). Lung hysteresis estimated by the difference of ADI between inhalation and exhalation is more significant in the right lungs than that in the left lungs.

  6. Assessment of regional ventilation and deformation using 4D-CT imaging for healthy human lungs during tidal breathing

    PubMed Central

    Jahani, Nariman; Choi, Jiwoong; Iyer, Krishna; Hoffman, Eric A.

    2015-01-01

    This study aims to assess regional ventilation, nonlinearity, and hysteresis of human lungs during dynamic breathing via image registration of four-dimensional computed tomography (4D-CT) scans. Six healthy adult humans were studied by spiral multidetector-row CT during controlled tidal breathing as well as during total lung capacity and functional residual capacity breath holds. Static images were utilized to contrast static vs. dynamic (deep vs. tidal) breathing. A rolling-seal piston system was employed to maintain consistent tidal breathing during 4D-CT spiral image acquisition, providing required between-breath consistency for physiologically meaningful reconstructed respiratory motion. Registration-derived variables including local air volume and anisotropic deformation index (ADI, an indicator of preferential deformation in response to local force) were employed to assess regional ventilation and lung deformation. Lobar distributions of air volume change during tidal breathing were correlated with those of deep breathing (R2 ≈ 0.84). Small discrepancies between tidal and deep breathing were shown to be likely due to different distributions of air volume change in the left and the right lungs. We also demonstrated an asymmetric characteristic of flow rate between inhalation and exhalation. With ADI, we were able to quantify nonlinearity and hysteresis of lung deformation that can only be captured in dynamic images. Nonlinearity quantified by ADI is greater during inhalation, and it is stronger in the lower lobes (P < 0.05). Lung hysteresis estimated by the difference of ADI between inhalation and exhalation is more significant in the right lungs than that in the left lungs. PMID:26316512

  7. 4-D imaging and monitoring of the Solfatara crater (Italy) by ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Pilz, Marco; Parolai, Stefano; Woith, Heiko; Gresse, Marceau; Vandemeulebrouck, Jean

    2016-04-01

    Imaging shallow subsurface structures and monitoring related temporal variations are two of the main tasks for modern geosciences and seismology. Although many observations have reported temporal velocity changes, e.g., in volcanic areas and on landslides, new methods based on passive sources like ambient seismic noise can provide accurate spatially and temporally resolved information on the velocity structure and on velocity changes. The success of these passive applications is explained by the fact that these methods are based on surface waves which are always present in the ambient seismic noise wave field because they are excited preferentially by superficial sources. Such surface waves can easily be extracted because they dominate the Green's function between receivers located at the surface. For real-time monitoring of the shallow velocity structure of the Solfatara crater, one of the forty volcanoes in the Campi Flegrei area characterized by an intense hydrothermal activity due to the interaction of deep convection and meteoric water, we have installed a dense network of 50 seismological sensing units covering the whole surface area in the framework of the European project MED-SUV (The MED-SUV project has received funding from the European Union Seventh Framework Programme FP7 under Grant agreement no 308665). Continuous recordings of the ambient seismic noise over several days as well as signals of an active vibroseis source have been used. Based on a weighted inversion procedure for 3D-passive imaging using ambient noise cross-correlations of both Rayleigh and Love waves, we will present a high-resolution shear-wave velocity model of the structure beneath the Solfatara crater and its temporal changes. Results of seismic tomography are compared with a 3-D electrical resistivity model and CO2 flux map.
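    The core of the method, retrieving the inter-station Green's function by cross-correlating noise records, can be sketched with two synthetic stations; the plane-wave delay model and all parameters are deliberate simplifications.

```python
import random

def cross_correlate(a, b, max_lag):
    """Time-domain cross-correlation of two records for lags
    -max_lag..max_lag; for diffuse noise, stacking such correlations
    over long windows converges toward the inter-station Green's
    function, whose peak lag gives the surface-wave travel time."""
    n = len(a)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        out.append(sum(a[i] * b[i + lag] for i in range(n)
                       if 0 <= i + lag < n))
    return out

# Two synthetic stations recording the same noise, with station B seeing
# the wavefield 5 samples later (a plane wave travelling from A to B).
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]
delay = 5
sta_a = noise
sta_b = [0.0] * delay + noise[:-delay]
cc = cross_correlate(sta_a, sta_b, 20)
lag = max(range(len(cc)), key=lambda i: cc[i]) - 20   # lag of the peak
```

    Inverting such travel times over many station pairs is what produces the tomographic velocity model; repeating the correlation over time windows tracks velocity changes.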

  8. Acoustic micro-tapping for non-contact 4D imaging of tissue elasticity.

    PubMed

    Ambroziński, Łukasz; Song, Shaozhen; Yoon, Soon Joon; Pelivanov, Ivan; Li, David; Gao, Liang; Shen, Tueng T; Wang, Ruikang K; O'Donnell, Matthew

    2016-12-23

    Elastography plays a key role in characterizing soft media such as biological tissue. Although this technology has found widespread use in both clinical diagnostics and basic science research, nearly all methods require direct physical contact with the object of interest and can even be invasive. For a number of applications, such as diagnostic measurements on the anterior segment of the eye, physical contact is not desired and may even be prohibited. Here we present a fundamentally new approach to dynamic elastography using non-contact mechanical stimulation of soft media with precise spatial and temporal shaping. We call it acoustic micro-tapping (AμT) because it employs focused, air-coupled ultrasound to induce significant mechanical displacement at the boundary of a soft material using reflection-based radiation force. Combining it with high-speed, four-dimensional (three space dimensions plus time) phase-sensitive optical coherence tomography creates a non-contact tool for high-resolution and quantitative dynamic elastography of soft tissue at near real-time imaging rates. The overall approach is demonstrated in ex-vivo porcine cornea.
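
The quantitative step — converting the tracked surface-wave propagation into an elasticity estimate — might look like the following toy sketch, assuming a simple bulk-shear relation mu = rho * c^2 and synthetic displacement traces (the frame rate, positions, and wave speed are hypothetical, not the paper's values):

```python
import numpy as np

fs = 50_000.0                        # assumed OCT frame rate (Hz)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of surface tracking
c_true = 3.0                         # m/s, a plausible corneal surface-wave speed
x1, x2 = 0.001, 0.004                # two lateral tracking positions (m)

def displacement(x):
    """Synthetic surface displacement: a Gaussian pulse arriving at t = x / c_true + t0."""
    return np.exp(-((t - x / c_true - 0.002) ** 2) / (2 * 1e-4 ** 2))

t1 = t[np.argmax(displacement(x1))]
t2 = t[np.argmax(displacement(x2))]
c_est = (x2 - x1) / (t2 - t1)        # time-of-flight wave-speed estimate (m/s)
mu = 1000.0 * c_est ** 2             # shear modulus (Pa) via mu = rho * c^2, rho ~ 1000 kg/m^3
```

The real analysis in dynamic elastography is considerably more involved (dispersion, guided-wave effects), but the time-of-flight idea is the core of it.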

  9. Acoustic micro-tapping for non-contact 4D imaging of tissue elasticity

    PubMed Central

    Ambroziński, Łukasz; Song, Shaozhen; Yoon, Soon Joon; Pelivanov, Ivan; Li, David; Gao, Liang; Shen, Tueng T.; Wang, Ruikang K.; O’Donnell, Matthew

    2016-01-01

    Elastography plays a key role in characterizing soft media such as biological tissue. Although this technology has found widespread use in both clinical diagnostics and basic science research, nearly all methods require direct physical contact with the object of interest and can even be invasive. For a number of applications, such as diagnostic measurements on the anterior segment of the eye, physical contact is not desired and may even be prohibited. Here we present a fundamentally new approach to dynamic elastography using non-contact mechanical stimulation of soft media with precise spatial and temporal shaping. We call it acoustic micro-tapping (AμT) because it employs focused, air-coupled ultrasound to induce significant mechanical displacement at the boundary of a soft material using reflection-based radiation force. Combining it with high-speed, four-dimensional (three space dimensions plus time) phase-sensitive optical coherence tomography creates a non-contact tool for high-resolution and quantitative dynamic elastography of soft tissue at near real-time imaging rates. The overall approach is demonstrated in ex-vivo porcine cornea. PMID:28008920

  10. 4D imaging of fracturing in organic-rich shales during heating

    SciTech Connect

    Maya Kobchenko; Hamed Panahi; François Renard; Dag K. Dysthe; Anders Malthe-Sørenssen; Adriano Mazzini; Julien Scheibert; Bjørn Jamtveit; Paul Meakin

    2011-12-01

    To better understand the mechanisms of fracture pattern development and fluid escape in low permeability rocks, we performed time-resolved in situ X-ray tomography imaging to investigate the processes that occur during the slow heating (from 60 to 400 °C) of organic-rich Green River shale. At about 350 °C cracks nucleated in the sample, and as the temperature continued to increase, these cracks propagated parallel to shale bedding and coalesced, thus cutting across the sample. Thermogravimetry and gas chromatography revealed that the fracturing occurring at ~350 °C was associated with significant mass loss and release of light hydrocarbons generated by the decomposition of immature organic matter. Kerogen decomposition is thought to cause an internal pressure build-up sufficient to form cracks in the shale, thus providing pathways for the outgoing hydrocarbons. We show that a 2D numerical model based on this idea qualitatively reproduces the experimentally observed dynamics of crack nucleation, growth and coalescence, as well as the irregular outlines of the cracks. Our results provide a new description of fracture pattern formation in low permeability shales.

  11. Acoustic micro-tapping for non-contact 4D imaging of tissue elasticity

    NASA Astrophysics Data System (ADS)

    Ambroziński, Łukasz; Song, Shaozhen; Yoon, Soon Joon; Pelivanov, Ivan; Li, David; Gao, Liang; Shen, Tueng T.; Wang, Ruikang K.; O’Donnell, Matthew

    2016-12-01

    Elastography plays a key role in characterizing soft media such as biological tissue. Although this technology has found widespread use in both clinical diagnostics and basic science research, nearly all methods require direct physical contact with the object of interest and can even be invasive. For a number of applications, such as diagnostic measurements on the anterior segment of the eye, physical contact is not desired and may even be prohibited. Here we present a fundamentally new approach to dynamic elastography using non-contact mechanical stimulation of soft media with precise spatial and temporal shaping. We call it acoustic micro-tapping (AμT) because it employs focused, air-coupled ultrasound to induce significant mechanical displacement at the boundary of a soft material using reflection-based radiation force. Combining it with high-speed, four-dimensional (three space dimensions plus time) phase-sensitive optical coherence tomography creates a non-contact tool for high-resolution and quantitative dynamic elastography of soft tissue at near real-time imaging rates. The overall approach is demonstrated in ex-vivo porcine cornea.

  12. Reconstruction of 4D-CT from a Single Free-Breathing 3D-CT by Spatial-Temporal Image Registration

    PubMed Central

    Wu, Guorong; Wang, Qian; Lian, Jun; Shen, Dinggang

    2011-01-01

    In the radiation therapy of lung cancer, a free-breathing 3D-CT image is usually acquired on the treatment day for image-guided patient setup, by registering it with the free-breathing 3D-CT image acquired on the planning day. In this way, the optimal dose plan computed on the planning day can be transferred onto the treatment day for cancer radiotherapy. However, patient setup based on the simple registration of the free-breathing 3D-CT images of the planning and treatment days may mislead the radiotherapy, since the free-breathing 3D-CT is actually a mixed-phase image, with different slices often acquired at different respiratory phases. Moreover, a 4D-CT that is generally acquired on the planning day to improve dose planning is often ignored for guiding patient setup on the treatment day. To overcome these limitations, we present a novel two-step method to reconstruct the 4D-CT from a single free-breathing 3D-CT of the treatment day, by utilizing the 4D-CT model built on the planning day. Specifically, in the first step, we propose a new spatial-temporal registration algorithm to align all phase images of the 4D-CT acquired on the planning day, for building a 4D-CT model with temporal correspondences established among all respiratory phases. In the second step, we first determine the optimal phase for each slice of the free-breathing (mixed-phase) 3D-CT of the treatment day by comparing with the 4D-CT of the planning day, thus obtaining a sequence of partial 3D-CT images for the treatment day, each with only incomplete image information in certain slices; we then reconstruct a complete 4D-CT for the treatment day by warping the 4D-CT of the planning day (with complete information) to the sequence of partial 3D-CT images of the treatment day, under the guidance of the 4D-CT model built in the planning day. We have comprehensively evaluated our 4D-CT model building algorithm on a public lung image database, achieving the best registration

  13. 4D imaging of fluid escape in low permeability shales during heating

    NASA Astrophysics Data System (ADS)

    Renard, F.; Kobchenko, M.

    2012-04-01

    The coupling between thermal effects and deformation is relevant in many natural geological environments (rising magma, primary migration of hydrocarbons, vents) and has many industrial applications (storage of nuclear waste, enhanced hydrocarbon recovery, coal exploitation, geothermal plants). When thermal effects involve phase transformation in the rock and production of fluids, a strong coupling may emerge between the processes of fluid escape and the ability of the rock to deform and transport fluids. To better understand the mechanisms of fracture pattern development and fluid escape in low permeability rocks, we performed time-resolved in situ X-ray tomography imaging to investigate the processes that occur during the slow heating (from 60 to 400 °C) of organic-rich Green River shale. At about 350 °C cracks nucleated in the sample, and as the temperature continued to increase, these cracks propagated parallel to shale bedding and coalesced, thus cutting across the sample. Thermogravimetry and gas chromatography revealed that the fracturing occurring at ~350 °C was associated with significant mass loss and release of light hydrocarbons generated by the decomposition of immature organic matter. Kerogen decomposition is thought to cause an internal pressure build-up sufficient to form cracks in the shale, thus providing pathways for the outgoing hydrocarbons. We show that a 2D numerical model based on this idea qualitatively reproduces the experimentally observed dynamics of crack nucleation, growth and coalescence, as well as the irregular outlines of the cracks. Our results provide a new description of fracture pattern formation in low permeability shales.

  14. Multidimensional immunolabeling and 4D time-lapse imaging of vital ex vivo lung tissue

    PubMed Central

    Vierkotten, Sarah; Lindner, Michael; Königshoff, Melanie; Eickelberg, Oliver

    2015-01-01

    During the last decades, the study of cell behavior was largely accomplished in uncoated or extracellular matrix (ECM)-coated plastic dishes. To date, considerable cell-biological efforts have tried to model in vitro the natural microenvironment found in vivo. For the lung, explants cultured ex vivo as lung tissue cultures (LTCs) provide a three-dimensional (3D) tissue model containing all cells in their natural microenvironment. Techniques for assessing the dynamic live interaction between ECM and cellular tissue components, however, are still missing. Here, we describe specific multidimensional immunolabeling of living 3D-LTCs, derived from healthy and fibrotic mouse lungs, as well as patient-derived 3D-LTCs, and concomitant real-time four-dimensional multichannel imaging thereof. This approach allowed the evaluation of dynamic interactions between mesenchymal cells and macrophages with their ECM. Furthermore, fibroblasts transiently expressing focal adhesion markers incorporated into the 3D-LTCs, paving new ways for studying the dynamic interaction between cellular adhesions and their naturally derived ECM. A novel protein transfer technology (FuseIt/Ibidi) shuttled fluorescently labeled α-smooth muscle actin antibodies into the native cells of living 3D-LTCs, enabling live monitoring of α-smooth muscle actin-positive stress fibers in native tissue myofibroblasts residing in fibrotic lesions of 3D-LTCs. Finally, this technique can be applied to healthy and diseased human lung tissue, as well as to adherent cells in conventional two-dimensional cell culture. This novel method will provide valuable new insights into the dynamics of ECM (patho)biology, studying in detail the interaction between ECM and cellular tissue components in their natural microenvironment.

  15. Computational biomechanics and experimental validation of vessel deformation based on 4D-CT imaging of the porcine aorta

    NASA Astrophysics Data System (ADS)

    Hazer, Dilana; Finol, Ender A.; Kostrzewa, Michael; Kopaigorenko, Maria; Richter, Götz-M.; Dillmann, Rüdiger

    2009-02-01

    Cardiovascular disease results from pathological biomechanical conditions and fatigue of the vessel wall. Image-based computational modeling provides a physical and realistic insight into the patient-specific biomechanics and enables accurate predictive simulations of development, growth and failure of cardiovascular disease. An experimental validation is necessary for the evaluation and the clinical implementation of such computational models. In the present study, we have implemented dynamic Computed-Tomography (4D-CT) imaging and catheter-based in vivo measured pressures to numerically simulate and experimentally evaluate the biomechanics of the porcine aorta. The computations are based on the Finite Element Method (FEM) and simulate the arterial wall response to the transient pressure-based boundary condition. They are evaluated by comparing the numerically predicted wall deformation and that calculated from the acquired 4D-CT data. The dynamic motion of the vessel is quantified by means of the hydraulic diameter, analyzing sequences at 5% increments over the cardiac cycle. Our results show that accurate biomechanical modeling is possible using FEM-based simulations. The RMS error of the computed hydraulic diameter at five cross-sections of the aorta was 0.188, 0.252, 0.280, 0.237 and 0.204 mm, which is equivalent to 1.7%, 2.3%, 2.7%, 2.3% and 2.0%, respectively, when expressed as a function of the time-averaged hydraulic diameter measured from the CT images. The present investigation is a first attempt to simulate and validate vessel deformation based on realistic morphological data and boundary conditions. An experimentally validated system would help in evaluating individual therapies and optimal treatment strategies in the field of minimally invasive endovascular surgery.
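
The evaluation metric can be illustrated with synthetic numbers (the diameter values below are invented; only the formulas — hydraulic diameter D_h = 4A/P and the RMS error expressed as a fraction of the time-averaged diameter — follow the abstract):

```python
import numpy as np

def hydraulic_diameter(radius):
    """D_h = 4*A/P; for a circular lumen this reduces to the ordinary diameter 2r."""
    area = np.pi * radius ** 2
    perimeter = 2 * np.pi * radius
    return 4 * area / perimeter

phases = np.linspace(0, 1, 21)[:-1]                  # 5% increments over the cardiac cycle
measured = 10.0 + 0.5 * np.sin(2 * np.pi * phases)   # mm, synthetic "CT-measured" D_h
simulated = measured + 0.2                           # mm, synthetic "FEM-predicted" D_h

rmse = np.sqrt(np.mean((simulated - measured) ** 2))
rel = rmse / measured.mean() * 100                   # % of time-averaged hydraulic diameter
print(round(float(rmse), 3), round(float(rel), 2))
```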

  16. Impact of scanning parameters and breathing patterns on image quality and accuracy of tumor motion reconstruction in 4D CBCT: a phantom study.

    PubMed

    Lee, Soyoung; Yan, Guanghua; Lu, Bo; Kahler, Darren; Li, Jonathan G; Sanjiv, Samat S

    2015-11-08

    Four-dimensional cone-beam CT (4D CBCT) substantially reduces the respiration-induced motion blurring artifacts of three-dimensional (3D) CBCT. However, the image quality of 4D CBCT is significantly degraded, which may affect its accuracy in localizing a mobile tumor for high-precision, image-guided radiation therapy (IGRT). The purpose of this study was to investigate the impact of scanning parameters (hereinafter collectively referred to as scanning sequence) and breathing patterns on the image quality and the accuracy of the computed tumor trajectory for a commercial 4D CBCT system, in preparation for its clinical implementation. We simulated a series of periodic and aperiodic sinusoidal breathing patterns with a respiratory motion phantom. The aperiodic pattern was created by varying the period or amplitude of individual sinusoidal breathing cycles. 4D CBCT scans of the phantom were acquired with a manufacturer-supplied scanning sequence (4D-S-slow) and two in-house modified scanning sequences (4D-M-slow and 4D-M-fast). While 4D-S-slow used a small field of view (FOV), partial rotation (200°), and no imaging filter, 4D-M-slow and 4D-M-fast used a medium FOV, full rotation, and the F1 filter. The scanning speed was doubled in 4D-M-fast (100°/min gantry rotation). The image quality of the 4D CBCT scans was evaluated using contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and motion blurring ratio (MBR). The trajectory of the moving target was reconstructed by registering each phase of the 4D CBCT with a reference CT. Root-mean-squared-error (RMSE) analysis was used to quantify its accuracy. A significant decrease in CNR and SNR from 3D CBCT to 4D CBCT was observed. The 4D-S-slow and 4D-M-fast scans had comparable image quality, while the 4D-M-slow scans performed better due to doubled projections. Both CNR and SNR decreased slightly as the breathing period increased, while no dependence on the amplitude was observed. The difference of both CNR and SNR
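
The image-quality metrics can be sketched from ROI statistics; the definitions below are common ones and not necessarily the exact formulas used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
insert = 100 + 5 * rng.standard_normal(1000)       # pixel values inside a contrast insert
background = 80 + 5 * rng.standard_normal(1000)    # pixel values in uniform background

snr = insert.mean() / insert.std()                               # signal-to-noise ratio
cnr = abs(insert.mean() - background.mean()) / background.std()  # contrast-to-noise ratio
print(round(float(snr), 1), round(float(cnr), 1))
```

Both quantities drop in 4D CBCT because each respiratory bin is reconstructed from only a fraction of the projections.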

  17. SU-E-J-74: Impact of Respiration-Correlated Image Quality On Tumor Motion Reconstruction in 4D-CBCT: A Phantom Study

    SciTech Connect

    Lee, S; Lu, B; Samant, S

    2014-06-01

    Purpose: To investigate the effects of scanning parameters and respiratory patterns on the image quality of 4-dimensional cone-beam computed tomography (4D-CBCT) imaging, and to assess the accuracy of the computed tumor trajectory for lung imaging using registration of phased 4D-CBCT imaging with the treatment planning CT. Methods: We simulated periodic and non-sinusoidal respirations with various breathing periods and amplitudes using a respiratory phantom (Quasar, Modus Medical Devices Inc.) to acquire respiration-correlated 4D-CBCT images. 4D-CBCT scans (Elekta Oncology Systems Ltd.) were performed with different scanning parameters for collimation size (e.g., small and medium fields of view) and scanning speed (e.g., slow 50°·min⁻¹, fast 100°·min⁻¹). Using a standard CBCT-QA phantom (Catphan500, The Phantom Laboratory), the image quality of all phases in 4D-CBCT was evaluated with the contrast-to-noise ratio (CNR) for lung tissue and the uniformity in each module. Using a respiratory phantom, the target imaging in 4D-CBCT was compared to the 3D-CBCT target image. The target trajectory from the 10 respiratory phases in 4D-CBCT was extracted using an automatic image registration, and its accuracy was subsequently assessed by comparing with the actual motion of the target. Results: Image analysis indicated that a short respiration with a small amplitude resulted in superior CNR and uniformity. Smaller variation of CNR and uniformity was present amongst different respiratory phases. The small field of view with a partial scan using slow scanning can improve CNR, but degraded uniformity. A large amplitude of respiration can degrade image quality. The RMS of voxel densities in the tumor area of 4D-CBCT images between sinusoidal and non-sinusoidal motion exhibited no significant difference. The maximum displacement errors of the motion trajectories were less than 1.0 mm and 13.5 mm for sinusoidal and non-sinusoidal breathing, respectively. The accuracy of motion reconstruction showed good overall

  18. First Steps Toward Ultrasound-Based Motion Compensation for Imaging and Therapy: Calibration with an Optical System and 4D PET Imaging

    PubMed Central

    Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter

    2015-01-01

    Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than a respiration belt or optical tracking. The goal of this project is to develop US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system – even if the US probe is moving due to respiration. It is shown that the US tracking signal is equally adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Eventually, we show in vitro that US-based motion tracking in absolute room coordinates with a moving US transducer is feasible. PMID:26649277

  19. First Steps Toward Ultrasound-Based Motion Compensation for Imaging and Therapy: Calibration with an Optical System and 4D PET Imaging.

    PubMed

    Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter

    2015-01-01

    Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than a respiration belt or optical tracking. The goal of this project is to develop US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system - even if the US probe is moving due to respiration. It is shown that the US tracking signal is equally adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Eventually, we show in vitro that US-based motion tracking in absolute room coordinates with a moving US transducer is feasible.

  20. Advancement in Understanding Volcanic Processes by 4D Synchrotron X-ray Computed Microtomography Imaging of Rock Textures

    NASA Astrophysics Data System (ADS)

    Polacci, M.; Arzilli, F.; La Spina, G.

    2015-12-01

    X-ray computed microtomography (μCT) is the only high-resolution, non-destructive technique that allows visualization and processing of geomaterials directly in three dimensions. This, together with the development of increasingly sophisticated imaging techniques, has generated in the last ten years a widespread application of this methodology in Earth Sciences, from structural geology to palaeontology to igneous petrology to volcanology. Here, I will describe how X-ray μCT has contributed to advancing our knowledge of volcanic processes and eruption dynamics and illustrate the first, preliminary results from 4D (space + time) X-ray microtomographic experiments of magma kinetics in basaltic systems.

  1. Automatic detection of cardiac cycle and measurement of the mitral annulus diameter in 4D TEE images

    NASA Astrophysics Data System (ADS)

    Graser, Bastian; Hien, Maximilian; Rauch, Helmut; Meinzer, Hans-Peter; Heimann, Tobias

    2012-02-01

    Mitral regurgitation is a widespread problem. For successful surgical treatment, quantification of the mitral annulus, especially its diameter, is essential. Time-resolved 3D transesophageal echocardiography (TEE) is suitable for this task. Yet manual measurement in four dimensions is extremely time consuming, which confirms the need for automatic quantification methods. The method we propose is capable of automatically detecting the cardiac cycle phase (systole or diastole) for each time step and measuring the mitral annulus diameter. This is done using total variation noise filtering, the graph cut segmentation algorithm and morphological operators. An evaluation took place using expert measurements on 4D TEE data of 13 patients. The cardiac cycle was detected correctly on 78% of all images, and the mitral annulus diameter was measured with an average error of 3.08 mm. Its fully automatic processing makes the method easy to use in the clinical workflow, and it provides the surgeon with helpful information.
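
Total variation noise filtering, the first stage of pipelines like this one, can be sketched with Chambolle's projection algorithm (my choice of algorithm and parameters; the record does not specify which TV solver was used):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (last row/col zero)."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=0.2, n_iter=80, tau=0.125):
    """Chambolle (2004) dual projection for min TV(u) + ||u - f||^2 / (2*lam)."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px, py = (px + tau * gx) / norm, (py + tau * gy) / norm
    return f - lam * div(px, py)

rng = np.random.default_rng(6)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0        # a bright blob standing in for an anatomical structure
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

TV filtering flattens speckle-like noise while preserving the sharp edges that the subsequent graph-cut segmentation depends on.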

  2. SU-F-303-02: Achieving 4D MRI in Regular Breathing Cycle with Extended Acquisition Time of Dynamic MR Images

    SciTech Connect

    Hui, C; Beddar, S; Wen, Z; Stemkens, B; Tijssen, R; Berg, C van den

    2015-06-15

    Purpose: The purpose of this study is to develop a technique to obtain four-dimensional (4D) magnetic resonance (MR) images that are more representative of a patient's typical breathing cycle by utilizing an extended acquisition time while minimizing image artifacts. Methods: The 4D MR data were acquired with balanced steady-state free precession in a two-dimensional sagittal plane of view. Each slice was acquired repeatedly for about 15 s, thereby obtaining multiple images at each of the 10 phases in the respiratory cycle. This improves the probability that at least one of the images was acquired at the desired phase during a regular breathing cycle. To create optimal 4D MR images, an iterative approach was used to identify the set of images that yielded the highest slice-to-slice similarity. To assess the effectiveness of the approach, the data set was truncated into periods of 7 s (50 time points), 11 s (75 time points) and the full 15 s (100 time points). The 4D MR images were then sorted with data of the three different acquisition periods for comparison. Results: In general, the 4D MR images sorted using data from longer acquisition periods showed fewer mismatch artifacts. In addition, the normalized cross correlation (NCC) between slices of a 4D volume increases with increased acquisition period. The average NCC was 0.791 for the 7 s period, 0.794 for the 11 s period and 0.796 for the 15 s period. Conclusion: Our preliminary study showed that extending the acquisition time with the proposed sorting technique can improve image quality and reduce artifact presence in the 4D MR images. Data acquisition over two breathing cycles is a good trade-off between artifact reduction and scan time. This research was partially funded by the Center for Radiation Oncology Research at UT MD Anderson Cancer Center.
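
The sorting idea — per slice, keep the repeat acquisition most similar to its already-chosen neighbor — can be sketched on synthetic data (the images, noise level, and the "off-phase" outlier candidates below are invented for illustration; the study's iterative optimization is more elaborate than this greedy pass):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(2)
truth = [np.outer(np.hanning(16), np.hanning(16)) * (s + 1) for s in range(4)]
candidates = [[img + 0.05 * rng.standard_normal(img.shape) for _ in range(5)]
              for img in truth]          # 5 repeat acquisitions per slice
for s in range(1, 4):                    # make candidate 0 an off-phase outlier
    candidates[s][0] = np.roll(candidates[s][0], 4, axis=0)

chosen = candidates[0][0]
picks = [0]
for s in range(1, 4):                    # greedily maximize slice-to-slice NCC
    scores = [ncc(chosen, c) for c in candidates[s]]
    best = int(np.argmax(scores))
    picks.append(best)
    chosen = candidates[s][best]
print(picks)                             # the off-phase candidates are never selected
```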

  3. The development of a population of 4D pediatric XCAT phantoms for CT imaging research and optimization

    NASA Astrophysics Data System (ADS)

    Norris, Hannah; Zhang, Yakun; Frush, Jack; Sturgeon, Gregory M.; Minhas, Anum; Tward, Daniel J.; Ratnanather, J. Tilak; Miller, M. I.; Frush, Donald; Samei, Ehsan; Segars, W. Paul

    2014-03-01

    With the increased use of CT examinations, the associated radiation dose has become a large concern, especially for pediatrics. Much research has focused on reducing radiation dose through new scanning and reconstruction methods. Computational phantoms provide an effective and efficient means for evaluating image quality, patient-specific dose, and organ-specific dose in CT. We previously developed a set of highly detailed 4D reference pediatric XCAT phantoms at ages of newborn, 1, 5, 10, and 15 years with organ and tissue masses matched to ICRP Publication 89 values. We now extend this reference set to a series of 64 pediatric phantoms of a variety of ages and height and weight percentiles, representative of the public at large. High-resolution PET-CT data were reviewed by a practicing experienced radiologist for anatomic regularity and were then segmented with manual and semi-automatic methods to form a target model. A Multi-Channel Large Deformation Diffeomorphic Metric Mapping (MC-LDDMM) algorithm was used to calculate the transform from the best age-matching pediatric reference phantom to the patient target. The transform was used to complete the target, filling in the non-segmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. 3D CT data were simulated from the phantoms to demonstrate their ability to generate realistic, patient-quality imaging data. The population of pediatric phantoms developed in this work provides a vital tool to investigate dose reduction techniques in 3D and 4D pediatric CT.

  4. SU-E-J-153: Reconstructing 4D Cone Beam CT Images for Clinical QA of Lung SABR Treatments

    SciTech Connect

    Beaudry, J; Bergman, A; Cropp, R

    2015-06-15

    Purpose: To verify that the Planning Target Volume (PTV) and Internal Gross Tumor Volume (IGTV) fully enclose a moving lung tumor volume as visualized on a pre-SABR treatment verification 4D Cone Beam CT. Methods: Daily 3DCBCT image sets were acquired immediately prior to treatment for 10 SABR lung patients using the on-board imaging system integrated into a Varian TrueBeam (v1.6: no 4DCBCT module available). Respiratory information was acquired during the scan using the Varian RPM system. The CBCT projections were sorted into 8 bins offline, both by breathing phase and amplitude, using in-house software. An iterative algorithm based on total variation minimization, implemented in the open-source reconstruction toolkit (RTK), was used to reconstruct the binned projections into 4DCBCT images. The relative tumor motion was quantified by tracking the centroid of the tumor volume in each 4DCBCT image. Following CT-CBCT registration, the planning CT volumes were compared to the location of the CBCT tumor volume as it moves along its breathing trajectory. An overlap metric quantified the ability of the planned PTV and IGTV to contain the tumor volume at treatment. Results: The 4DCBCT reconstructed images visibly show the tumor motion. The mean overlap between the planned PTV (IGTV) and the 4DCBCT tumor volumes was 100% (94%), with an uncertainty of 5% from the 4DCBCT tumor volume contours. Examination of the tumor motion and overlap metric verifies that the IGTV drawn at the planning stage is a good representation of the tumor location at treatment. Conclusion: It is difficult to compare GTV volumes from a 4DCBCT and a planning CT due to image quality differences. However, it was possible to conclude that the GTV remained within the PTV 100% of the time, giving the treatment staff confidence that SABR lung treatments are being delivered accurately.
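
The overlap metric can be illustrated with hypothetical binary masks on a common voxel grid (my reading of the metric as the fraction of the CBCT tumor volume contained in the planned volume; the masks and drift below are invented):

```python
import numpy as np

ptv = np.zeros((50, 50, 50), dtype=bool)
ptv[10:40, 10:40, 10:40] = True            # planned PTV mask (hypothetical)
tumor = np.zeros_like(ptv)
tumor[15:35, 15:35, 15:42] = True          # CBCT tumor mask, drifting 2 voxels past the PTV

overlap = np.logical_and(tumor, ptv).sum() / tumor.sum()
print(round(float(overlap) * 100, 1))      # percent of the tumor inside the PTV
```

Evaluating this at each respiratory phase of the 4DCBCT gives the per-phase containment that the study reports.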

  5. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity of tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean ± standard deviation: 185 ± 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
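
The 95th-percentile intensity matching that performed best can be sketched as a simple rescaling (the intensity values below are synthetic; the real pipeline also includes bias correction before this step):

```python
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.gamma(2.0, 50.0, 10_000)         # synthetic baseline ventilation intensities
followup = 0.6 * rng.gamma(2.0, 50.0, 10_000)   # same distribution, different receive gain

scale = np.percentile(baseline, 95) / np.percentile(followup, 95)
normalized = followup * scale                   # follow-up rescaled to match the baseline p95
```

Matching a high percentile rather than the mean keeps the scaling robust to the large fraction of low-signal voxels in ventilation images.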

  6. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate three-dimensional measurements of bacterial motility. Digital holographic microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, however, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve the 3D locations of E. coli bacteria in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177
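    As a rough illustration of background suppression in hologram stacks (a deliberately simpler operation than the paper's correlation-based algorithm, and only an assumption about its spirit): anything stationary across frames, such as substrate fringes or illumination nonuniformity, can be estimated with a temporal median and subtracted, leaving the signal of moving cells:

```python
import numpy as np

def remove_static_background(frames):
    """Suppress the stationary background of a hologram frame stack.

    Simplified sketch: the temporal median estimates everything that
    does not move between frames; subtracting it leaves moving
    scatterers (e.g., swimming cells).
    """
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)
    return frames - background

# synthetic stack: fixed fringe pattern plus one drifting bright spot
t, n = 16, 32
yy, xx = np.mgrid[0:n, 0:n]
fringes = np.sin(0.7 * xx)
stack = np.empty((t, n, n))
for k in range(t):
    stack[k] = fringes
    stack[k, 5 + k % n, 8] += 3.0   # "cell" moving down one row per frame
clean = remove_static_background(stack)
```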

  7. A spatio-temporal filtering approach to denoising of single-trial ERP in rapid image triage.

    PubMed

    Yu, Ke; Shen, Kaiquan; Shao, Shiyun; Ng, Wu Chun; Kwok, Kenneth; Li, Xiaoping

    2012-03-15

    Conventional search for images containing points of interest (POI) in large-volume imagery is costly and sometimes even infeasible. The rapid image triage (RIT) system, a human-cognition-guided computer vision technique, is potentially a promising solution to this problem. In the RIT procedure, images are sequentially presented to a subject at high speed. At the instant of observing a POI image, unique POI event-related potentials (ERP), characterized by the P300, are elicited and measured on the scalp. With accurate single-trial detection of such unique ERPs, RIT can differentiate POI images from non-POI images. However, like other brain-computer interface systems relying on single-trial detection, RIT suffers from the low signal-to-noise ratio (SNR) of single-trial ERP. This paper presents a spatio-temporal filtering approach tailored to the denoising of single-trial ERP for RIT. The proposed approach is essentially a non-uniformly delayed spatial Gaussian filter that attempts to suppress the non-event-related background electroencephalogram (EEG) and other noise without significantly attenuating the useful ERP signals. The efficacy of the proposed approach is illustrated by both simulation tests and real RIT experiments. In particular, the real RIT experiments on 20 subjects show a statistically significant and meaningful average decrease of 9.8% in RIT classification error rate, compared to that without the proposed approach.
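    A minimal sketch of what a non-uniformly delayed spatial Gaussian filter might look like: each channel is shifted by its own delay before a Gaussian-weighted spatial average around a center channel. The channel positions, delays, and sigma below are hypothetical illustrations, not the paper's tuned values:

```python
import numpy as np

def delayed_gaussian_filter(eeg, positions, delays, center, sigma=1.0):
    """Non-uniformly delayed spatial Gaussian filter (illustrative).

    eeg: (channels, samples) array. Each channel is advanced by its
    own integer delay (samples), then averaged with Gaussian weights
    based on scalp distance to the `center` channel.
    """
    d2 = np.sum((positions - positions[center]) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum()
    out = np.zeros(eeg.shape[1])
    for ch, (wi, delay) in enumerate(zip(w, delays)):
        out += wi * np.roll(eeg[ch], -delay)
    return out

# three channels carrying the same pulse at channel-specific latencies;
# aligning by the known delays concentrates the pulse at one sample
positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
delays = [0, 2, 4]
eeg = np.zeros((3, 100))
for ch, d in enumerate(delays):
    eeg[ch, 50 + d] = 1.0
out = delayed_gaussian_filter(eeg, positions, delays, center=0)
```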

  8. Longitudinal Monitoring of Hepatic Blood Flow before and after TIPS by Using 4D-Flow MR Imaging

    PubMed Central

    Bannas, Peter; Roldán-Alzate, Alejandro; Johnson, Kevin M.; Woods, Michael A.; Ozkan, Orhan; Motosugi, Utaroh; Wieben, Oliver; Reeder, Scott B.; Kramer, Harald

    2016-01-01

    Purpose: To demonstrate the feasibility of four-dimensional (4D) flow magnetic resonance (MR) imaging for noninvasive longitudinal hemodynamic monitoring of hepatic blood flow before and after transjugular intrahepatic portosystemic shunt (TIPS) placement. Materials and Methods: The institutional review board approved this prospective Health Insurance Portability and Accountability Act-compliant study with written informed consent. 4D-flow MR imaging was performed in seven patients with portal hypertension and refractory ascites before and 2 and 12 weeks after TIPS placement by using a time-resolved three-dimensional radial phase-contrast acquisition. Flow and peak-velocity measurements were obtained in the superior mesenteric vein (SMV), splenic vein (SV), portal vein (PV), and the TIPS. Flow volumes and peak velocities in each vessel, as well as the ratio of in-stent to PV flow, were compared before and after TIPS placement by using analysis of variance. Results: Flow volumes significantly increased in the SMV (0.24 L/min; 95% confidence interval [CI]: 0.07, 0.41), SV (0.31 L/min; 95% CI: 0.07, 0.54), and PV (0.88 L/min; 95% CI: 0.06, 1.70) after TIPS placement (all P < .05), with no significant difference between the first and second post-TIPS acquisitions (all P > .11). Ascites resolved in six of seven patients. In those with resolved ascites, the TIPS-to-PV flow ratio was 0.8 ± 0.2 and 0.9 ± 0.2 at the two post-TIPS time points, respectively, while the observed ratios were 4.6 and 4.3 in the patient with refractory ascites at the two post-TIPS time points. In this patient, 4D-flow MR imaging demonstrated arterio-portal-venous shunting draining into the TIPS. Conclusion: 4D-flow MR imaging is feasible for noninvasive longitudinal hemodynamic monitoring of hepatic blood flow before and after TIPS placement. PMID:27171019

  9. SU-D-207-03: Development of 4D-CBCT Imaging System with Dual Source KV X-Ray Tubes

    SciTech Connect

    Nakamura, M; Ishihara, Y; Matsuo, Y; Ueki, N; Iizuka, Y; Mizowaki, T; Hiraoka, M

    2015-06-15

    Purpose: The purposes of this work are to develop a 4D-CBCT imaging system with orthogonal dual-source kV X-ray tubes and to determine the imaging doses from 4D-CBCT scans. Methods: Dual-source kV X-ray tubes were used for the 4D-CBCT imaging. The maximum CBCT field of view was 200 mm in diameter and 150 mm in length, and the imaging parameters were 110 kV, 160 mA and 5 ms. The rotational angle was 105°, the rotational speed of the gantry was 1.5°/s, the gantry rotation time was 70 s, and the image acquisition interval was 0.3°. The observed amplitude of infrared marker motion during respiration was used to sort each image into eight respiratory phase bins. The EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc packages were used to simulate the kV X-ray dose distributions of 4D-CBCT imaging. The kV X-ray dose distributions were calculated for 9 lung cancer patients based on the planning CT images with a dose calculation grid size of 2.5 x 2.5 x 2.5 mm. The dose covering a 2-cc volume of skin (D2cc), defined as the inner 5 mm of the skin surface with the exception of bone structure, was assessed. Results: A moving object was well identified on 4D-CBCT images in a phantom study. Given a gantry rotational angle of 105° and the configuration of the kV X-ray imaging subsystems, both kV X-ray fields overlapped at part of the skin surface. The D2cc for the 4D-CBCT scans was in the range 73.8-105.4 mGy. The linear correlation coefficient between (1000 minus the averaged SSD during CBCT scanning) and D2cc was -0.65 (with a slope of -0.17) for the 4D-CBCT scans. Conclusion: We have developed a 4D-CBCT imaging system with dual-source kV X-ray tubes. The total imaging dose with 4D-CBCT scans was up to 105.4 mGy.

  10. Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images.

    PubMed

    Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-05-01

    The shape-adaptive discrete cosine transform (SA-DCT) can be computed on a support of arbitrary shape, yet retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite its near-optimal decorrelation and energy-compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively at video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation-Intersection of Confidence Intervals technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks. In this paper, we consider, in particular, image denoising and image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.
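    The threshold-and-invert step at the heart of such transform-domain filtering can be sketched for the ordinary square-block DCT. The SA-DCT itself operates on arbitrarily shaped supports, which this simplification does not reproduce:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def dct_denoise_block(block, thr):
    """Hard-threshold the 2D DCT coefficients of a square block.

    Sketch of the fixed square-support case only; the SA-DCT applies
    the same idea on a pointwise-adaptive support.
    """
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T          # forward 2D DCT
    coeffs[np.abs(coeffs) < thr] = 0  # kill small (noise) coefficients
    return d.T @ coeffs @ d           # inverse 2D DCT

# flat block plus a weak high-frequency perturbation: thresholding
# removes the perturbation, keeping only the DC term
block = 5.0 * np.ones((8, 8))
noise = 0.01 * (-1.0) ** np.add.outer(np.arange(8), np.arange(8))
out = dct_denoise_block(block + noise, thr=0.5)
```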

  11. Evaluation of the combined effects of target size, respiratory motion and background activity on 3D and 4D PET/CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang-June; Ionascu, Dan; Killoran, Joseph; Mamede, Marcelo; Gerbaudo, Victor H.; Chin, Lee; Berbeco, Ross

    2008-07-01

    Gated (4D) PET/CT has the potential to greatly improve the accuracy of radiotherapy at treatment sites where internal organ motion is significant. However, the best methodology for applying 4D-PET/CT to target definition is not currently well established. With the goal of better understanding how best to apply 4D information to radiotherapy, initial studies were performed to investigate the effect of target size, respiratory motion and target-to-background activity concentration ratio (TBR) on 3D (ungated) and 4D PET images. Using a PET/CT scanner with 4D (gating) capability, a full 3D-PET scan corrected with a 3D attenuation map from a 3D-CT scan and a respiratory-gated (4D) PET scan corrected with corresponding attenuation maps from 4D-CT were performed by imaging spherical targets (0.5-26.5 mL) filled with 18F-FDG in a dynamic thorax phantom and a NEMA IEC body phantom at different TBRs (infinite, 8 and 4). To simulate respiratory motion, the phantoms were driven sinusoidally in the superior-inferior direction with amplitudes of 0, 1 and 2 cm and a period of 4.5 s. Recovery coefficients were determined on the PET images. In addition, gating methods using different numbers of gating bins (1-20 bins) were evaluated with respect to image noise and temporal resolution. For evaluation, the volume recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio were calculated as a function of the number of gating bins. Moreover, the optimum thresholds which give accurate moving target volumes were obtained for 3D and 4D images. The partial volume effect and signal loss in the 3D-PET images, due to the limited PET resolution and the respiratory motion, respectively, were measured. The results show that signal loss depends on both the amplitude and pattern of respiratory motion. However, 4D-PET successfully recovers most of the loss induced by the respiratory motion. The 5-bin gating method gives the best temporal resolution with acceptable image noise. The results based on the 4D

  12. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage are very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level of our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
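    Greedy sparse coding of the kind the paper accelerates can be sketched as plain matching pursuit over a column-dictionary. The two-level structure itself (orthonormal top level plus learned second level) is not reproduced here:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy sparse coding of x in dictionary D (unit-norm columns).

    Plain matching pursuit: at each step, pick the atom most
    correlated with the residual and peel off its contribution.
    """
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# toy example with a trivially orthonormal dictionary: two atoms
# reconstruct the signal exactly
D = np.eye(4)
x = np.array([2.0, 0.0, 3.0, 0.0])
coeffs, residual = matching_pursuit(x, D, n_atoms=2)
```

    With an orthonormal top-level basis as in the paper, this greedy step becomes particularly cheap, which is the motivation for the proposed structure.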

  13. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    SciTech Connect

    Kalantari, F; Wang, J; Li, T; Jin, M

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone-beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET with the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with typical FDG biodistribution and a 10-mm-diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more-than-fivefold overestimation of tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness-control parameters in the DVF estimation step.

  14. Venous and Arterial Flow Quantification, are Equally Accurate and Precise with Parallel Imaging Compressed Sensing 4D Phase Contrast MRI

    PubMed Central

    Tariq, Umar; Hsiao, Albert; Alley, Marcus; Zhang, Tao; Lustig, Michael; Vasanawala, Shreyas S.

    2012-01-01

    Purpose: To evaluate the precision and accuracy of parallel-imaging compressed-sensing 4D phase contrast (PICS-4DPC) MRI venous flow quantification in children, with patients referred for cardiac MRI at our children's hospital. Materials and Methods: With IRB approval and HIPAA compliance, 22 consecutive patients without shunts underwent 4DPC as part of clinical cardiac MRI examinations. Flow measurements were obtained in the superior and inferior vena cava, the ascending and descending aorta, and the pulmonary trunk. Conservation of flow to the upper, lower and whole body was used as an internal physiologic control. The arterial and venous flow rates at each location were compared with paired t-tests and F-tests to assess relative accuracy and precision. Results: Arterial and venous flow measurements were strongly correlated for the upper (ρ=0.89), lower (ρ=0.96) and whole body (ρ=0.97); net aortic and pulmonary trunk flow rates were also tightly correlated (ρ=0.97). There was no significant difference in the value or precision of arterial and venous flow measurements in the upper, lower or whole body, though there was a trend toward improved precision with lower velocity-encoding settings. Conclusion: With PICS-4DPC MRI, the accuracy and precision of venous flow quantification are comparable to those of arterial flow quantification at velocity-encodings appropriate for arterial vessels. PMID:23172846

  15. SU-E-J-28: Gantry Speed Significantly Affects Image Quality and Imaging Dose for 4D Cone-Beam Computed Tomography On the Varian Edge Platform

    SciTech Connect

    Santoso, A; Song, K; Gardner, S; Chetty, I; Wen, N

    2015-06-15

    Purpose: 4D-CBCT facilitates assessment of tumor motion at treatment position. We investigated the effect of gantry speed on 4D-CBCT image quality and dose using the Varian Edge On-Board Imager (OBI). Methods: A thoracic protocol was designed using a 125 kVp spectrum. Image quality parameters were obtained via 4D acquisition using a Catphan phantom with a gating system. A sinusoidal waveform was executed with a five-second period and superior-inferior motion. 4D-CBCT scans were sorted into 4 and 10 phases. Image quality metrics included spatial resolution, contrast-to-noise ratio (CNR), uniformity index (UI), Hounsfield unit (HU) sensitivity, and RMS error (RMSE) of motion amplitude. Dosimetry was accomplished using Gafchromic XR-QA2 films within a CIRS thorax phantom, which was placed on the gating phantom using the same motion waveform. Results: High-contrast resolution decreased linearly from 5.93 to 4.18 lp/cm, 6.54 to 4.18 lp/cm, and 5.19 to 3.91 lp/cm for the averaged, 4-phase, and 10-phase 4D-CBCT volumes, respectively, as gantry speed increased from 1.0 to 6.0 deg/sec. CNRs decreased linearly from 4.80 to 1.82 as gantry speed increased from 1.0 to 6.0 deg/sec. No significant variations in UIs, HU sensitivities, or RMSEs were observed with variable gantry speed. Ion chamber measurements compared to film yielded small percent differences in plastic water regions (0.1-9.6%), larger percent differences in lung-equivalent regions (7.5-34.8%), and significantly larger percent differences in bone-equivalent regions (119.1-137.3%). Ion chamber measurements decreased from 17.29 to 2.89 cGy as gantry speed increased from 1.0 to 6.0 deg/sec. Conclusion: Maintaining technique factors while changing gantry speed changes the number of projections used for reconstruction. Increasing the number of projections by decreasing gantry speed decreases noise; however, dose is increased. The future of 4D-CBCT's clinical utility relies on further

  16. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood gray-level difference matrix (NGLDM)-based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have previously been shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from -30% to 13% for coarseness, -12% to 40% for contrast, -5% to 50% for busyness, -7% to 38% for complexity, and -43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D-scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET versus 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
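    For reference, the coarseness measure from this texture family can be computed as below, using the classic Amadasun-King neighborhood-difference form for a quantized 2D image; the study's exact implementation may differ in details (3D neighborhoods, normalization):

```python
import numpy as np

def ngldm_coarseness(img, levels):
    """Coarseness of a quantized 2D image (Amadasun-King style sketch).

    For each gray level i, s[i] accumulates |i - mean of the
    8-neighborhood| over pixels of that level; coarseness is the
    reciprocal of sum_i p_i * s_i. Border pixels are skipped.
    """
    s = np.zeros(levels)
    count = np.zeros(levels)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = img[y, x]
            nb = img[y - 1:y + 2, x - 1:x + 2].sum() - i  # 8-neighbor sum
            s[i] += abs(i - nb / 8.0)
            count[i] += 1
    p = count / count.sum()
    denom = (p * s).sum()
    return 1.0 / denom if denom > 0 else np.inf

checker = np.indices((8, 8)).sum(axis=0) % 2   # alternating 0/1 levels
flat = np.zeros((8, 8), dtype=int)
c_checker = ngldm_coarseness(checker, levels=2)
c_flat = ngldm_coarseness(flat, levels=1)      # uniform image: infinitely coarse
```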

  17. Spatiotemporal directional analysis of 4D echocardiography

    NASA Astrophysics Data System (ADS)

    Angelini-Casadevall, Elsa D.; Laine, Andrew F.; Takuma, Shin; Homma, Shunichi

    2000-12-01

    Speckle noise corrupts ultrasonic data by introducing sharp changes in an echocardiographic image intensity profile, while attenuation alters the intensity of equally significant cardiac structures. These properties introduce inhomogeneity in the spatial domain and suggest that measures based on phase information, rather than intensity, are more appropriate for denoising and cardiac border detection. The present analysis method relies on the expansion of temporal ultrasonic volume data on complex exponential wavelet-like basis functions called brushlets. These basis functions decompose a signal into distinct patterns of oriented textures; projected coefficients are associated with distinct 'brush strokes' of a particular size and orientation. 4D overcomplete brushlet analysis is applied to temporal echocardiographic data. We show that adding the time dimension to the analysis dramatically improves the quality and robustness of the method without adding complexity to the design of a segmentation tool. We have investigated mathematical and empirical methods for identifying the most 'efficient' brush-stroke sizes and orientations for decomposition and reconstruction on both phantom and clinical data. To determine the 'best tiling', or equivalently the 'best brushlet basis', we use an entropy-based information cost metric. Quantitative validation and clinical applications of this new spatio-temporal analysis tool are reported for balloon phantoms and clinical data sets.

  18. SU-F-207-13: Comparison of Four Dimensional Computed Tomography (4D CT) Versus Breath Hold Images to Determine Pulmonary Nodule Elasticity

    SciTech Connect

    Negahdar, M; Loo, B; Maxim, P

    2015-06-15

    Purpose: Elasticity may distinguish malignant from benign pulmonary nodules. We compared determination of malignant pulmonary nodule (MPN) elasticity from four-dimensional computed tomography (4D CT) images versus inhale/exhale breath-hold CT images. Methods: We analyzed phases 00 and 50 of 4D CT and deep-inhale and natural-exhale breath-hold CT images of 30 MPNs treated with stereotactic ablative radiotherapy (SABR). The radius of the smallest MPN was 0.3 cm and that of the largest was 2.1 cm. An intensity-based deformable image registration (DIR) workflow was applied to the 4D CT and breath-hold images to determine the volumes of the MPNs and of a 1 cm ring of surrounding lung tissue (ring) in each state. Next, an elasticity parameter was derived by calculating the ratio of the volume change of the MPN (exhale:inhale or phase 50:phase 00) to that of the ring. This formulation of elasticity enables comparison of the volume changes of two different MPNs in two different locations of the lung. Results: The calculated volume ratios of MPNs from 4D CT (phase 50:phase 00) and breath-hold images (exhale:inhale) were 1.00±0.23 and 0.95±0.11, respectively, reflecting the stiffness of MPNs and the comparatively larger volume changes of MPNs in breath-hold images because of the deeper degree of inhalation. The calculated elasticities of MPNs from 4D CT and breath-hold images were 1.12±0.22 and 1.23±0.26, respectively. For the five patients who had two MPNs in their lungs, the calculated elasticities of tumor A and tumor B followed the same trend in both 4D CT and breath-hold images. Conclusion: We showed that 4D CT and breath-hold images are comparable in their ability to calculate the elasticity of MPNs. This study has been supported by Department of Defense LCRP 2011 #W81XWH-12-1-0286.

  19. SU-E-J-154: Image Quality Assessment of Contrast-Enhanced 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    SciTech Connect

    Choi, W; Xue, M; Patel, K; Regine, W; Wang, J; D’Souza, W; Lu, W; Kang, M; Klahr, P

    2015-06-15

    Purpose: This study presents a quantitative and qualitative assessment of image quality in contrast-enhanced (CE) 3D-CT, 4D-CT and CE 4D-CT to determine the feasibility of replacing the clinical standard simulation with a single CE 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation. Methods: Ten PDA patients were enrolled and underwent three CT scans: a clinical standard pair of CE 3D-CT immediately followed by a 4D-CT, and a CE 4D-CT one week later. Physicians qualitatively evaluated the general image quality and regional vessel definitions and gave a score from 1 to 5. Next, physicians delineated the contours of the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (CE 3D-CT, the 50% phase of 4D-CT, and CE 4D-CT); high-density areas were then automatically removed by thresholding at 500 HU and morphological operations. The pancreatic tumor contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and conspicuity (C, the absolute difference of mean enhancement levels in P and T) were computed to quantitatively assess image quality. The Wilcoxon rank-sum test was used to compare these quantities. Results: In the qualitative evaluations, CE 3D-CT and CE 4D-CT scored equivalently (4.4±0.4 and 4.3±0.4) and both were significantly better than 4D-CT (3.1±0.6). In the quantitative evaluations, the C values were higher in CE 4D-CT (28±19 HU, p=0.19 and 0.17) than in the clinical standard pair of CE 3D-CT and 4D-CT (17±12 and 16±17 HU, p=0.65). In CE 3D-CT and CE 4D-CT, mean CNR (1.8±1.4 and 1.8±1.7, p=0.94) and mean SNR (5.8±2.6 and 5.5±3.2, p=0.71) were both higher than in 4D-CT (CNR: 1.1±1.3, p<0.3; SNR: 3.3±2.1, p<0.1). The absolute enhancement levels for T and P were higher in CE 4D-CT (87, 82 HU) than in CE 3D-CT (60, 56) and 4D-CT (53, 70). Conclusions: The individually optimized CE 4D-CT is feasible and achieved image quality comparable to the clinical standard simulation. This study was supported in part by Philips Healthcare.
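    The quantitative metrics above are straightforward to compute from the delineated regions. In this sketch, the parenchyma standard deviation stands in for the noise estimate, since the abstract does not specify the noise region (an assumption on our part):

```python
import numpy as np

def tumor_metrics(tumor_vals, parenchyma_vals):
    """CNR, SNR and conspicuity from tumor (T) and parenchyma (P) HU values.

    Conspicuity C = |mean(P) - mean(T)|, as defined above; the noise
    term used here (std of P) is an assumption, not the study's choice.
    """
    t = np.asarray(tumor_vals, dtype=float)
    p = np.asarray(parenchyma_vals, dtype=float)
    noise = p.std()
    c = abs(p.mean() - t.mean())   # conspicuity in HU
    cnr = c / noise
    snr = t.mean() / noise
    return cnr, snr, c

# toy values: tumor enhancing to 40 HU, parenchyma around 80 HU
cnr, snr, c = tumor_metrics([40.0, 40.0], [78.0, 82.0])
```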

  20. 4D-Imaging of the Lung: Reproducibility of Lesion Size and Displacement on Helical CT, MRI, and Cone Beam CT in a Ventilated Ex Vivo System

    SciTech Connect

    Biederer, Juergen Dinkel, Julien; Remmert, Gregor; Jetter, Siri; Nill, Simeon; Moser, Torsten; Bendl, Rolf; Thierfelder, Carsten; Fabel, Michael; Oelfke, Uwe; Bock, Michael; Plathow, Christian; Bolte, Hendrik; Welzel, Thomas; Hoffmann, Beata; Hartmann, Guenter; Schlegel, Wolfgang; Debus, Juergen; Heller, Martin

    2009-03-01

    Purpose: Four-dimensional (4D) imaging is a key to motion-adapted radiotherapy of lung tumors. We evaluated in a ventilated ex vivo system how the size and displacement of artificial pulmonary nodules are reproduced with helical 4D-CT, 4D-MRI, and linac-integrated cone beam CT (CBCT). Methods and Materials: Four porcine lungs with 18 agarose nodules (mean diameters 1.3-1.9 cm) were ventilated inside a chest phantom at 8 breaths/min and subjected to 4D-CT (collimation 24 x 1.2 mm, pitch 0.1, slice/increment 1.5/0.8 mm, temporal resolution 0.5 s), 4D-MRI (echo-shared dynamic three-dimensional FLASH; repetition/echo time 2.13/0.72 ms, voxel size 2.7 x 2.7 x 4.0 mm, temporal resolution 1.4 s) and linac-integrated 4D-CBCT (720 projections, 3-min rotation, temporal resolution ~1 s). Static CT without respiration served as control. Three observers recorded lesion size (RECIST diameters x/y/z) and axial displacement. Interobserver and interphase variation coefficients (IO/IP VC) of the measurements indicated reproducibility. Results: Mean x/y/z lesion diameters in cm were equal on static and dynamic CT (1.88/1.87; 1.30/1.39; 1.71/1.73; p > 0.05), but appeared larger on MRI and CBCT (2.06/1.95 [p < 0.05 vs. CT]; 1.47/1.28 [MRI vs. CT/CBCT p < 0.05]; 1.86/1.83 [CT vs. CBCT p < 0.05]). Interobserver VCs for lesion sizes were 2.54-4.47% (CT), 2.29-4.48% (4D-CT), 5.44-6.22% (MRI) and 4.86-6.97% (CBCT). Interphase VCs for lesion sizes ranged from 2.28% (4D-CT) to 10.0% (CBCT). Mean displacement in cm decreased from static CT (1.65) to 4D-CT (1.40), CBCT (1.23) and MRI (1.16). Conclusions: Lesion sizes are exactly reproduced with 4D-CT but overestimated on 4D-MRI and CBCT, with larger variability due to limited temporal and spatial resolution. All 4D modalities underestimate lesion displacement.

  1. A Novel Assessment of Various Bio-Imaging Methods for Lung Tumor Detection and Treatment by using 4-D and 2-D CT Images

    PubMed Central

    Judice A., Antony; Geetha, Dr. K. Parimala

    2013-01-01

    Lung cancer is known as one of the most difficult cancers to cure, and the number of deaths it causes is generally increasing. Detection of lung cancer in its early stage can be helpful for medical treatment to limit the danger, but it is a challenging problem due to cancer cell structure. Interpretation of medical images is often difficult and time consuming, even for experienced physicians. The aid of image analysis based on machine learning can make this process easier. This paper describes a fully automatic decision support system for lung cancer diagnosis from CT lung images. Most traditional medical diagnosis systems are founded on huge quantities of training data and take a long processing time. However, when only a very small volume of data is available, traditional diagnosis systems suffer from defects such as larger error and greater time complexity. Focused on the solution to this problem, a medical diagnosis system based on a hidden Markov model (HMM) is presented. We first describe a preprocessing stage involving noise removal techniques: the lung CT images obtained after scanning are preprocessed by mean-square-error filtering and histogram analysis. Secondly, the lung areas are separated from the image by a segmentation process (thresholding and region-growing techniques). Finally, we developed an HMM for the classification of cancer nodules. Results are checked for 2D and 4D CT images. This automation reduces the time complexity and increases diagnosis confidence. PMID:23847454
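    The thresholding/region-growing segmentation stage mentioned above can be sketched as a simple intensity-tolerance flood fill (a minimal illustration, not the authors' pipeline, which would add preprocessing and morphological cleanup):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow the 4-connected region around `seed` whose pixels are
    within `tol` of the seed intensity.

    Minimal breadth-first region growing; returns a boolean mask.
    """
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# toy image: a 3x3 bright "nodule" on a dark background
img = np.zeros((10, 10))
img[2:5, 2:5] = 1.0
mask = region_grow(img, (3, 3), tol=0.1)
```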

  2. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  3. TU-F-17A-01: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - An Automatic Toolkit for Efficient and Robust Analysis of 4D Respiratory Motion

    SciTech Connect

    Wei, J; Yuan, A; Li, G

    2014-06-15

Purpose: To provide an automatic image analysis toolkit to process thoracic 4-dimensional computed tomography (4DCT) and extract patient-specific motion information to facilitate investigational or clinical use of 4DCT. Methods: We developed an automatic toolkit in MATLAB to overcome the extra workload from the time dimension in 4DCT. This toolkit employs image/signal processing, computer vision, and machine learning methods to visualize, segment, register, and characterize lung 4DCT automatically or interactively. A fully-automated 3D lung segmentation algorithm was designed and 4D lung segmentation was achieved in batch mode. Voxel counting was used to calculate volume variations of the torso, lung and its air component, and local volume changes at the diaphragm and chest wall to characterize breathing pattern. Segmented lung volumes in 12 patients are compared with those from a treatment planning system (TPS). Voxel conversion was introduced from CT# to other physical parameters, such as gravity-induced pressure, to create a secondary 4D image. A demons algorithm was applied in deformable image registration and motion trajectories were extracted automatically. Calculated motion parameters were plotted with various templates. Machine learning algorithms, such as Naive Bayes and random forests, were implemented to study respiratory motion. This toolkit is complementary to and will be integrated with the Computational Environment for Radiotherapy Research (CERR). Results: The automatic 4D image/data processing toolkit provides a platform for analysis of 4D images and datasets. It processes 4D data automatically in batch mode and provides interactive visual verification for manual adjustments. The discrepancy in lung volume calculation between this and the TPS is <±2% and the time saving is 1–2 orders of magnitude. Conclusion: A framework of 4D toolkit has been developed to analyze thoracic 4DCT automatically or interactively, facilitating both investigational and clinical use.
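The voxel-counting step is simple enough to sketch. The toolkit itself is MATLAB; this Python fragment, with a hypothetical mask layout and voxel size, only illustrates the idea:

```python
def lung_volume_cc(mask, voxel_dims_mm):
    """Volume of a binary segmentation by voxel counting.
    `mask` is a nested list [z][y][x] of 0/1; dims are (dz, dy, dx) in mm."""
    n = sum(v for plane in mask for row in plane for v in row)
    dz, dy, dx = voxel_dims_mm
    return n * dz * dy * dx / 1000.0   # mm^3 -> cc

mask = [[[1, 1], [0, 1]], [[1, 0], [0, 0]]]   # 4 voxels set
print(lung_volume_cc(mask, (2.5, 1.0, 1.0)))
```

Repeating this per respiratory phase gives the lung and air-component volume variation curves that characterize the breathing pattern.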

  4. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, the authors focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. The authors complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
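A minimal 1-D sketch of an explicit non-linear diffusion scheme, using the classical Perona-Malik diffusivity g(s) = 1/(1+(s/k)^2); the parameter values are illustrative, not those of the study:

```python
def perona_malik_1d(u, iters=20, dt=0.2, k=1.0):
    """Explicit-scheme non-linear diffusion on a 1-D signal.
    The diffusivity g(s) = 1/(1+(s/k)^2) shrinks near strong gradients,
    so edges diffuse less than flat regions."""
    u = list(u)
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            gr = u[i + 1] - u[i]          # right-sided gradient
            gl = u[i] - u[i - 1]          # left-sided gradient
            cr = 1.0 / (1.0 + (gr / k) ** 2)
            cl = 1.0 / (1.0 + (gl / k) ** 2)
            new[i] = u[i] + dt * (cr * gr - cl * gl)
        u = new
    return u

noisy_step = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
print([round(v, 2) for v in perona_malik_1d(noisy_step)])
```

The time step dt must stay below 0.5 for this explicit 1-D discretization to remain stable, which is exactly the kind of explicit-versus-implicit trade-off the paper explores.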

  5. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results

    NASA Astrophysics Data System (ADS)

    Manhart, Michael T.; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K.; Hornegger, Joachim; Doerfler, Arnd

    2014-08-01

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.
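The joint bilateral filter at the core of FDK-JBF weights neighbors by both spatial distance and intensity similarity in a separate guidance image; a 1-D sketch with illustrative parameters (not the authors' implementation):

```python
import math

def joint_bilateral_1d(noisy, guide, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Joint bilateral filter: spatial weights from sample distance, range
    weights from a guidance signal, so edges present in `guide` survive."""
    out = []
    for i in range(len(noisy)):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(noisy), i + radius + 1)):
            ws = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            wr = math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += ws * wr
            vsum += ws * wr * noisy[j]
        out.append(vsum / wsum)
    return out
```

Because the range weight is edge-preserving, streaks that look like edges in the guidance image also survive, which motivates the separate streak-reduction step described above.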

  6. SU-E-J-200: A Dosimetric Analysis of 3D Versus 4D Image-Based Dose Calculation for Stereotactic Body Radiation Therapy in Lung Tumors

    SciTech Connect

    Ma, M; Rouabhi, O; Flynn, R; Xia, J; Bayouth, J

    2014-06-01

Purpose: To evaluate the dosimetric difference between 3D and 4D-weighted dose calculation using patient-specific respiratory traces and deformable image registration for stereotactic body radiation therapy in lung tumors. Methods: Two dose calculation techniques, 3D and 4D-weighted dose calculation, were used for dosimetric comparison in 9 lung cancer patients. The magnitude of the tumor motion varied from 3 mm to 23 mm. Breath-hold exhale CT was used for 3D dose calculation, with the ITV generated from the motion observed on 4D-CT. For the 4D-weighted calculation, the dose of each binned CT image from the ten breathing amplitudes was first recomputed using the same planning parameters as those used in the 3D calculation. The dose distribution of each binned CT was mapped to the breath-hold CT using deformable image registration. The 4D-weighted dose was computed by summing the deformed doses with the temporal probabilities calculated from their corresponding respiratory traces. Dosimetric evaluation criteria included lung V20, mean lung dose (MLD), and mean tumor dose (MTD). Results: Compared with the 3D calculation, lung V20, mean lung dose, and mean tumor dose using the 4D-weighted dose calculation changed by −0.67% ± 2.13%, −4.11% ± 6.94% (−0.36 Gy ± 0.87 Gy), and −1.16% ± 1.36% (−0.73 Gy ± 0.85 Gy), respectively. Conclusion: This work demonstrates that the conventional 3D dose calculation method may overestimate the lung V20, MLD, and MTD. The absolute difference between 3D and 4D-weighted dose calculation in lung tumors may not be clinically significant. This research is supported by Siemens Medical Solutions USA, Inc and the Iowa Center for Research By Undergraduates.
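The 4D-weighted summation step reduces to a probability-weighted average of the deformed per-phase dose grids; a sketch with hypothetical two-phase data:

```python
def weighted_4d_dose(phase_doses, phase_probs):
    """Sum per-phase deformed dose grids, each weighted by the fraction of
    the breathing cycle the patient spends in that phase."""
    assert abs(sum(phase_probs) - 1.0) < 1e-9   # probabilities must sum to 1
    n = len(phase_doses[0])
    return [sum(p * d[i] for p, d in zip(phase_probs, phase_doses))
            for i in range(n)]

# two breathing phases, three voxels, 60%/40% temporal weighting
print(weighted_4d_dose([[2.0, 1.0, 0.0], [1.0, 1.0, 2.0]], [0.6, 0.4]))
```

In the study the temporal probabilities come from the patient's own respiratory trace, and each phase dose has already been mapped onto the breath-hold CT by deformable registration.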

  7. Task-based evaluation of a 4D MAP-RBI-EM image reconstruction method for gated myocardial perfusion SPECT using a human observer study

    NASA Astrophysics Data System (ADS)

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M.; Tsui, Benjamin M. W.

    2015-09-01

We evaluated the performance of a new 4D image reconstruction method for improved 4D gated myocardial perfusion (MP) SPECT using a task-based human observer study. We used a realistic 4D NURBS-based Cardiac-Torso (NCAT) phantom that models cardiac beating motion. Half of the population was normal; the other half had a regional hypokinetic wall motion abnormality. Noise-free and noisy projection data with 16 gates/cardiac cycle were generated using an analytical projector that included the effects of attenuation, collimator-detector response, and scatter (ADS), and were reconstructed using the 3D FBP without and 3D OS-EM with ADS corrections followed by different cut-off frequencies of a 4D linear post-filter. A 4D iterative maximum a posteriori rescaled-block (MAP-RBI)-EM image reconstruction method with ADS corrections was also used to reconstruct the projection data using various values of the weighting factor for its prior. The trade-offs between bias and noise were represented by the normalized mean squared error (NMSE) and averaged normalized standard deviation (NSDav), respectively. They were used to select reasonable ranges of the reconstructed images for use in a human observer study. The observers were trained with the simulated cine images and were instructed to rate their confidence on the absence or presence of a motion defect on a continuous scale. We then applied receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) index. The results showed that significant differences in detection performance among the different NMSE-NSDav combinations were found and the optimal trade-off from optimized reconstruction parameters corresponded to a maximum AUC value. The 4D MAP-RBI-EM with ADS correction, which had the best trade-off among the tested reconstruction methods, also had the highest AUC value, resulting in significantly better human observer detection performance when detecting regional myocardial wall motion abnormalities.
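The bias and noise surrogates can be written directly from their definitions. The normalization conventions below are plausible readings of NMSE and NSDav, not necessarily the paper's exact formulas:

```python
import math

def nmse(recon, truth):
    """Normalized mean squared error against a noise-free reference (bias)."""
    num = sum((r - t) ** 2 for r, t in zip(recon, truth))
    den = sum(t ** 2 for t in truth)
    return num / den

def nsd(noisy_replicas):
    """Averaged normalized standard deviation across noise realizations
    (one inner list per realization, same voxel ordering in each)."""
    n = len(noisy_replicas)
    m = len(noisy_replicas[0])
    total = 0.0
    for i in range(m):
        vals = [r[i] for r in noisy_replicas]
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / n
        total += math.sqrt(var) / (abs(mu) + 1e-12)
    return total / m
```

Sweeping a reconstruction parameter (post-filter cut-off, prior weight) traces a curve in the NMSE-NSD plane, and the study picks candidate operating points along that curve for the observer experiment.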

  8. Abdominal 4D Flow MR Imaging in a Breath Hold: Combination of Spiral Sampling and Dynamic Compressed Sensing for Highly Accelerated Acquisition

    PubMed Central

    Knight-Greenfield, Ashley; Jajamovich, Guido; Besa, Cecilia; Cui, Yong; Stalder, Aurélien; Markl, Michael; Taouli, Bachir

    2015-01-01

    Purpose To develop a highly accelerated phase-contrast cardiac-gated volume flow measurement (four-dimensional [4D] flow) magnetic resonance (MR) imaging technique based on spiral sampling and dynamic compressed sensing and to compare this technique with established phase-contrast imaging techniques for the quantification of blood flow in abdominal vessels. Materials and Methods This single-center prospective study was compliant with HIPAA and approved by the institutional review board. Ten subjects (nine men, one woman; mean age, 51 years; age range, 30–70 years) were enrolled. Seven patients had liver disease. Written informed consent was obtained from all participants. Two 4D flow acquisitions were performed in each subject, one with use of Cartesian sampling with respiratory tracking and the other with use of spiral sampling and a breath hold. Cartesian two-dimensional (2D) cine phase-contrast images were also acquired in the portal vein. Two observers independently assessed vessel conspicuity on phase-contrast three-dimensional angiograms. Quantitative flow parameters were measured by two independent observers in major abdominal vessels. Intertechnique concordance was quantified by using Bland-Altman and logistic regression analyses. Results There was moderate to substantial agreement in vessel conspicuity between 4D flow acquisitions in arteries and veins (κ = 0.71 and 0.61, respectively, for observer 1; κ = 0.71 and 0.44 for observer 2), whereas more artifacts were observed with spiral 4D flow (κ = 0.30 and 0.20). Quantitative measurements in abdominal vessels showed good equivalence between spiral and Cartesian 4D flow techniques (lower bound of the 95% confidence interval: 63%, 77%, 60%, and 64% for flow, area, average velocity, and peak velocity, respectively). For portal venous flow, spiral 4D flow was in better agreement with 2D cine phase-contrast flow (95% limits of agreement: −8.8 and 9.3 mL/sec, respectively) than was Cartesian 4D flow (95
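The Bland-Altman agreement analysis used for the intertechnique comparison reduces to the mean difference plus or minus 1.96 standard deviations; a minimal sketch:

```python
import math

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement methods
    applied to the same subjects (paired values)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrow limits of agreement, as reported here for spiral 4D flow versus 2D cine phase contrast in the portal vein, indicate that the two techniques can be used interchangeably for that measurement.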

  9. Task-Based Evaluation of a 4D MAP-RBI-EM Image Reconstruction Method for Gated Myocardial Perfusion SPECT using a Human Observer Study

    PubMed Central

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M.; Tsui, Benjamin M. W.

    2015-01-01

    We evaluated the performance of a new 4D image reconstruction method for improved 4D gated myocardial perfusion (MP) SPECT using a task-based human observer study. We used a realistic 4D NURBS-based Cardiac-Torso (NCAT) phantom that models cardiac beating motion. Half of the population was normal; the other half had a regional hypokinetic wall motion abnormality. Noise-free and noisy projection data with 16 gates/cardiac cycle were generated using an analytical projector that included the effects of attenuation, collimator-detector response, and scatter (ADS), and were reconstructed using the 3D FBP without and 3D OS-EM with ADS corrections followed by different cut-off frequencies of a 4D linear post-filter. A 4D iterative maximum a posteriori rescaled-block (MAP-RBI)-EM image reconstruction method with ADS corrections was also used to reconstruct the projection data using various values of the weighting factor for its prior. The trade-offs between bias and noise were represented by the normalized mean squared error (NMSE) and averaged normalized standard deviation (NSDav), respectively. They were used to select reasonable ranges of the reconstructed images for use in a human observer study. The observers were trained with the simulated cine images and were instructed to rate their confidence on the absence or presence of a motion defect on a continuous scale. We then applied receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) index. The results showed that significant differences in detection performance among the different NMSE-NSDav combinations were found and the optimal trade-off from optimized reconstruction parameters corresponded to a maximum AUC value. The 4D MAP-RBI-EM with ADS correction, which had the best trade-off among the tested reconstruction methods, also had the highest AUC value, resulting in significantly better human observer detection performance when detecting regional myocardial wall motion

  10. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients, ranging from 90.87% to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
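Percent volumetric overlap between an auto-segmented and a ground-truth mask can be computed as below. The Dice form shown here is one common definition and is an assumption, since the abstract does not spell out its formula:

```python
def volumetric_overlap(auto_mask, gt_mask):
    """Percent volumetric overlap (Dice coefficient) between two binary
    masks given as flat 0/1 sequences over the same voxel grid."""
    inter = sum(a and g for a, g in zip(auto_mask, gt_mask))
    return 200.0 * inter / (sum(auto_mask) + sum(gt_mask))

print(volumetric_overlap([1, 1, 0, 0], [1, 0, 0, 0]))
```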

  11. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1997-09-01

The interpretation of the 'inverted' retina of primates as an 'optoretina' (a diffractive cellular 3D-phase grating that transforms light cones) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as the adaptive levels of human vision. It is shown that the functional performances, namely trichromatism in photopic vision, monocular spatiotemporal 3D- and 4D-motion detection, and Fourier optical image transformation with extraction of invariances, all become possible. To transform light cones into reciprocal gratings, the spectral phase conditions become relevant first in the eikonal of the geometrical-optical imaging in front of the retinal 3D-grating, then in the von Laue equation (and its reciprocal form) for 3D-grating optics inside the grating, and finally in the periodicity of Talbot-2/Fresnel planes in the near field behind the grating. It is becoming possible to technically realize, at least in some specific aspects, such a cortical optoretina sensor element with its typical hexagonal-concentric structure, which leads to these visual functions.

  12. Quantification of accuracy of the automated nonlinear image matching and anatomical labeling (ANIMAL) nonlinear registration algorithm for 4D CT images of lung.

    PubMed

    Heath, E; Collins, D L; Keall, P J; Dong, L; Seuntjens, J

    2007-11-01

The performance of the ANIMAL (Automated Nonlinear Image Matching and Anatomical Labeling) nonlinear registration algorithm for registration of thoracic 4D CT images was investigated. The algorithm was modified to minimize the incidence of deformation vector discontinuities that occur during the registration of lung images. Registrations were performed between the inhale and exhale phases for five patients. The registration accuracy was quantified by the cross-correlation of transformed and target images and by the distance to agreement (DTA) measured based on anatomical landmarks and triangulated surfaces constructed from manual contours. On average, the vector DTA between transformed and target landmarks was 1.6 mm. Comparing transformed and target 3D triangulated surfaces derived from planning contours, the average gross tumor volume (GTV) center-of-mass shift was 2.0 mm and the 3D DTA was 1.6 mm. An average DTA of 1.8 mm was obtained for all planning structures. All DTA metrics were comparable to interobserver uncertainties established for landmark identification and manual contouring.
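The landmark-based distance-to-agreement measure is a mean Euclidean distance over corresponding point pairs; a sketch with hypothetical landmark coordinates (in mm):

```python
import math

def mean_landmark_dta(transformed, target):
    """Mean 3-D Euclidean distance between corresponding landmark pairs.
    Each argument is a sequence of (x, y, z) tuples in the same order."""
    dists = [math.dist(p, q) for p, q in zip(transformed, target)]
    return sum(dists) / len(dists)

print(mean_landmark_dta([(0, 0, 0), (3, 4, 0)], [(0, 0, 0), (0, 0, 0)]))
```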

  13. Denoising and covariance estimation of single particle cryo-EM images.

    PubMed

    Bhamre, Tejal; Zhang, Teng; Singer, Amit

    2016-07-01

    The problem of image restoration in cryo-EM entails correcting for the effects of the Contrast Transfer Function (CTF) and noise. Popular methods for image restoration include 'phase flipping', which corrects only for the Fourier phases but not amplitudes, and Wiener filtering, which requires the spectral signal to noise ratio. We propose a new image restoration method which we call 'Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem. Our estimation procedure for the covariance matrix is new and successfully corrects for the CTF. We demonstrate the efficacy of CWF by applying it to restore both simulated and experimental cryo-EM images. Results with experimental datasets demonstrate that CWF provides a good way to evaluate the particle images and to see what the dataset contains even without 2D classification and averaging.
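The classical Wiener filtering framework that CWF builds on can be sketched per Fourier coefficient; the CTF plays the role of the transfer function H, and the scalar SNR here is a simplifying stand-in for the spectral quantities the method actually estimates:

```python
def wiener_deconvolve(Y, H, snr):
    """Per-frequency Wiener deconvolution: Xhat = conj(H)*Y / (|H|^2 + 1/SNR).
    Y and H are sequences of complex Fourier coefficients of the observed
    image and the transfer function; snr is the signal-to-noise ratio."""
    return [h.conjugate() * y / (abs(h) ** 2 + 1.0 / snr)
            for y, h in zip(Y, H)]

# with a trivial transfer function and very high SNR the estimate approaches Y
print(wiener_deconvolve([2 + 0j, 1 - 1j], [1 + 0j, 1 + 0j], 1e12))
```

Note how this corrects both Fourier phases and amplitudes, unlike phase flipping, which multiplies each coefficient only by the sign of H.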

  14. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.

    PubMed

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D

    2011-02-14

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional denoising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules: First, in the analysis module we combine a new 3-D wavelet denoising approach with signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal, which is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing+spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false
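One slice-by-slice denoising pass in such a scheme amounts to transform, shrink, invert. A one-level Haar soft-thresholding sketch is shown below; the actual framework uses a richer wavelet and averages the denoised coefficients over the three viewing geometries:

```python
import math

def haar_denoise(x, thresh):
    """One-level Haar transform of an even-length signal, soft-threshold the
    detail coefficients, then inverse transform."""
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[::2], x[1::2])]
    soft = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [(a + d) / s2, (a - d) / s2]
    return out
```

With a zero threshold the transform is perfectly inverted; with a large threshold, fine detail (and noise) is removed while local averages survive, which is the behavior the multi-directional averaging then stabilizes across views.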

  15. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  16. Extensions to total variation denoising

    NASA Astrophysics Data System (ADS)

    Blomgren, Peter; Chan, Tony F.; Mulet, Pep

    1997-10-01

The total variation denoising method, proposed by Rudin, Osher and Fatemi (1992), is a PDE-based algorithm for edge-preserving noise removal. The images resulting from its application are usually piecewise constant, possibly with a staircase effect at smooth transitions, and may contain significantly less fine detail than the original non-degraded image. In this paper we present some extensions to this technique that aim to remedy these drawbacks by redefining the total variation functional or the noise constraints.
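A smoothed-gradient sketch of the Rudin-Osher-Fatemi objective on a 1-D signal. The step size, regularization weight, and the eps-smoothing of the absolute value are illustrative choices made for numerical stability, not part of the original formulation:

```python
import math

def tv_denoise_1d(y, lam=0.5, iters=300, step=0.1, eps=0.1):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum |x[i+1] - x[i]|,
    with |.| smoothed as sqrt(d^2 + eps) to keep the gradient defined."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]        # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)     # smoothed TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

Because the penalty acts on jumps between neighbors, the minimizer favors piecewise-constant solutions, which is exactly the staircasing behavior this paper's extensions try to mitigate.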

  17. IMRT treatment plans and functional planning with functional lung imaging from 4D-CT for thoracic cancer patients

    PubMed Central

    2013-01-01

Background and purpose Currently, the inhomogeneity of the pulmonary function is not considered when treatment plans are generated in thoracic cancer radiotherapy. This study evaluates the dose of treatment plans on highly-functional volumes and performs functional treatment planning by incorporation of ventilation data from 4D-CT. Materials and methods Eleven patients were included in this retrospective study. Ventilation was calculated using 4D-CT. Two treatment plans were generated for each case, the first one without the incorporation of the ventilation and the second with it. The dose of the first plans was overlapped with the ventilation and analyzed. Highly-functional regions were avoided in the second treatment plans. Results For small targets in the first plans (PTV < 400 cc, 6 cases), all V5, V20 and the mean lung dose values for the highly-functional regions were lower than those of the total lung. For large targets, two out of five cases had higher V5 and V20 values for the highly-functional regions. All the second plans were within constraints. Conclusion Radiation treatments affect functional lung more seriously in large tumor cases. With some compromise of dose to other critical organs, functional treatment planning to reduce dose in highly-functional lung volumes can be achieved. PMID:23281734

  18. Comparison of two respiration monitoring systems for 4D imaging with a Siemens CT using a new dynamic breathing phantom.

    PubMed

    Vásquez, A C; Runz, A; Echner, G; Sroka-Perez, G; Karger, C P

    2012-05-07

    Four-dimensional computed tomography (4D-CT) requires breathing information from the patient, and for this, several systems are available. Testing of these systems, under realistic conditions, requires a phantom with a moving target and an expandable outer contour. An anthropomorphic phantom was developed to simulate patient breathing as well as lung tumor motion. Using the phantom, an optical camera system (GateCT) and a pressure sensor (AZ-733V) were simultaneously operated, and 4D-CTs were reconstructed with a Siemens CT using the provided local-amplitude-based sorting algorithm. The comparison of the tumor trajectories of both systems revealed discrepancies up to 9.7 mm. Breathing signal differences, such as baseline drift, temporal resolution and noise level were shown not to be the reason for this. Instead, the variability of the sampling interval and the accuracy of the sampling rate value written on the header of the GateCT-signal file were identified as the cause. Interpolation to regular sampling intervals and correction of the sampling rate to the actual value removed the observed discrepancies. Consistently, the introduction of sampling interval variability and inaccurate sampling rate values into the header of the AZ-733V file distorted the tumor trajectory for this system. These results underline the importance of testing new equipment thoroughly, especially if components of different manufacturers are combined.
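The fix described above, interpolating the breathing trace onto regular sampling intervals, can be sketched as linear interpolation onto a uniform grid (function name and signature are illustrative):

```python
def resample_uniform(times, values, fs):
    """Linearly interpolate an irregularly sampled signal onto a uniform
    grid at sampling rate fs (Hz). `times` must be strictly increasing."""
    t0, t1 = times[0], times[-1]
    n = int((t1 - t0) * fs) + 1
    out = []
    j = 0
    for k in range(n):
        t = t0 + k / fs
        # advance to the interval [times[j], times[j+1]] containing t
        while j + 2 < len(times) and times[j + 1] < t:
            j += 1
        w = (t - times[j]) / (times[j + 1] - times[j])
        out.append(values[j] * (1.0 - w) + values[j + 1] * w)
    return out
```

After resampling, the sampling rate written in the signal-file header must also match the true rate, since the sorting algorithm uses it to align the trace with the projection timestamps.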

  19. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    NASA Astrophysics Data System (ADS)

    Díaz, Felipe

    2015-09-01

Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio might also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing complexity and memory requirements, but also for reducing noise, even allowing spurious components to be eliminated. This article proposes a system based on a low-order singular value decomposition for reconstruction and noise reduction in MR imaging. The proposed method is evaluated using in vivo MRI data. Rebuilt images using less than 20% of the original data, and with similar quality in terms of visual inspection, are presented. A quantitative evaluation of the method is also presented.
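Low-order SVD keeps only the dominant singular components and discards the noise-dominated ones. A pure-Python power-iteration sketch of the rank-1 case is shown below; practical code would use a linear-algebra library's full SVD and keep the top k components:

```python
def rank1_approx(A, iters=100):
    """Best rank-1 approximation of a matrix (list of rows) via power
    iteration on A^T A, keeping only the largest singular component."""
    rows, cols = len(A), len(A[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        v = [sum(A[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]   # assumes A has a nonzero top component
    u = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]
```

Storing only the leading singular vectors and values is also where the compression comes from: a rank-k approximation of an m-by-n matrix needs k(m + n + 1) numbers instead of mn.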

  20. A novel CT-FFR method for the coronary artery based on 4D-CT image analysis and structural and fluid analysis

    NASA Astrophysics Data System (ADS)

    Hirohata, K.; Kano, A.; Goryu, A.; Ooga, J.; Hongo, T.; Higashi, S.; Fujisawa, Y.; Wakai, S.; Arakita, K.; Ikeda, Y.; Kaminaga, S.; Ko, B. S.; Seneviratne, S. K.

    2015-03-01

Noninvasive fractional flow reserve derived from CT coronary angiography (CT-FFR) has to date typically been performed using the principles of fluid analysis, in which a lumped-parameter coronary vascular bed model is assigned to represent the impedance of the downstream coronary vascular networks absent from the computational domain for each coronary outlet. This approach may have a number of limitations. It may not account for the impact of myocardial contraction and relaxation during the cardiac cycle, patient-specific boundary conditions for coronary artery outlets, and vessel stiffness. We have developed a novel approach based on 4D-CT image tracking (registration) and structural and fluid analysis to address these issues. In our approach, we analyzed the deformation variation of vessels and the volume variation of vessels, primarily from 70% to 100% of the cardiac phase, to better define boundary conditions and the stiffness of vessels. We used a statistical estimation method based on a hierarchical Bayes model to integrate 4D-CT measurements and structural and fluid analysis data. Under these analysis conditions, we performed structural and fluid analysis to determine pressure, flow rate and CT-FFR. The consistency of this method has been verified by a comparison of 4D-CT-FFR analysis results derived from five clinical 4D-CT datasets with invasive measurements of FFR. Additionally, phantom experiments on flexible tubes with/without stenosis using pulsating pumps, flow sensors and pressure sensors were performed. Our results show that the proposed 4D-CT-FFR analysis method has the potential to accurately estimate the effect of coronary artery stenosis on blood flow.

  1. A weighted dictionary learning model for denoising images corrupted by mixed noise.

    PubMed

    Liu, Jun; Tai, Xue-Cheng; Huang, Haiyang; Huan, Zhongdan

    2013-03-01

This paper proposes a general weighted l2-l0 norm energy minimization model to remove mixed noise such as Gaussian-Gaussian mixture, impulse noise, and Gaussian-impulse noise from images. The approach is built upon a maximum likelihood estimation framework and sparse representations over a trained dictionary. Rather than optimizing the likelihood functional derived from a mixture distribution, we present a new weighting data fidelity function, which has the same minimizer as the original likelihood functional but is much easier to optimize. The weighting function in the model can be determined by the algorithm itself, and it plays a role of noise detection in terms of the different estimated noise parameters. By incorporating the sparse regularization of small image patches, the proposed method can efficiently remove a variety of mixed or single noise while preserving the image textures well. In addition, a modified K-SVD algorithm is designed to address the weighted rank-one approximation. The experimental results demonstrate its better performance compared with some existing methods.

  2. 4-D OCT in Developmental Cardiology

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rollins, Andrew M.

    Although strong evidence exists to suggest that altered cardiac function can lead to CHDs, few studies have investigated the influential role of cardiac function and biophysical forces on the development of the cardiovascular system, due to a lack of proper in vivo imaging tools. 4-D imaging is needed to decipher the complex spatial and temporal patterns of biomechanical forces acting upon the heart. Numerous solutions over the past several years have demonstrated 4-D OCT imaging of the developing cardiovascular system. This chapter will focus on these solutions and explain their context in the evolution of 4-D OCT imaging. The first sections describe the relevant techniques (prospective gating, direct 4-D imaging, retrospective gating), while later sections focus on 4-D Doppler imaging and measurements of force using 4-D OCT Doppler. Finally, the techniques are summarized, and some possible future directions are discussed.

  3. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach

    PubMed Central

    Jung, Caroline; Kaul, Michael Gerhard; Werner, Franziska; Them, Kolja; Reimer, Rudolph; Nielsen, Peter; vom Scheidt, Annika; Adam, Gerhard; Knopp, Tobias; Ittrich, Harald

    2016-01-01

    Purpose In-vitro evaluation of the feasibility of 4D real time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. Materials and Methods A guide wire and angioplasty catheter were labeled with a thin layer of magnetic lacquer. For real time MPI a custom-made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed by inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Results Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI-based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. Magnetic marks consist of micrometer-sized ferromagnetic plates mainly composed of iron and iron oxide. Conclusions 4D real time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI-based roadmapping might emerge as a promising tool for radiation-free 4D MPI-guided interventions. PMID:27249022

  4. MO-C-17A-02: A Novel Method for Evaluating Hepatic Stiffness Based On 4D-MRI and Deformable Image Registration

    SciTech Connect

    Cui, T; Liang, X; Czito, B; Palta, M; Bashir, M; Yin, F; Cai, J

    2014-06-15

    Purpose: Quantitative imaging of hepatic stiffness has significant potential in radiation therapy, ranging from treatment planning to response assessment. This study aims to develop a novel, noninvasive method to quantify liver stiffness with 3D strain maps of the liver using 4D-MRI and deformable image registration (DIR). Methods: Five patients with liver cancer were imaged with an institutionally developed 4D-MRI technique under an IRB-approved protocol. Displacement vector fields (DVFs) across the liver were generated via DIR of different phases of 4D-MRI. The strain tensor at each voxel of interest (VOI) was computed from the relative displacements between the VOI and each of the six adjacent voxels. Three principal strains (E{sub 1}, E{sub 2} and E{sub 3}) of the VOI were derived as the eigenvalues of the strain tensor, which represent the magnitudes of the maximum and minimum stretches. Strain tensors for two regions of interest (ROIs) were calculated and compared for each patient, one within the tumor (ROI{sub 1}) and the other in normal liver distant from the heart (ROI{sub 2}). Results: 3D strain maps were successfully generated for each respiratory phase of 4D-MRI for all patients. Liver deformations induced by both respiration and cardiac motion were observed. Differences in strain values adjacent to and distant from the heart indicate significant deformation caused by cardiac expansion during diastole. The large E{sub 1}/E{sub 2} (∼2) and E{sub 1}/E{sub 3} (∼10) ratios reflect the predominance of liver deformation in the superior-inferior direction. The mean E{sub 1} in ROI{sub 1} (0.12±0.10) was smaller than in ROI{sub 2} (0.15±0.12), reflecting a higher degree of stiffness of the cirrhotic tumor. Conclusion: We have successfully developed a novel method for quantitatively evaluating regional hepatic stiffness based on DIR of 4D-MRI. Our initial findings indicate that liver strain is heterogeneous, and liver tumors may have lower principal strain values
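
    The strain computation described in the Methods can be sketched directly: form the small-deformation strain tensor from the local displacement gradient and take its eigenvalues as the principal strains E1 >= E2 >= E3. The displacement gradient below is a made-up example, not patient data:

```python
import numpy as np

def principal_strains(grad_u):
    """Principal strains: eigenvalues of the small-deformation strain
    tensor E = 0.5 * (grad_u + grad_u.T), sorted as E1 >= E2 >= E3."""
    E = 0.5 * (grad_u + grad_u.T)
    return np.sort(np.linalg.eigvalsh(E))[::-1]

# Hypothetical displacement gradient: 12% stretch superior-inferior with
# mild transverse stretch/compression, mimicking respiration-dominated motion.
grad_u = np.diag([0.12, 0.05, -0.02])
e1, e2, e3 = principal_strains(grad_u)
print(e1, e2, e3)  # 0.12 0.05 -0.02
```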

  5. Imaging 4-D hydrogeologic processes with geophysics: an example using crosswell electrical measurements to characterize a tracer plume

    NASA Astrophysics Data System (ADS)

    Singha, K.; Gorelick, S. M.

    2005-05-01

    Geophysical methods provide an inexpensive way to collect spatially exhaustive data about hydrogeologic, mechanical or geochemical parameters. In the presence of heterogeneity over multiple scales of these parameters at most field sites, geophysical data can contribute greatly to our understanding about the subsurface by providing important data we would otherwise lack without extensive, and often expensive, direct sampling. Recent work has highlighted the use of time-lapse geophysical data to help characterize hydrogeologic processes. We investigate the potential for making quantitative assessments of sodium-chloride tracer transport using 4-D crosswell electrical resistivity tomography (ERT) in a sand and gravel aquifer at the Massachusetts Military Reservation on Cape Cod. Given information about the relation between electrical conductivity and tracer concentration, we can estimate spatial moments from the 3-D ERT inversions, which give us information about tracer mass, center of mass, and dispersivity through time. The accuracy of these integrated measurements of tracer plume behavior is dependent on spatially variable resolution. The ERT inversions display greater apparent dispersion than tracer plumes estimated by 3D advective-dispersive simulation. This behavior is attributed to reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and differential smoothing from tomographic inversion. The latter is a problem common to overparameterized inverse problems, which often occur when real-world budget limitations preclude extensive well-drilling or additional data collection. These results prompt future work on intelligent methods for reparameterizing the inverse problem and coupling additional disparate data sets.

  6. TU-G-BRA-04: Changes in Regional Lung Function Measured by 4D-CT Ventilation Imaging for Thoracic Radiotherapy

    SciTech Connect

    Nakajima, Y; Kadoya, N; Kabus, S; Loo, B; Keall, P; Yamamoto, T

    2015-06-15

    Purpose: To test the hypothesis: 4D-CT ventilation imaging can show the known effects of radiotherapy on lung function: (1) radiation-induced ventilation reductions, and (2) ventilation increases caused by tumor regression. Methods: Repeat 4D-CT scans (pre-, mid- and/or post-treatment) were acquired prospectively for 11 thoracic cancer patients in an IRB-approved clinical trial. A ventilation image for each time point was created using deformable image registration and the Hounsfield unit (HU)-based or Jacobian-based metric. The 11 patients were divided into two subgroups based on tumor volume reduction using a threshold of 5 cm{sup 3}. To quantify radiation-induced ventilation reduction, six patients who showed a small tumor volume reduction (<5 cm{sup 3}) were analyzed for dose-response relationships. To investigate ventilation increase caused by tumor regression, two of the other five patients were analyzed to compare ventilation changes in the lung lobes affected and unaffected by the tumor. The remaining three patients were excluded because there were no unaffected lobes. Results: Dose-dependent reductions of HU-based ventilation were observed in a majority of the patient-specific dose-response curves and in the population-based dose-response curve, whereas no clear relationship was seen for Jacobian-based ventilation. The post-treatment population-based dose-response curve of HU-based ventilation demonstrated the average ventilation reductions of 20.9±7.0% at 35–40 Gy (equivalent dose in 2-Gy fractions, EQD2), and 40.6±22.9% at 75–80 Gy EQD2. Remarkable ventilation increases in the affected lobes were observed for the two patients who showed an average tumor volume reduction of 37.1 cm{sup 3} and re-opening airways. The mid-treatment increase in HU-based ventilation of patient 3 was 100.4% in the affected lobes, which was considerably greater than 7.8% in the unaffected lobes. Conclusion: This study has demonstrated that 4D-CT ventilation imaging shows
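
    The abstract does not state the HU-based ventilation metric explicitly; a common formulation (an assumption here, not necessarily the exact metric used in the trial) follows from conserving tissue mass between DIR-registered exhale and inhale voxels, with HU modeling the voxel air fraction:

```python
import numpy as np

def hu_ventilation(hu_inhale, hu_exhale):
    """Specific volume change from HU values of registered voxels.
    Assumption: a voxel is an air/tissue mixture with HU = -1000 * air
    fraction; conserving tissue mass between exhale and inhale gives
        dV / V_ex = (HU_ex - HU_in) / (1000 + HU_in).
    """
    hu_in = np.asarray(hu_inhale, dtype=float)
    hu_ex = np.asarray(hu_exhale, dtype=float)
    return (hu_ex - hu_in) / (1000.0 + hu_in)

# A voxel going from -850 HU (inhale) to -700 HU (exhale): it holds twice
# the volume at inhale, i.e. a fractional volume change of 1.0.
print(hu_ventilation(-850.0, -700.0))  # 1.0
```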

  7. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    PubMed

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.
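
    The core wavelet-denoising operation being optimized (transform, shrink the detail coefficients, invert) can be illustrated with a one-level Haar transform in plain NumPy; this is a minimal stand-in, not the paper's optimized multi-scale pipeline, and the threshold is an arbitrary choice:

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level orthonormal Haar shrinkage: split into approximation and
    detail bands, soft-threshold the details, reconstruct."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                  # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                  # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                        # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_soft_denoise(noisy, thresh=0.5)
# Shrinkage should lower the mean squared error against the clean signal.
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```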

  8. Nuisance Regression of High-Frequency Functional Magnetic Resonance Imaging Data: Denoising Can Be Noisy.

    PubMed

    Chen, Jingyuan E; Jahanian, Hesamoddin; Glover, Gary H

    2017-02-01

    Recently, emerging studies have demonstrated the existence of brain resting-state spontaneous activity at frequencies higher than the conventional 0.1 Hz. A few groups utilizing accelerated acquisitions have reported persisting signals beyond 1 Hz, which seems too high to be accommodated by the sluggish hemodynamic process underpinning blood oxygen level-dependent contrasts (the upper limit of the canonical model is ∼0.3 Hz). It is thus questionable whether the observed high-frequency (HF) functional connectivity originates from alternative mechanisms (e.g., inflow effects, proton density changes in or near activated neural tissue) or rather is artificially introduced by improper preprocessing operations. In this study, we examined the influence of a common preprocessing step, whole-band linear nuisance regression (WB-LNR), on resting-state functional connectivity (RSFC) and demonstrated through both simulation and analysis of a real dataset that WB-LNR can introduce spurious network structures into the HF bands of functional magnetic resonance imaging (fMRI) signals. Findings of the present study call into question whether published observations on HF-RSFC are partly attributable to improper data preprocessing instead of actual neural activities.
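
    WB-LNR itself is ordinary least-squares residualization against the nuisance regressors; the abstract's caution is that applying this projection across the whole band can imprint nuisance structure on high-frequency signals. A sketch of the regression step on synthetic data (not the study's pipeline):

```python
import numpy as np

def nuisance_regress(data, nuisance):
    """Whole-band linear nuisance regression: project each voxel time
    series onto the orthogonal complement of the nuisance regressors."""
    X = np.column_stack([nuisance, np.ones(len(nuisance))])  # add intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

rng = np.random.default_rng(1)
t = np.arange(400) / 10.0                   # 10 Hz sampling, 40 s
nuis = np.sin(2 * np.pi * 0.05 * t)         # slow physiological drift regressor
data = 0.8 * nuis[:, None] + 0.1 * rng.standard_normal((400, 3))
resid = nuisance_regress(data, nuis)
# Least squares leaves residuals (numerically) orthogonal to the regressor.
print(np.all(np.abs(resid.T @ nuis) < 1e-8))
```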

  9. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    NASA Astrophysics Data System (ADS)

    Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.

    2015-05-01

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT

  10. High-Resolution 4D Imaging of Technetium Transport in Porous Media using Preclinical SPECT-CT

    NASA Astrophysics Data System (ADS)

    Dogan, M.; DeVol, T. A.; Groen, H.; Moysey, S. M.; Ramakers, R.; Powell, B. A.

    2015-12-01

    Preclinical SPECT-CT (single-photon emission computed tomography with integrated X-ray computed tomography) offers the potential to quantitatively image the dynamic three-dimensional distribution of radioisotopes with sub-millimeter resolution, overlaid with structural CT images (20-200 micron resolution), making this an attractive method for studying transport in porous media. A preclinical SPECT-CT system (U-SPECT4CT, MILabs BV, Utrecht, The Netherlands) was evaluated for imaging flow and transport of 99mTc (t1/2=6hrs) using a 46.5 mm by 156.4 mm column packed with individual layers consisting of <0.2mm diameter silica gel, 0.2-0.25, 0.5, 1.0, 2.0, 3.0, and 4.0mm diameter glass beads, and a natural soil sample obtained from the Savannah River Site. The column was saturated with water prior to injecting the 99mTc solution. During the injection the flow was interrupted intermittently for 10 minute periods to allow for the acquisition of a SPECT image of the transport front. Non-uniformity of the front was clearly observed in the images, as was the retarded movement of 99mTc in the soil layer. The latter suggests good potential for monitoring transport processes occurring on the timescale of hours. After breakthrough of 99mTc was achieved, the flow was stopped and SPECT data were collected in one hour increments to evaluate the sensitivity of the instrument as the isotope decayed. Fused SPECT-CT images allowed for improved interpretation of 99mTc distributions within individual pore spaces. With ~3 MBq remaining in the column, the lowest activity imaged, it was not possible to clearly discriminate any of the pore spaces.

  11. A finite element updating approach for identification of the anisotropic hyperelastic properties of normal and diseased aortic walls from 4D ultrasound strain imaging.

    PubMed

    Wittek, Andreas; Derwich, Wojciech; Karatolios, Konstantinos; Fritzen, Claus Peter; Vogt, Sebastian; Schmitz-Rixen, Thomas; Blase, Christopher

    2016-05-01

    Computational analysis of the biomechanics of the vascular system aims at a better understanding of its physiology and pathophysiology and eventually at diagnostic clinical use. Because of great inter-individual variations, such computational models have to be patient-specific with regard to geometry, material properties and applied loads and boundary conditions. Full-field measurements of heterogeneous displacement or strain fields can be used to improve the reliability of parameter identification based on a reduced number of observed load cases as is usually given in an in vivo setting. Time resolved 3D ultrasound combined with speckle tracking (4D US) is an imaging technique that provides full field information of heterogeneous aortic wall strain distributions in vivo. In a numerical verification experiment, we have shown the feasibility of identifying nonlinear and orthotropic constitutive behaviour based on the observation of just two load cases, even though the load free geometry is unknown, if heterogeneous strain fields are available. Only clinically available 4D US measurements of wall motion and diastolic and systolic blood pressure are required as input for the inverse FE updating approach. Application of the developed inverse approach to 4D US data sets of three aortic wall segments from volunteers of different age and pathology resulted in the reproducible identification of three distinct and (patho-) physiologically reasonable constitutive behaviours. The use of patient-individual material properties in biomechanical modelling of AAAs is a step towards more personalized rupture risk assessment.

  12. Prenatal diagnosis of a patent urachus cyst with the use of 2D, 3D, 4D ultrasound and fetal magnetic resonance imaging.

    PubMed

    Fuchs, F; Picone, O; Levaillant, J M; Mabille, M; Mas, A E; Frydman, R; Senat, M V

    2008-01-01

    Patent urachus cyst is a rare umbilical anomaly, which is poorly detected prenatally and frequently confounded with pseudo bladder exstrophy or omphalocele. A 27-year-old woman was referred to our prenatal diagnosis centre at 18 weeks of gestation after diagnosis of a megabladder and 2 umbilical cord cysts. Subsequent 2D, 3D and 4D ultrasound examinations and fetal magnetic resonance imaging (MRI) revealed a typical umbilical cyst and an extra-abdominal cyst, communicating with the vertex of the fetal bladder through a small channel that increased in size when the fetus voided urine. Termination of pregnancy occurred at 31 weeks because of associated cerebral septal agenesis, and autopsy confirmed the prenatal diagnosis of urachus cyst. Few cases of urachus cyst diagnosed prenatally are reported in the literature, but none were associated with other extra-abdominal disorders and none used 3D, 4D and fetal MRI. Our case illustrates the efficiency of 3D and 4D ultrasound examinations in prenatal diagnosis. This could help pediatric surgeons counsel couples prenatally about neonatal surgical repair and plastic reconstruction.

  13. Assessing Cardiac Injury in Mice With Dual Energy-MicroCT, 4D-MicroCT, and MicroSPECT Imaging After Partial Heart Irradiation

    SciTech Connect

    Lee, Chang-Lung; Min, Hooney; Befera, Nicholas; Clark, Darin; Qi, Yi; Das, Shiva; Johnson, G. Allan; Badea, Cristian T.; Kirsch, David G.

    2014-03-01

    Purpose: To develop a mouse model of cardiac injury after partial heart irradiation (PHI) and to test whether dual energy (DE)-microCT and 4-dimensional (4D)-microCT can be used to assess cardiac injury after PHI to complement myocardial perfusion imaging using micro-single photon emission computed tomography (SPECT). Methods and Materials: To study cardiac injury from tangent field irradiation in mice, we used a small-field biological irradiator to deliver a single dose of 12 Gy x-rays to approximately one-third of the left ventricle (LV) of Tie2Cre; p53{sup FL/+} and Tie2Cre; p53{sup FL/−} mice, where 1 or both alleles of p53 are deleted in endothelial cells. Four and 8 weeks after irradiation, mice were injected with gold and iodinated nanoparticle-based contrast agents, and imaged with DE-microCT and 4D-microCT to evaluate myocardial vascular permeability and cardiac function, respectively. Additionally, the same mice were imaged with microSPECT to assess myocardial perfusion. Results: After PHI with tangent fields, DE-microCT scans showed a time-dependent increase in accumulation of gold nanoparticles (AuNp) in the myocardium of Tie2Cre; p53{sup FL/−} mice. In Tie2Cre; p53{sup FL/−} mice, extravasation of AuNp was observed within the irradiated LV, whereas in the myocardium of Tie2Cre; p53{sup FL/+} mice, AuNp were restricted to blood vessels. In addition, data from DE-microCT and microSPECT showed a linear correlation (R{sup 2} = 0.97) between the fraction of the LV that accumulated AuNp and the fraction of LV with a perfusion defect. Furthermore, 4D-microCT scans demonstrated that PHI caused a markedly decreased ejection fraction, and higher end-diastolic and end-systolic volumes, to develop in Tie2Cre; p53{sup FL/−} mice, which were associated with compensatory cardiac hypertrophy of the heart that was not irradiated. Conclusions: Our results show that DE-microCT and 4D-microCT with nanoparticle-based contrast agents are novel imaging approaches
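
    The functional readout from the 4D-microCT scans (ejection fraction from end-diastolic and end-systolic volumes) is a one-line computation; the volumes below are hypothetical, not the study's measurements:

```python
def ejection_fraction(edv, esv):
    """Ejection fraction from end-diastolic (EDV) and end-systolic (ESV)
    volumes: the fraction of the diastolic volume ejected per beat."""
    return (edv - esv) / edv

# Hypothetical mouse left-ventricular volumes (uL): ejection fraction falls
# when end-diastolic and end-systolic volumes rise after irradiation.
print(round(ejection_fraction(60.0, 25.0), 3))  # 0.583 (baseline)
print(round(ejection_fraction(75.0, 45.0), 3))  # 0.4 (dilated, impaired)
```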

  14. Perfusion-weighted imaging and dynamic 4D angiograms for the estimation of collateral blood flow in lacunar infarction.

    PubMed

    Förster, Alex; Mürle, Bettina; Böhme, Johannes; Al-Zghloul, Mansour; Kerl, Hans U; Wenz, Holger; Groden, Christoph

    2016-10-01

    Although lacunar infarction accounts for approximately 25% of ischemic strokes, collateral blood flow through anastomoses is not well evaluated in lacunar infarction. In 111 lacunar infarction patients, we analyzed diffusion-weighted images, perfusion-weighted images, and blood flow on dynamic four-dimensional angiograms generated by use of Signal Processing In NMR-Software. Blood flow was classified as absent (type 1), from periphery to center (type 2), from center to periphery (type 3), and combination of type 2 and 3 (type 4). On diffusion-weighted images, lacunar infarction was found in the basal ganglia (11.7%), internal capsule (24.3%), corona radiata (30.6%), thalamus (24.3%), and brainstem (9.0%). In 58 (52.2%) patients, perfusion-weighted image showed a circumscribed hypoperfusion, in one (0.9%) a circumscribed hyperperfusion, whereas the remainder was normal. In 36 (62.1%) patients, a larger perfusion deficit (>7 mm) was observed. In these, blood flow was classified type 1 in four (11.1%), 2 in 17 (47.2%), 3 in 9 (25.0%), and 4 in six (16.7%) patients. Patients with lacunar infarction in the posterior circulation more often demonstrated blood flow type 2 and less often type 3 (p = 0.01). Detailed examination and graduation of blood flow in lacunar infarction by use of dynamic four-dimensional angiograms is feasible and may serve for a better characterization of this stroke subtype.

  15. An automated landmark-based elastic registration technique for large deformation recovery from 4-D CT lung images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Zacarias, Albert; Milam, Rebecca A.; Dunlap, Neal; Woo, Shiao Y.; Amini, Amir A.

    2012-03-01

    Treatment plan evaluation for lung cancer patients involves pre-treatment and post-treatment volume CT imaging of the lung. However, irradiation of the tumor volume results in structural changes to the lung during the course of treatment. In order to register the pre-treatment volume to the post-treatment volume, there is a need for robust and homologous features which are not affected by the radiation treatment, along with a smooth deformation field. Since airways are well-distributed in the entire lung, in this paper, we propose the use of airway tree bifurcations for registration of the pre-treatment volume to the post-treatment volume. A dedicated and automated algorithm has been developed that finds corresponding airway bifurcations in both images. To derive the 3-D deformation field, a B-spline transformation model guided by a mutual information similarity metric was used to guarantee the smoothness of the transformation while combining global information from bifurcation points. Therefore, the approach combines global statistical intensity information with local image feature information. Since the lung undergoes large nonlinear deformations during normal breathing, it is expected that the proposed method would also be applicable to large-deformation registration between maximum inhale and maximum exhale images in the same subject. The method has been evaluated by registering 3-D CT volumes at maximum exhale to all the other temporal volumes in the POPI-model data.
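
    The mutual information similarity metric that guides the B-spline transformation can be estimated from a joint intensity histogram. A compact sketch (the bin count and the synthetic test images are arbitrary choices, not the paper's implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their joint
    intensity histogram: sum p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
fixed = rng.standard_normal((64, 64))
aligned = 2.0 * fixed + 1.0                      # intensity-remapped but aligned
shuffled = rng.permutation(fixed.ravel()).reshape(64, 64)
# MI rewards statistical dependence, not identical intensities:
print(mutual_information(fixed, aligned) > mutual_information(fixed, shuffled))
```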

  16. A compact representation for the auditory full-range response and its fast denoising using an image filter based on the Radon transform.

    PubMed

    Kohl, Manuel C; Strauss, Daniel J

    2016-08-01

    The Auditory Brainstem, Middle-Latency and Late Responses, a class of event-related potentials (ERPs), are of considerable interest in neuroscience research as robust neural correlates of different processing stages along the auditory pathway. While most research to date centers around one of the responses at a time for practical reasons, recent efforts indicate a paradigm shift towards acquiring them together, enabling the simultaneous monitoring of all auditory processing stages from the brainstem to the cortex. In this paper, we introduce a compact representation for this Auditory Full-Range Response (AFRR) as an ERP map with adaptive sampling rate, making it suitable for computationally inexpensive image filtering. Furthermore, we propose a novel algorithm for the fast denoising of such ERP maps based on the Radon transform and its inversion by filtered backprojection. Its performance is compared qualitatively to a Gaussian means filter using a real-world chirp-evoked AFRR recording. The algorithm exhibits good noise suppression as well as high preservation of the single-response structure, making it a promising denoising tool for future ERP studies.

  17. 4D megahertz optical coherence tomography (OCT): imaging and live display beyond 1 gigavoxel/sec (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huber, Robert A.; Draxinger, Wolfgang; Wieser, Wolfgang; Kolb, Jan Philip; Pfeiffer, Tom; Karpf, Sebastian N.; Eibl, Matthias; Klein, Thomas

    2016-03-01

    Over the last 20 years, optical coherence tomography (OCT) has become a valuable diagnostic tool in ophthalmology, with several tens of thousands of devices sold to date. Other applications, like intravascular OCT in cardiology and gastro-intestinal imaging, will follow. OCT provides 3-dimensional image data with microscopic resolution of biological tissue in vivo. In most applications, off-line processing of the acquired OCT data is sufficient. However, for OCT applications like OCT-aided surgical microscopes, for functional OCT imaging of tissue after a stimulus, or for interactive endoscopy, an OCT engine capable of acquiring, processing and displaying large, high-quality 3D OCT data sets at video rate is highly desired. We developed such a prototype OCT engine and demonstrate live OCT with 25 volumes per second at a size of 320x320x320 pixels. The computational load of more than 1.5 TFLOPS was handled by a GTX 690 graphics processing unit with more than 3000 stream processors operating in parallel. In the talk, we will describe the optics and electronics hardware as well as the software of the system in detail and analyze current limitations. The talk also focuses on new OCT applications where such a system improves diagnosis and monitoring of medical procedures. The additional acquisition of hyperspectral stimulated Raman signals with the system will be discussed.

  18. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations.
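
    The validation measure (mean 3D distance between manually digitised and automatically tracked landmarks) is straightforward to compute; the landmark arrays below are hypothetical, not the study's data:

```python
import numpy as np

def mean_landmark_error(manual, tracked):
    """Mean 3D Euclidean distance (mm) between manually digitised and
    automatically tracked landmark coordinates (N x 3 arrays)."""
    return float(np.mean(np.linalg.norm(manual - tracked, axis=1)))

# Hypothetical 23 landmarks with a uniform 0.3 mm x-offset in the tracked set.
manual = np.zeros((23, 3))
tracked = manual + np.array([0.3, 0.0, 0.0])
err = mean_landmark_error(manual, tracked)
print(err < 0.55)  # True: within the 0.55 mm figure reported above
```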

  19. SU-E-J-151: Dosimetric Evaluation of DIR Mapped Contours for Image Guided Adaptive Radiotherapy with 4D Cone-Beam CT

    SciTech Connect

    Balik, S; Weiss, E; Williamson, J; Hugo, G; Jan, N; Zhang, L; Roman, N; Christensen, G

    2014-06-01

    Purpose: To estimate dosimetric errors resulting from using contours deformably mapped from planning CT to 4D cone beam CT (CBCT) images for image-guided adaptive radiotherapy of locally advanced non-small cell lung cancer (NSCLC). Methods: Ten locally advanced NSCLC patients underwent one planning 4D fan-beam CT (4DFBCT) and weekly 4DCBCT scans. Multiple physicians delineated the gross tumor volume (GTV) and normal structures in planning CT images and only the GTV in CBCT images. Manual contours were mapped from planning CT to CBCTs using the small deformation, inverse consistent linear elastic (SICLE) algorithm for two scans in each patient. Two physicians reviewed and rated the DIR-mapped (auto) and manual GTV contours as clinically acceptable (CA), clinically acceptable after minor modification (CAMM) or clinically unacceptable (CU). Mapped normal structures were visually inspected and corrected if necessary, and used to override tissue density for dose calculation. CTV (6mm expansion of GTV) and PTV (5mm expansion of CTV) were created. VMAT plans were generated using the DIR-mapped contours to deliver 66 Gy in 33 fractions with 95% and 100% coverage (V66) to PTV and CTV, respectively. Plan evaluation for V66 was based on manual PTV and CTV contours. Results: Mean PTV V66 was 84% (range 75% – 95%) and mean CTV V66 was 97% (range 93% – 100%) for CAMM-scored plans (12 plans); and was 90% (range 80% – 95%) and 99% (range 95% – 100%) for CA-scored plans (7 plans). The difference in V66 between CAMM and CA was significant for PTV (p = 0.03) and approached significance for CTV (p = 0.07). Conclusion: The quality of DIR-mapped contours directly impacted the plan quality for 4DCBCT-based adaptation. Larger safety margins may be needed when planning with auto contours for IGART with 4DCBCT images. Research was supported by NIH P01CA116602.

  20. Feasibility of a new image processing (4D Auto LVQ) for assessing right ventricular function in patients with chronic obstructive pulmonary disease.

    PubMed

    Zheng, Xiao-Zhi; Yang, Bin; Wu, Jing

    2014-06-01

    A new single-beat three-dimensional (3D) real-time echocardiographic semi-automatic image processing method (4D Auto LVQ) allows accurate assessment of left ventricular function, but whether it is suitable for the evaluation of right ventricular function remains unknown. To evaluate the feasibility of this procedure for assessing right ventricular volumes and function, right ventricular end-diastolic volume (RVEDV), end-systolic volume (RVESV), ejection fraction (RVEF), stroke volume (SV) and cardiac output (CO) were computed in 49 patients with chronic obstructive pulmonary disease (COPD) using 4D Auto LVQ. The myocardial performance index (MPI) was obtained by Doppler tissue imaging. The RV function parameters were compared with the MPI by linear correlation analysis, and their performance in discriminating MPI values above 0.45 from those at or below it was compared. Compared with normal subjects, patients with COPD had significantly greater RVEDV, RVESV and MPI and significantly lower RVEF. A significant correlation was found between RVEF and MPI (r = -0.67, p < 0.001). The area under the receiver operating characteristic curve for RVEF in this discrimination was 0.72, versus 0.55 for SV and 0.57 for CO. The overall sensitivity, specificity and accuracy of RVEF analysis in predicting an MPI > 0.45 in patients with COPD were 78.57%, 66.67% and 73.46%, respectively. These data suggest that 4D Auto LVQ is a feasible method for quantifying right ventricular volumes and function in patients with COPD. Further studies are needed to improve the accuracy of the measurements.
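The ROC figures reported above (AUC 0.72 for RVEF; sensitivity/specificity at an MPI cut-off of 0.45) can be reproduced in principle on toy data. A minimal NumPy sketch with hypothetical RVEF/MPI values; the rank-sum AUC identity and the 0.46 RVEF threshold below are illustrative choices, not the paper's:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def sens_spec_acc(pred, labels):
    """Sensitivity, specificity and accuracy of a binary prediction."""
    pred = np.asarray(pred, dtype=int)
    labels = np.asarray(labels, dtype=int)
    tp = np.sum((pred == 1) & (labels == 1))
    tn = np.sum((pred == 0) & (labels == 0))
    return tp / labels.sum(), tn / (labels == 0).sum(), (tp + tn) / len(labels)

rvef = np.array([0.40, 0.32, 0.60, 0.48, 0.45, 0.50])  # hypothetical RVEF
mpi_high = np.array([0, 1, 0, 1, 1, 0])                # 1 if MPI > 0.45
auc = roc_auc(-rvef, mpi_high)          # low RVEF should predict high MPI
sens, spec, acc = sens_spec_acc((rvef < 0.46).astype(int), mpi_high)
```

Negating RVEF makes "lower ejection fraction" the positive-going score, matching the clinical direction of the discrimination.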

  1. Multimodal 4D imaging of cell-pathogen interactions in the lungs provides new insights into pulmonary infections

    NASA Astrophysics Data System (ADS)

    Fiole, Daniel; Douady, Julien; Cleret, Aurélie; Garraud, Kévin; Mathieu, Jacques; Quesnel-Hellmann, Anne; Tournier, Jean-Nicolas

    2011-07-01

    Lung efficiency as a gas-exchange organ rests on the delicate balance its associated mucosal immune system maintains between inflammation and sterility. In this study, we developed a dynamic imaging protocol using confocal and two-photon excitation fluorescence (2PEF) microscopy on freshly harvested infected lungs. This modus operandi allowed the collection of important information about CX3CR1+ pulmonary cells. This major immune cell subset turned out to be distributed anisotropically in the lungs: subpleural, parenchymal and bronchial CX3CR1+ cells were described. The reaction of parenchymal CX3CR1+ cells to LPS activation was analyzed using Matlab software, demonstrating a dramatic increase in average cell speed. Interactions between Bacillus anthracis spores and CX3CR1+ dendritic cells were then investigated, providing not only evidence of CX3CR1+ cell involvement in pathogen uptake but also details about the capture mechanisms.

  2. 4D Imaging of Salt Precipitation during Evaporation from Saline Porous Media Influenced by the Particle Size Distribution

    NASA Astrophysics Data System (ADS)

    Norouzi Rad, M.; Shokri, N.

    2014-12-01

    Understanding the physics of water evaporation from saline porous media is important in many contexts, such as vegetation and plant growth, biodiversity in soil, and the durability of building materials. To investigate the effect of particle size distribution on the dynamics of salt precipitation in saline porous media during evaporation, we applied an X-ray micro-tomography technique. Six samples of quartz sand with different grain size distributions were used, enabling us to constrain the effects of particle and pore sizes on salt precipitation patterns and dynamics. The pore size distributions were computed from the pore-scale X-ray images. The packed beds were saturated with a 3 molal NaCl solution, and X-ray imaging was continued for one day at a temporal resolution of 30 min, yielding pore-scale information about the evaporation and precipitation dynamics. Our results show more precipitation at the early stage of evaporation in the sand with the larger particle size, due to the presence of fewer evaporation sites at the surface. The presence of more preferential evaporation sites at the surface of finer sands significantly modified the patterns and thickness of the salt crust deposited on the surface: in finer sands a thinner crust formed that covered a larger area, as opposed to the thicker, patchy crusts in samples with larger particles. Our results provide new insights into the physics of salt precipitation in porous media during evaporation.

  3. Wavelet denoising in voxel-based parametric estimation of small animal PET images: a systematic evaluation of spatial constraints and noise reduction algorithms.

    PubMed

    Su, Yi; Shoghi, Kooresh I

    2008-11-07

    Voxel-based estimation of PET images, generally referred to as parametric imaging, can provide invaluable information about the heterogeneity of an imaging agent in a given tissue. Due to the high level of noise in dynamic images, however, the estimated parametric image is often noisy and unreliable. Several approaches have been developed to address this challenge, including spatial noise reduction techniques, cluster analysis and spatially constrained weighted nonlinear least-squares (SCWNLS) methods. In this study, we develop and test several noise reduction techniques combined with SCWNLS using simulated dynamic PET images. Both spatial smoothing filters and wavelet-based noise reduction techniques are investigated. In addition, 12 different parametric imaging methods are compared using simulated data. With the combination of noise reduction techniques and SCWNLS methods, more accurate parameter estimation can be achieved than with either of the two techniques alone. A relative root-mean-square error of less than 10% is achieved with the combined approach in the simulation study. The wavelet-denoising-based approach is less sensitive to noise and provides more accurate parameter estimation at higher noise levels. Further evaluation of the proposed methods is performed using actual small animal PET datasets. We expect the proposed method to be useful for cardiac, neurological and oncologic applications.
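The wavelet-based noise reduction that the study combines with SCWNLS can be illustrated in its simplest form: a one-level Haar decomposition with soft thresholding of the detail band. This is a generic shrinkage sketch, not the authors' specific wavelet or threshold rule:

```python
import numpy as np

def haar_denoise_1d(x, thresh):
    """One-level Haar wavelet shrinkage on a 1-D signal (e.g. one voxel's
    time-activity curve). Assumes len(x) is even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    # soft-threshold the detail band, where most of the noise lives
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    # inverse Haar transform
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With `thresh = 0` the transform is perfectly invertible; increasing the threshold progressively suppresses high-frequency noise while leaving the smooth trend intact.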

  4. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and cannot be done truly in real-time and 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing for acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently-developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high, 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution was subsequently imaged in the heart. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which has been previously unachieved by any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential for optoacoustic cardiac imaging and is expected to have a major contribution toward future preclinical studies of animal models of cardiovascular health and disease.
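The beat-by-beat functional parameters mentioned (stroke volume, ejection fraction) follow directly from a per-beat volume series. A sketch with hypothetical murine LV volumes, assuming the volumetric frames of each beat have already been segmented and grouped:

```python
import numpy as np

# Per beat: stroke volume SV = EDV - ESV, ejection fraction EF = SV / EDV.
# The ~7-8 frames per cycle reported make max/min per beat a reasonable
# estimate of end-diastolic/end-systolic volume. Values are hypothetical (uL).
lv_volumes = np.array([52, 38, 24, 30, 44, 53, 37, 23, 31, 45], float)
beats = [lv_volumes[0:5], lv_volumes[5:10]]   # frames grouped per beat

for k, beat in enumerate(beats, 1):
    edv, esv = beat.max(), beat.min()
    sv = edv - esv
    ef = sv / edv
    print(f"beat {k}: SV = {sv:.0f} uL, EF = {ef:.0%}")
```

Repeating the calculation per beat, rather than over an ensemble-averaged cycle, is what the high volumetric frame rate makes possible.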

  5. Verifying 4D gated radiotherapy using time-integrated electronic portal imaging: a phantom and clinical study

    PubMed Central

    van Sörnsen de Koste, John R; Cuijpers, Johan P; de Geest, Frank GM; Lagerwaard, Frank J; Slotman, Ben J; Senan, Suresh

    2007-01-01

    Background: Respiration-gated radiotherapy (RGRT) can decrease treatment toxicity by allowing for smaller treatment volumes for mobile tumors. RGRT is commonly performed using external surrogates of tumor motion. We describe the use of time-integrated electronic portal imaging (TI-EPI) to verify the position of internal structures during RGRT delivery. Methods: TI-EPI portals were generated by continuously collecting exit dose data (aSi500 EPID, Portal Vision, Varian Medical Systems) while a respiratory motion phantom was irradiated during expiration, inspiration and free breathing phases. RGRT was delivered using the Varian RPM system, and grey value profile plots over a fixed trajectory were used to study object positions. Time-related positional information was derived by subtracting grey values from TI-EPI portals sharing the pixel matrix. TI-EPI portals were also collected in 2 patients undergoing RPM-triggered RGRT for a lung and a hepatic tumor (with fiducial markers), and the corresponding planning 4-dimensional CT (4DCT) scans were analyzed for motion amplitude. Results: Integral grey values of phantom TI-EPI portals correlated well with mean object position in all respiratory phases. Cranio-caudal motion of internal structures ranged from 17.5-20.0 mm on planning 4DCT scans. TI-EPI bronchial images reproduced with a mean value of 5.3 mm (1 SD 3.0 mm) cranial to the planned position. Hepatic fiducial markers reproduced with a mean of 3.2 mm (SD 2.2 mm) caudal to the planned position. After bony alignment to exclude set-up errors, mean displacement of the two structures was 2.8 mm and 1.4 mm, respectively, and the corresponding reproducibility in anatomy improved to 1.6 mm (1 SD). Conclusion: TI-EPI appears to be a promising method for verifying delivery of RGRT. The RPM system was a good indirect surrogate of internal anatomy, but use of TI-EPI allowed for a direct link between anatomy and breathing patterns. PMID:17760960

  6. Integration of speckle de-noising and image segmentation using Synthetic Aperture Radar image for flood extent extraction

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Shenoy, H. Vikram; Rajendra, Ritwik; Omkar, S. N.; Mani, V.; Diwakar, P. G.

    2013-06-01

    Flood is one of the most detrimental hydro-meteorological threats to mankind, which compels very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise. Hence, the processing of the SAR image is applied in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The Gamma MAP filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is the more efficient for flood mapping.
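Of the three speckle filters compared, the Lee filter is the simplest to sketch: each pixel becomes the local mean plus a data-adaptive fraction of the residual, so edges are preserved while homogeneous speckle is smoothed. The window size and speckle coefficient of variation below are illustrative, not the paper's settings:

```python
import numpy as np

def lee_filter(img, win=3, sigma_v=0.25):
    """Basic Lee speckle filter under a multiplicative-noise model.
    `sigma_v` is the assumed speckle coefficient of variation."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            # weight k -> 1 in heterogeneous areas (edges kept),
            # k -> 0 in homogeneous areas (speckle averaged away)
            k = max(0.0, 1.0 - (sigma_v * mean) ** 2 / var) if var > 0 else 0.0
            out[i, j] = mean + k * (img[i, j] - mean)
    return out
```

In a perfectly homogeneous region the local variance is pure speckle, the weight collapses to zero, and the filter returns the local mean.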

  7. Validating and improving CT ventilation imaging by correlating with ventilation 4D-PET/CT using {sup 68}Ga-labeled nanoparticles

    SciTech Connect

    Kipritidis, John Keall, Paul J.; Siva, Shankar; Hofman, Michael S.; Callahan, Jason; Hicks, Rodney J.

    2014-01-15

    Purpose: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with {sup 68}Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. Methods: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (V{sub HU}) or the Jacobian determinant of deformation (V{sub Jac}). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρV{sub HU} and ρV{sub Jac}) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σ{sub m} = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d{sub 20} for the (0-20)th functional percentile volumes. Results: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρV{sub HU}) with σ{sub m} = 3 mm. This leads to correlation values in the ranges 0.22 ⩽ r ⩽ 0.76 and 0.38 ⩽ d{sub 20} ⩽ 0.68, with mean r = 0.42 ± 0.16 and mean d{sub 20} = 0.52 ± 0.09 averaged over the 12 patients. Compared to Jacobian-based metrics, HU-based metrics lead to statistically significant
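The HU-change metric (V{sub HU}) and the Spearman r used for the CT-vs-PET comparison can be sketched as follows. The air/tissue mixture formula is the standard one from the CT ventilation literature (fractional air-volume change per voxel between registered exhale and inhale scans), and the rank correlation below assumes no ties:

```python
import numpy as np

def vent_hu(hu_ex, hu_in):
    """HU-change ventilation metric: fractional air-volume change per voxel,
    using the -1000 HU (air) / 0 HU (tissue) mixture model."""
    hu_ex = np.asarray(hu_ex, float)  # exhale HU (registered)
    hu_in = np.asarray(hu_in, float)  # inhale HU
    return 1000.0 * (hu_in - hu_ex) / (hu_ex * (1000.0 + hu_in))

def spearman(a, b):
    """Spearman rank correlation (no tie handling), as used to compare
    CT and PET ventilation values within the lung volume."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))
```

For a voxel going from -800 HU at exhale to -900 HU at inhale, `vent_hu` gives 1.25, i.e. the air content more than doubles relative to exhale.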

  8. 4D seismic to image a thin carbonate reservoir during a miscible CO2 flood: Hall-Gurney Field, Kansas, USA

    USGS Publications Warehouse

    Raef, A.E.; Miller, R.D.; Franseen, E.K.; Byrnes, A.P.; Watney, W.L.; Harrison, W.E.

    2005-01-01

    The movement of miscible CO2 injected into a shallow (900 m), thin (3.6-6 m) carbonate reservoir was monitored using the high-resolution parallel progressive blanking (PPB) approach. The approach concentrated on repeatability during acquisition and processing, and on the use of amplitude envelope 4D horizon attributes. Comparison of production data and reservoir simulations to the seismic images provided a measure of the effectiveness of time-lapse (TL) seismic monitoring in detecting weak anomalies associated with changes in fluid concentration. Specifically, the method aided the analysis of high-resolution data to distinguish subtle seismic characteristics and associated trends related to depositional lithofacies, geometries and structural elements of this carbonate reservoir that impact fluid character and EOR efforts.

  9. Direct 4D PET MLEM reconstruction of parametric images using the simplified reference tissue model with the basis function method for [¹¹C]raclopride.

    PubMed

    Gravel, Paul; Reader, Andrew J

    2015-06-07

    This work assesses the one-step late maximum likelihood expectation maximization (OSL-MLEM) 4D PET reconstruction algorithm for direct estimation of parametric images from raw PET data when using the simplified reference tissue model with the basis function method (SRTM-BFM) for the kinetic analysis. To date, the OSL-MLEM method has been evaluated using kinetic models based on two-tissue compartments with an irreversible component. We extend the evaluation of this method for two-tissue compartments with a reversible component, using SRTM-BFM on simulated 3D + time data sets (with use of [(11)C]raclopride time-activity curves from real data) and on real data sets acquired with the high resolution research tomograph. The performance of the proposed method is evaluated by comparing voxel-level binding potential (BPND) estimates with those obtained from conventional post-reconstruction kinetic parameter estimation. For the commonly chosen number of iterations used in practice, our results show that for the 3D + time simulation, the direct method delivers results with lower (%)RMSE at the normal count level (decreases of 9-10 percentage points, corresponding to a 38-44% reduction), and also at low count levels (decreases of 17-21 percentage points, corresponding to a 26-36% reduction). As for the real 3D data set, the results obtained follow a similar trend, with the direct reconstruction method offering a 21% decrease in (%)CV compared to the post reconstruction method at low count levels. Thus, based on the results presented herein, using the SRTM-BFM kinetic model in conjunction with the OSL-MLEM direct 4D PET MLEM reconstruction method offers an improvement in performance when compared to conventional post reconstruction methods.
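The SRTM-BFM step described above reduces kinetic fitting to a 1-D search over candidate efflux rates, each of which makes the remaining parameters linear. A discretized sketch (uniform time grid, rectangle-rule convolution); the grid of candidate thetas and the tolerance choices are illustrative:

```python
import numpy as np

def srtm_bfm(ct, cr, t, thetas):
    """Basis-function fit of the simplified reference tissue model:
        C_T(t) = R1*C_R(t) + (k2 - R1*k2a) * [C_R conv exp(-k2a*t)].
    For each candidate k2a = theta the model is linear in (R1, k2 - R1*k2a),
    so a least-squares fit per theta plus a 1-D search over theta gives the
    best fit. Returns (R1, k2, BPnd) with BPnd = k2/k2a - 1."""
    t = np.asarray(t, float)
    dt = t[1] - t[0]
    best = None
    for th in thetas:
        # basis function: reference TAC convolved with exp(-theta*t)
        basis = np.convolve(cr, np.exp(-th * t))[: len(t)] * dt
        A = np.stack([cr, basis], axis=1)
        coef = np.linalg.lstsq(A, ct, rcond=None)[0]
        rss = float(((A @ coef - ct) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, th, coef)
    _, k2a, (r1, c) = best
    k2 = c + r1 * k2a
    return r1, k2, k2 / k2a - 1.0
```

Given a target time-activity curve `ct`, a reference-region curve `cr` on a uniform grid `t`, and candidate `thetas`, this returns R1, k2 and the binding potential BPnd that the direct 4D method estimates voxel-wise.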

  10. MRI denoising using non-local means.

    PubMed

    Manjón, José V; Carbonell-Caballero, José; Lull, Juan J; García-Martí, Gracián; Martí-Bonmatí, Luís; Robles, Montserrat

    2008-08-01

    Magnetic Resonance (MR) images are affected by random noise which limits the accuracy of any quantitative measurements from the data. In the present work, a recently proposed filter for random noise removal is analyzed and adapted to reduce this noise in MR magnitude images. This parametric filter, named Non-Local Means (NLM), is highly dependent on the setting of its parameters. The aim of this paper is to find the optimal parameter selection for MR magnitude image denoising. For this purpose, experiments have been conducted to find the optimum parameters for different noise levels. In addition, the filter has been adapted to fit the specific characteristics of the noise in MR magnitude images (i.e. Rician noise). From the results over synthetic and real images we can conclude that this filter can be successfully used for automatic MR denoising.
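The NLM filter averages pixels weighted by patch similarity, and the parameters the paper optimizes are essentially the patch size, search window and decay constant h. A minimal, unoptimized pixelwise sketch (ignoring the Rician adaptation, which the paper adds on top):

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=10.0):
    """Pixelwise non-local means. `patch` and `search` are half-widths;
    `h` controls the decay of the patch-similarity weights. Defaults are
    illustrative, not the paper's optimized settings."""
    img = np.asarray(img, float)
    H, W = img.shape
    p = patch
    padded = np.pad(img, p + search, mode="reflect")
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + search, j + p + search
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = float(((ref - cand) ** 2).mean())
                    w = np.exp(-d2 / (h * h))  # similar patches weigh more
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```

Production implementations (e.g. `skimage.restoration.denoise_nl_means`) vectorize this heavily; the quadruple loop here only makes the weighting scheme explicit.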

  11. 4D in-vivo ultrafast ultrasound imaging using a row-column addressed matrix and coherently-compounded orthogonal plane waves.

    PubMed

    Flesch, Martin; Pernot, Mathieu; Provost, Jean; Ferin, Guillaume; Nguyen-Dinh, An; Tanter, Mickael; Deffieux, Thomas

    2017-03-01

    4D ultrafast ultrasound imaging was recently demonstrated using a 2D matrix (i.e., fully populated) array connected to a 1024-channel ultrafast ultrasound scanner. In this study, we investigate the Row-Column Addressing (RCA) matrix approach, which reduces the number of independent channels from N x N to N + N, with a dedicated beamforming strategy for ultrafast ultrasound imaging based on the coherent compounding of Orthogonal Plane Waves (OPW). OPW is based on coherent compounding of plane wave transmissions in one direction with receive beamforming along the orthogonal direction, together with its orthogonal companion sequence. Such coherent recombination of complementary orthogonal sequences leads to virtual transmit focusing in both directions, resulting in a final isotropic Point Spread Function (PSF). In this study, a 32 x 32 2D matrix array probe (1024 channels) centered at 5 MHz was considered. An RCA array of the same footprint, with 32 + 32 elements (64 channels), was emulated by summing the elements along each line or column in software prior to beamforming. This allowed a direct comparison of the 32 + 32 RCA scheme to the optimal fully sampled 32 x 32 2D matrix configuration, which served as the gold standard. The approach was first studied through PSF simulations and then validated experimentally on a phantom consisting of anechoic cysts and echogenic wires. The Contrast-to-Noise Ratio (CNR) and the lateral resolution of the RCA approach were found to be approximately half (in decibels) and twice the values, respectively, obtained with the 2D matrix approach. Results in a Doppler phantom and the human humeral artery in vivo confirmed that OPW compound imaging using an emulated RCA matrix can achieve power Doppler imaging with sufficient contrast to recover the vein shape, and provides an accurate Doppler spectrum.
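The RCA emulation described (summing element signals along lines or columns in software prior to beamforming) reduces 32 x 32 = 1024 channels to 32 + 32 = 64 and is straightforward to express. The array below stands in for one time sample per element; real data would carry a time axis as well:

```python
import numpy as np

N = 32
rng = np.random.default_rng(0)
rf = rng.standard_normal((N, N))      # hypothetical per-element samples

# Row-column addressing: each row and each column is wired as one channel,
# so the element signals are summed along the orthogonal axis.
row_channels = rf.sum(axis=1)         # 32 row signals
col_channels = rf.sum(axis=0)         # 32 column signals

print(N * N, "->", N + N, "channels")
```

The two channel sets drive the orthogonal plane-wave sequences: transmit along rows with receive on columns, then the companion sequence with the roles swapped.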

  12. Feasibility of quantitative lung perfusion by 4D CT imaging by a new dynamic-scanning protocol in an animal model

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Goldin, Jonathan G.; Abtin, Fereidoun G.; Brown, Matt; McNitt-Gray, Mike

    2008-03-01

    The purpose of this study is to test a new dynamic perfusion-CT imaging protocol in an animal model and to investigate the feasibility of quantifying perfusion of lung parenchyma for functional analysis from 4D CT image data. A novel perfusion-CT protocol was designed with 25 scanning time points: the first at baseline and 24 scans after a bolus injection of contrast material. Post-contrast CT images were acquired at a high sampling rate before the first blood recirculation and then at a relatively low sampling rate until 10 minutes after administration of the contrast agent. Lower radiation techniques were used to keep the radiation dose to an acceptable level. Two Yorkshire swine with pulmonary emboli underwent this perfusion-CT protocol at suspended end inspiration. Software tools were designed to measure the quantitative perfusion parameters (perfusion, permeability, relative blood volume, blood flow, wash-in and wash-out enhancement) of a voxel or region of interest in the lung. The perfusion values were calculated for further lung functional analysis and presented visually as contrast enhancement maps for the volume being examined. The results show that the increased CT temporal sampling rate makes it feasible to quantify lung function and evaluate pulmonary emboli. Differences between areas with known perfusion defects and those without were observed. In conclusion, these techniques for calculating lung perfusion in an animal model have potential application in human lung functional analysis, such as evaluation of the functional effects of pulmonary embolism. With further study, they might be applicable to human lung parenchyma characterization and possibly lung nodule characterization.
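Several of the listed parameters (wash-in and wash-out enhancement, relative blood volume) can be sketched from a single voxel's time-attenuation curve. The abstract does not give exact formulas, so the definitions below are simple curve-shape versions, and the sample curve is hypothetical:

```python
import numpy as np

# One voxel's time-attenuation curve: baseline scan, then post-contrast
# samples (dense early, sparse late), mirroring the 25-point protocol.
t = np.array([0, 2, 4, 6, 8, 12, 20, 40, 60], float)        # seconds
hu = np.array([30, 35, 70, 110, 95, 80, 60, 45, 38], float)  # HU

baseline = hu[0]
enh = hu - baseline                     # contrast enhancement curve
peak_idx = int(enh.argmax())
wash_in = enh[peak_idx]                 # peak enhancement above baseline (HU)
wash_out = enh[peak_idx] - enh[-1]      # decay from peak to last sample (HU)
# area under the enhancement curve, proportional to relative blood volume
rel_blood_volume = float(np.sum((enh[1:] + enh[:-1]) / 2.0 * np.diff(t)))

print(wash_in, wash_out, rel_blood_volume)
```

Mapping these per-voxel quantities over the lung produces the contrast enhancement maps the study uses to distinguish perfused from embolized regions.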

  13. A Novel Fast Helical 4D-CT Acquisition Technique to Generate Low-Noise Sorting Artifact–Free Images at User-Selected Breathing Phases

    SciTech Connect

    Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel

    2014-05-01

    Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.
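The factor-of-5 noise reduction reported for averaging the 25 deformed images follows from the standard result that averaging n independent noise realizations scales the standard deviation by 1/sqrt(n). A quick synthetic check with Gaussian image noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n, shape = 25, (128, 128)
# 25 noisy "images" of the same (here zero) anatomy, sigma = 10
stack = rng.normal(0.0, 10.0, size=(n,) + shape)
avg = stack.mean(axis=0)

# single image noise ~10, averaged noise ~10/sqrt(25) = 2
print(round(float(stack[0].std()), 2), round(float(avg.std()), 2))
```

The same arithmetic underlies the claim that the full imaging dose contributes to final image quality: every acquisition ends up in the averaged reference image rather than being discarded by phase sorting.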

  14. 4-D imaging of seepage in earthen embankments with time-lapse inversion of self-potential data constrained by acoustic emissions localization

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.

    2015-02-01

    New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We present a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earthen dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-d full-scale levee (earthen embankment) failure test conducted in Booneschans, Netherlands, in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for localizing acoustic emissions in both time and space, and apply the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are used to build a set of time-invariant yet spatially varying model weights for the inversion of the self-potential data. Combining these two passive techniques yields results that are more consistent with visual observations of focused groundwater flow on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means of early detection and imaging of the development of embankment defects associated with concentrated seepage and internal erosion. The same approach can be used to detect various types of hydromechanical disturbances at larger scales.

  15. GL4D: a GPU-based architecture for interactive 4D visualization.

    PubMed

    Chu, Alan; Fu, Chi-Wing; Hanson, Andrew J; Heng, Pheng-Ann

    2009-01-01

    This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.
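The per-vertex 4D modelview transform and 4D-to-3D projection that GL4D performs in the vertex shader can be sketched on the CPU. The x-w rotation and the camera position on the w-axis below are illustrative choices, not GL4D's actual defaults:

```python
import numpy as np

def rot_xw(theta):
    """4x4 rotation in the x-w plane, one of the six rotation planes
    available to a 4D 'modelview' transform."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 3] = c, -s
    m[3, 0], m[3, 3] = s, c
    return m

def project_4d_to_3d(p, eye_w=4.0):
    """Perspective-project the 4D point p = (x, y, z, w) from a camera on
    the w-axis at w = eye_w onto the w = 0 hyperplane."""
    scale = eye_w / (eye_w - p[3])
    return p[:3] * scale

# rotate a point on the x-axis fully into the w-axis, then project:
# it lands at the 3D origin, "behind" the hyperplane from the camera's view
p = rot_xw(np.pi / 2) @ np.array([1.0, 0.0, 0.0, 0.0])
print(project_4d_to_3d(p))
```

This is the exact analogue of the familiar 3D perspective divide, one dimension up; GL4D then rasterizes the projected tetrahedra into volume slices rather than triangles into pixels.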

  16. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

    INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions which is pivotal to Numerical Weather Prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Of the 18 sounder channels, radiances from 9 channels are selected for assimilation, depending on the relevance of the information in each channel. The first three high-peaking channels, the CO2 absorption channels, and the three water vapor channels (channels 10, 11 and 12) are assimilated both over land and ocean, whereas the window channels (channels 6, 7 and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality. This is based on the assumption that the observed satellite radiances are free from calibration errors and that the short-range forecast provided by the NWP model is free from systematic errors. Innovations (Observation - Forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time and scan angle, and also due to instrument degradation, an accurate bias correction algorithm is important for the assimilation of INSAT-3D sounder radiances. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels and the impact of bias-corrected radiances in the data assimilation system, particularly over India and the surrounding oceanic regions.
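The innovation-based monitoring described (Observation - Forecast statistics before and after bias correction) can be illustrated with the simplest static scheme, a per-channel mean-innovation correction. Operational systems, including the one described here, use predictor-based schemes (air mass, scan angle) instead; this only shows the principle:

```python
import numpy as np

rng = np.random.default_rng(1)
# model-equivalent brightness temperatures for one channel (K), hypothetical
background = 250.0 + rng.standard_normal(500)
bias_true = 1.5                               # instrument/RT-model bias (K)
obs = background + bias_true + 0.3 * rng.standard_normal(500)

innov_before = obs - background               # O - B, biased
bias_est = innov_before.mean()                # static bias estimate
obs_corrected = obs - bias_est
innov_after = obs_corrected - background      # O - B, ~zero-mean

print(round(float(innov_before.mean()), 2), round(float(innov_after.mean()), 2))
```

A near-zero mean innovation after correction, with unchanged spread, is the signature of a bias correction that is working without damping real weather signal.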

  17. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research: Computer models that take account of body movements promise to provide evaluation and improvement of medical imaging devices and technology.

    PubMed

    Paul Segars, W; Tsui, Benjamin M W

    2009-12-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal "hybrid" models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. Like stylized models, they can easily be

  18. Improving 4D plan quality for PBS-based liver tumour treatments by combining online image guided beam gating with rescanning

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Knopf, Antje-Christin; Weber, Damien Charles; Lomax, Antony John

    2015-10-01

Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method in conventional radiotherapy for mitigating tumour motion, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs by extracting motion information from 4DMRI. The value of 4DCT(MRI) lies in its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing pattern between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beams’ eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm), with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning, using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own for such cases would result in much longer treatment times. In addition, when rescanning is applied on its own, large differences between volumetric

  19. Improving 4D plan quality for PBS-based liver tumour treatments by combining online image guided beam gating with rescanning.

    PubMed

    Zhang, Ye; Knopf, Antje-Christin; Weber, Damien Charles; Lomax, Antony John

    2015-10-21

Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method in conventional radiotherapy for mitigating tumour motion, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs by extracting motion information from 4DMRI. The value of 4DCT(MRI) lies in its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing pattern between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beams' eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm), with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning, using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own for such cases would result in much longer treatment times. In addition, when rescanning is applied on its own, large differences between volumetric

  20. [Adaptive Wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image].

    PubMed

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-05-20

In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by streak artifacts frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove streak artifacts. In designing the WF, it is necessary to estimate the statistical model and the precise covariances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using these images for training. Simulation results showed that it is possible to fit the UNI-GMM to chest X-ray CT images and reduce the specific noise.
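The core of any Wiener filter is a per-pixel gain that shrinks toward the local mean where the signal-to-noise ratio is low. The sketch below is a minimal locally adaptive Wiener (Lee) filter for additive Gaussian noise, not the paper's UNI-GMM variant: local sample statistics stand in for the learned Gaussian-mixture model of the original image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, noise_var, size=5):
    """Pixel-wise adaptive Wiener (Lee) filter.

    Flat regions (local variance ~ noise variance) are replaced by the
    local mean; textured regions (large local variance) pass through
    nearly unchanged. `noise_var` is assumed known or pre-estimated.
    """
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq = uniform_filter(img ** 2, size)
    local_var = np.maximum(local_sq - local_mean ** 2, 0.0)
    # estimated signal variance, clipped so the gain stays in [0, 1]
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    return local_mean + gain * (img - local_mean)
```

A model-based filter such as the UNI-GMM WF replaces these local moment estimates with covariances drawn from the trained mixture components.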

  1. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.

  2. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.

  3. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
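D-GAMP alternates a linear measurement-consistency step with a nonlinear denoising step. That two-phase structure is easiest to see in ISTA with soft thresholding, a much simpler relative: it lacks D-GAMP's Onsager correction term and uses the sparsity-promoting soft threshold as the "denoiser". The sketch below is illustrative only; the operator, step size, and problem sizes are assumptions, not the paper's setup.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding: the denoiser implied by a sparse (Laplacian) prior."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=1500):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Each iteration: gradient step toward measurement consistency,
    then a denoising step -- the same skeleton D-GAMP builds on.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

Swapping the soft threshold for a generic image denoiser (and adding the Onsager term) is what turns this skeleton into a denoising-AMP scheme.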

  4. WE-AB-204-03: A Novel 3D Printed Phantom for 4D PET/CT Imaging and SIB Radiotherapy Verification

    SciTech Connect

    Soultan, D; Murphy, J; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To construct and test a 3D printed phantom designed to mimic variable PET tracer uptake seen in lung tumor volumes. To assess segmentation accuracy of sub-volumes of the phantom following 4D PET/CT scanning with ideal and patient-specific respiratory motion. To plan, deliver and verify delivery of PET-driven, gated, simultaneous integrated boost (SIB) radiotherapy plans. Methods: A set of phantoms and inserts were designed and manufactured for a realistic representation of lung cancer gated radiotherapy steps from 4D PET/CT scanning to dose delivery. A cylindrical phantom (40x 120 mm) holds inserts for PET/CT scanning. The novel 3D printed insert dedicated to 4D PET/CT mimics high PET tracer uptake in the core and lower uptake in the periphery. This insert is a variable density porous cylinder (22.12×70 mm), ABS-P430 thermoplastic, 3D printed by uPrint SE Plus with inner void volume (5.5×42 mm). The square pores (1.8×1.8 mm2 each) fill 50% of outer volume, resulting in a 2:1 SUV ratio of PET-tracer in the void volume with respect to porous volume. A matching in size cylindrical phantom is dedicated to validate gated radiotherapy. It contains eight peripheral holes matching the location of the porous part of the 3D printed insert, and one central hole. These holes accommodate adaptors for Farmer-type ion chamber and cells vials. Results: End-to-end test were performed from 4D PET/CT scanning to transferring data to the planning system and target volume delineation. 4D PET/CT scans were acquired of the phantom with different respiratory motion patterns and gating windows. A measured 2:1 18F-FDG SUV ratio between inner void and outer volume matched the 3D printed design. Conclusion: The novel 3D printed phantom mimics variable PET tracer uptake typical of tumors. Obtained 4D PET/CT scans are suitable for segmentation, treatment planning and delivery in SIB gated treatments of NSCLC.

  5. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    SciTech Connect

Bildhauer, Michael; Fuchs, Martin

    2012-12-15

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  6. Geometric properties of solutions to the total variation denoising problem

    NASA Astrophysics Data System (ADS)

    Chambolle, Antonin; Duval, Vincent; Peyré, Gabriel; Poon, Clarice

    2017-01-01

This article studies the denoising performance of total variation (TV) image regularization. More precisely, we study geometrical properties of the solution to the so-called Rudin-Osher-Fatemi total variation denoising method. The first contribution of this paper is a precise mathematical definition of the ‘extended support’ (associated to the noise-free image) of TV denoising. It is intuitively the region which is unstable and will suffer from the staircasing effect. We highlight in several practical cases, such as the indicator of convex sets, that this region can be determined explicitly. Our second and main contribution is a proof that the TV denoising method indeed restores an image which is exactly constant outside a small tube surrounding the extended support. The radius of this tube shrinks toward zero as the noise level vanishes, and we are able to determine, in some cases, an upper bound on the convergence rate. For indicators of so-called ‘calibrable’ sets (such as disks or properly eroded squares), this extended support matches the edges, so that discontinuities produced by TV denoising cluster tightly around the edges. In contrast, for indicators of more general shapes or for complicated images, this extended support can be larger. Besides these main results, our paper also proves several intermediate results about fine properties of TV regularization, in particular for indicators of calibrable and convex sets, which are of independent interest.
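The Rudin-Osher-Fatemi problem analysed here is commonly solved numerically with Chambolle's dual projection algorithm. A minimal NumPy sketch follows; the periodic (np.roll) boundary handling is a simplification for brevity and is not the boundary setting of the paper's analysis.

```python
import numpy as np

def _div(p):
    """Discrete divergence: negative adjoint of the forward-difference gradient."""
    return (p[0] - np.roll(p[0], 1, axis=0)) + (p[1] - np.roll(p[1], 1, axis=1))

def tv_denoise(f, lam=0.15, n_iter=100, tau=0.125):
    """ROF denoising  min_u 0.5*||u - f||^2 + lam*TV(u)  via Chambolle's
    dual projection iteration; tau <= 1/8 guarantees convergence."""
    f = f.astype(float)
    p = np.zeros((2,) + f.shape)          # dual (vector) field
    for _ in range(n_iter):
        u = _div(p) - f / lam
        gx = np.roll(u, -1, axis=0) - u   # forward differences of u
        gy = np.roll(u, -1, axis=1) - u
        norm = np.sqrt(gx ** 2 + gy ** 2)
        p[0] = (p[0] + tau * gx) / (1.0 + tau * norm)
        p[1] = (p[1] + tau * gy) / (1.0 + tau * norm)
    return f - lam * _div(p)
```

Running this on the noisy indicator of a square shows the behaviour the paper proves: the result is piecewise constant away from a thin tube around the edges.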

  7. An approach for SLAR images denoising based on removing regions with low visual quality for oil spill detection

    NASA Astrophysics Data System (ADS)

    Alacid, Beatriz; Gil, Pablo

    2016-10-01

This paper presents an approach for removing SLAR (Side-Looking Airborne Radar) image regions with low visual quality, for use in automatic detection of oil slicks by an on-board system. The approach focuses on the detection and labelling of SLAR image regions degraded by poor acquisition from the two antennas located on either side of an aircraft. The method thereby distinguishes ineligible regions which are not suitable for use in the steps of an automatic oil slick detection process, because they have a high probability of causing false positives in the detection process. To do this, the method uses a hybrid approach based on edge-based segmentation, aided by Gabor filters for texture detection, combined with a search algorithm for significant grey-level changes that fits the boundary lines of each degraded region. Afterwards, a statistical analysis is done to label the set of pixels which should be used for recognition of oil slicks. The results show a successful detection of the ineligible regions and, consequently, how the image is partitioned into sub-regions of interest for detecting oil slicks, improving the accuracy and reliability of oil slick detection.
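The Gabor-filter texture step can be sketched with a small real-valued filter bank: a zero-mean Gabor kernel responds strongly to texture at its tuned frequency and orientation, and gives near-zero response on flat (low-quality) regions. The kernel sizes, frequencies, and thresholding strategy below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real (cosine) Gabor kernel tuned to spatial frequency `freq`
    (cycles/pixel) at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()   # zero-mean: flat regions give no response

def texture_energy(img, freqs=(0.1, 0.2), thetas=(0.0, np.pi / 2)):
    """Max response magnitude over a small Gabor bank; high values flag
    textured regions, low values the flat low-quality ones."""
    img = img.astype(float)
    e = np.zeros_like(img)
    for f in freqs:
        for t in thetas:
            e = np.maximum(e, np.abs(convolve(img, gabor_kernel(f, t))))
    return e
```

Thresholding this energy map would give a first cut at the eligible/ineligible partition described above.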

  8. HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER

    PubMed Central

    Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.

    2010-01-01

In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related by an equation of the form S = S0 exp(−b · sADC) (in the noise-free case). By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839

  9. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-07

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
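The first two steps the abstract describes, stacking projections into a 3D image, extracting overlapping 3D blocks, and grouping similar blocks, can be sketched as below. The block size, stride, and the toy k-means clustering are illustrative assumptions; the paper's fast clustering and its joint-sparse dictionary coding are not reproduced here.

```python
import numpy as np

def extract_blocks(stack, block=(4, 8, 8), step=4):
    """Extract overlapping 3D blocks from a stack of projections
    (projection index x rows x cols), flattened to vectors."""
    P, R, C = stack.shape
    bp, br, bc = block
    blocks, coords = [], []
    for p in range(0, P - bp + 1, step):
        for r in range(0, R - br + 1, step):
            for c in range(0, C - bc + 1, step):
                blocks.append(stack[p:p + bp, r:r + br, c:c + bc].ravel())
                coords.append((p, r, c))
    return np.array(blocks), coords

def cluster_blocks(blocks, n_clusters=4, n_iter=10, seed=0):
    """Toy k-means stand-in for the paper's fast clustering: all blocks in
    one cluster would then share a joint-sparse code in the dictionary."""
    rng = np.random.default_rng(seed)
    centers = blocks[rng.choice(len(blocks), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = ((blocks[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = blocks[labels == k].mean(0)
    return labels
```

In the full algorithm, each cluster is then denoised jointly against a learned dictionary and the overlapping blocks are averaged back into the sinogram.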

  10. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  11. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
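The parameter-sweep idea is a simple brute-force search: run the denoiser across a grid of settings and keep whichever minimises MSE against a noiseless reference. The CPU sketch below uses Gaussian smoothing as a stand-in for the GPU bilateral/NLM kernels; the parameter grid is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sweep_denoise_params(noisy, reference, sigmas):
    """Pick the smoothing strength minimising MSE vs. a noiseless
    reference volume, mirroring the parameter-sweep tuning step.

    Returns (best_sigma, best_mse, best_denoised_volume).
    """
    best = None
    for s in sigmas:
        den = gaussian_filter(noisy, sigma=s)
        mse = float(((den - reference) ** 2).mean())
        if best is None or mse < best[1]:
            best = (s, mse, den)
    return best
```

In practice each candidate kernel launch would run on the GPU, so the sweep cost is dominated by the single slowest configuration times the grid size.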

  12. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise-constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.

  13. Denoising of single-trial matrix representations using 2D nonlinear diffusion filtering.

    PubMed

    Mustaffa, I; Trenado, C; Schwerdtfeger, K; Strauss, D J

    2010-01-15

In this paper we present a novel application of denoising by means of nonlinear diffusion filters (NDFs). NDFs have been successfully applied in image processing and computer vision, particularly in image denoising, smoothing, segmentation, and restoration. We apply two types of NDF to the denoising of evoked responses in single trials in matrix form: the nonlinear isotropic and the anisotropic diffusion filter. We show that by means of NDFs we are able to denoise the evoked potentials, resulting in a better extraction of physiologically relevant morphological features over the ongoing experiment. This technique offers the advantage of translation invariance in comparison to other well-known methods, e.g., wavelet denoising based on maximally decimated filter banks, due to an adaptive diffusion feature. We compare the proposed technique with a wavelet denoising scheme that had been introduced before for evoked responses. It is concluded that NDFs represent a promising and useful approach to the denoising of event-related potentials.
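The nonlinear isotropic diffusion filter used in work like this is typically the Perona-Malik scheme, which smooths where gradients are small (noise) and diffuses slowly across large gradients (edges or sharp evoked-response features). The sketch below is a generic 2D illustration, not the authors' implementation; the conductance function, time step, and periodic (np.roll) boundary handling are assumptions made for brevity.

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=0.3, dt=0.2):
    """Nonlinear isotropic (Perona-Malik) diffusion on a 2D array.

    Each step adds the divergence of a gradient-dependent flux:
    the conductance g() -> 0 at strong edges, so they are preserved,
    while noise-scale fluctuations diffuse away. dt <= 0.25 for stability.
    """
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # conductance
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u    # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Applied to a single-trial matrix (trials x time), the same update smooths noise along both axes while keeping the evoked-response ridge intact.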