Science.gov

Sample records for 4d image denoising

  1. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, candidate noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters delivers an excellent denoising effect on realistic 3D CBCT images.

  2. Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter

    NASA Astrophysics Data System (ADS)

    Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.

    2013-02-01

Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.

  3. Iterative denoising of ghost imaging.

    PubMed

    Yao, Xu-Ri; Yu, Wen-Kai; Liu, Xue-Feng; Li, Long-Zhen; Li, Ming-Fei; Wu, Ling-An; Zhai, Guang-Jie

    2014-10-01

We present a new technique to denoise ghost imaging (GI), in which conventional intensity-correlation GI and an iteration process are combined to give an accurate estimate of the actual noise affecting image quality. The blurring influence of the speckle areas in the beam is reduced in the iteration by setting a threshold. It is shown that with an appropriate choice of threshold value, the quality of the iterative GI reconstructed image is much better than that of differential GI for the same number of measurements. This denoising method thus offers a very effective approach to promote the implementation of GI in real applications. PMID:25322001
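The conventional intensity-correlation reconstruction that the iteration refines can be sketched in a few lines of NumPy. This is only the baseline correlation estimate, not the paper's iterative thresholding step; the object, pattern count, and uniform speckle statistics below are illustrative assumptions.

```python
import numpy as np

def gi_reconstruct(patterns, bucket):
    # Conventional intensity-correlation GI:
    # G(x) = <B * I(x)> - <B> * <I(x)> over the measurement ensemble.
    return ((bucket[:, None, None] * patterns).mean(axis=0)
            - bucket.mean() * patterns.mean(axis=0))

rng = np.random.default_rng(0)
T = np.zeros((8, 8))
T[2:6, 2:6] = 1.0                          # hypothetical transmissive object
patterns = rng.random((5000, 8, 8))        # random speckle patterns I
bucket = (patterns * T).sum(axis=(1, 2))   # single-pixel "bucket" signal B
G = gi_reconstruct(patterns, bucket)       # correlates strongly inside the object
```

The paper's method would then iterate on such an estimate, using a threshold to suppress the blurring contribution of low-correlation speckle areas.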

  4. Green Channel Guiding Denoising on Bayer Image

    PubMed Central

    Zhang, Maojun

    2014-01-01

Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer pattern is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue channels. Therefore, the green channel can be used to guide denoising; this kind of guidance integrates the different color channels. Experiments on both real and simulated Bayer images indicate that the green channel acts well as the guidance signal, and the proposed method is competitive with other popular filter-kernel denoising methods. PMID:24741370
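A single-channel guided filter in the style of He et al. can be sketched in NumPy to show how a cleaner, better-sampled guide steers the smoothing of another channel. This is an illustrative stand-in: the paper works on the interlaced Bayer mosaic itself, whereas this sketch filters one full-resolution channel using another as the guide, and the radius and regularization values are placeholders.

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1)x(2r+1) window, edge-padded, via an integral image.
    pad = np.pad(img, r, mode='edge')
    s = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    s[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * r + 1
    h, w = img.shape
    return (s[k:k+h, k:k+w] - s[:h, k:k+w] - s[k:k+h, :w] + s[:h, :w]) / k**2

def guided_filter(guide, src, r=2, eps=1e-3):
    # Local linear model q = a*guide + b, fitted per window (He et al.).
    mean_I, mean_p = box_mean(guide, r), box_mean(src, r)
    cov_Ip = box_mean(guide * src, r) - mean_I * mean_p
    var_I = box_mean(guide * guide, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # eps controls the degree of smoothing
    b = mean_p - a * mean_I
    return box_mean(a, r) * guide + box_mean(b, r)
```

A noisy red channel would be passed as `src` with the (interpolated) green channel as `guide`, so that edges present in the higher-SNR green channel survive the smoothing.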

  5. Vector anisotropic filter for multispectral image denoising

    NASA Astrophysics Data System (ADS)

    Ben Said, Ahmed; Foufou, Sebti; Hadjidj, Rachid

    2015-04-01

In this paper, we propose an approach that extends anisotropic Gaussian filtering to multispectral image denoising. We study the case of images corrupted with additive Gaussian noise and use the sparse matrix transform for noise covariance matrix estimation. Specifically, we show that if an image has low local variability, we can assume that in the noisy image the local variability originates from the noise variance only. We apply the proposed approach to the denoising of multispectral images corrupted by noise and compare it with existing methods. Results demonstrate an improvement in denoising performance.

  6. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. This method reduces additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE, and PSNR than common medical image denoising methods. Experimental results on denoising a sample magnetic resonance image show that SNR, PSNR, and RMSE are improved by 19, 9, and 21 percent, respectively. PMID:22606674

  7. Denoising of 4D Cardiac Micro-CT Data Using Median-Centric Bilateral Filtration

    PubMed Central

    Clark, D.; Johnson, G.A.; Badea, C.T.

    2012-01-01

Bilateral filtration has proven an effective tool for denoising CT data. The classic filter utilizes Gaussian domain and range weighting functions in 2D. More recently, other distributions have yielded more accurate results in specific applications, and the bilateral filtration framework has been extended to higher dimensions. In this study, brute-force optimization is employed to evaluate the use of several alternative distributions for both domain and range weighting: Andrew's Sine Wave, El Fallah Ford, Gaussian, Flat, Lorentzian, Huber's Minimax, Tukey's Bi-weight, and Cosine. Two variations on the classic bilateral filter which use median filtration to reduce bias in range weights are also investigated: median-centric and hybrid bilateral filtration. Using the 4D MOBY mouse phantom reconstructed with noise (stdev. ~ 65 HU), hybrid bilateral filtration, a combination of the classic and median-centric filters, with Flat domain and range weighting is shown to provide optimal denoising results (PSNRs: 31.69, classic; 31.58, median-centric; 32.25, hybrid). To validate these phantom studies, the optimal filters are also applied to in vivo, 4D cardiac micro-CT data acquired in the mouse. In a constant region of the left ventricle, hybrid bilateral filtration with Flat domain and range weighting is shown to provide optimal smoothing (stdev: original, 72.2 HU; classic, 20.3 HU; median-centric, 24.1 HU; hybrid, 15.9 HU). While the optimal results were obtained using 4D filtration, the 3D hybrid filter is ultimately recommended for denoising 4D cardiac micro-CT data because it is more computationally tractable and less prone to artifacts (MOBY PSNR: 32.05; left ventricle stdev: 20.5 HU). PMID:24386540
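The classic bilateral filter with a swappable range kernel can be sketched as follows. The 'flat' option is one plausible reading of the Flat weighting evaluated above (a hard cutoff at the range parameter), not necessarily the authors' exact definition, and all parameter values are illustrative.

```python
import numpy as np

def bilateral(img, radius=3, sigma_d=2.0, sigma_r=30.0, range_kernel='gaussian'):
    # Brute-force 2D bilateral filter: each output pixel is a weighted
    # average over a window; weight = domain weight (spatial distance)
    # times range weight (intensity difference).
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    acc = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius+dy:radius+dy+h, radius+dx:radius+dx+w]
            dom = np.exp(-(dy*dy + dx*dx) / (2 * sigma_d**2))
            diff = shifted - img
            if range_kernel == 'flat':
                rng_w = (np.abs(diff) <= sigma_r).astype(float)  # hard cutoff
            else:
                rng_w = np.exp(-diff**2 / (2 * sigma_r**2))      # classic Gaussian
            wgt = dom * rng_w
            acc += wgt * shifted
            norm += wgt
    return acc / norm
```

The median-centric variant would compute the range weights against a median-filtered copy of `img` rather than the noisy pixel values themselves, reducing the bias that noisy center pixels introduce into the range weights.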

  8. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity. PMID:26316129

  9. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics, and medicine. There has been a dramatic increase in the variety, availability, and resolution of medical imaging devices over the last half century. Proper medical imaging requires highly trained technicians and clinicians to correctly extract clinically pertinent information from medical data. To fulfil this need, artificial systems must be designed to analyze medical data sets either partially or even fully automatically. For this purpose there has been extensive research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove them to obtain reliable results. Of the many methods for denoising images, two, wavelets and shearlets, are applied in this paper to mammography images. Comparing these two methods, shearlets give better results for denoising such data.

  10. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and Glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast is reduced by speckle noise, obfuscating small, low intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground truth, noise free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that performance of our method is comparable with state of the art denoising methods while outperforming them in preserving the critical clinically relevant structures.

  11. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705

  12. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

We introduce and compare ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve their noise reduction. Optimal dictionaries, sparse representations, and appropriate shapes of the transform's support are also considered for image denoising. The various filters are compared by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is also evaluated.

  13. Astronomical image denoising using dictionary learning

    NASA Astrophysics Data System (ADS)

    Beckouche, S.; Starck, J. L.; Fadili, J.

    2013-08-01

Astronomical images suffer from the constant presence of multiple defects that are consequences of the atmospheric conditions and of the intrinsic properties of the acquisition equipment. One of the most frequent defects in astronomical imaging is additive noise, which makes a denoising step mandatory before processing the data. During the last decade, a particular modeling scheme, based on sparse representations, has drawn the attention of an ever-growing community of researchers. Sparse representations offer a promising framework for many image and signal processing tasks, especially denoising and restoration applications. At first, harmonics, wavelets, similar bases, and overcomplete representations were considered as candidate domains in which to seek the sparsest representation. A new generation of algorithms, based on data-driven dictionaries, evolved rapidly and now competes with off-the-shelf fixed dictionaries. Whereas designing a dictionary otherwise relies on guessing the representative elementary forms and functions, the framework of dictionary learning offers the possibility of constructing the dictionary from the data themselves, which provides a more flexible setup for sparse modeling and allows more sophisticated dictionaries to be built. In this paper, we introduce the centered dictionary learning (CDL) method and study its performance for astronomical image denoising. We show how CDL outperforms wavelet and classic dictionary learning denoising techniques on astronomical images, and we compare the effects of these different algorithms on the photometry of the denoised images. The current version of the code is available only at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/556/A132

  14. Cardiac 4D Ultrasound Imaging

    NASA Astrophysics Data System (ADS)

    D'hooge, Jan

Volumetric cardiac ultrasound imaging has steadily evolved over the last 20 years from an electrocardiography (ECG)-gated imaging technique to a true real-time imaging modality. Although the clinical use of echocardiography is still to a large extent based on conventional 2D ultrasound imaging, it can be anticipated that further developments in the image quality, data visualization, interaction, and image quantification of three-dimensional cardiac ultrasound will gradually make volumetric ultrasound the modality of choice. In this chapter, an overview is given of the technological developments that allow for volumetric imaging of the beating heart by ultrasound.

  15. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. A new threshold function is also proposed, determined by combining the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive, as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF, and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes the noise from the noisy image significantly and provides better visual quality.
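The soft-thresholding operation that such methods build on is compact; below it is applied to the detail coefficients of a one-level Haar transform. The fixed threshold `t` is a placeholder for the paper's adaptive, subband-dependent threshold.

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Soft-thresholding: shrink coefficient magnitudes toward zero by t.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_denoise_1d(x, t):
    # One-level orthonormal Haar transform (even-length signal),
    # soft-threshold the detail band, then invert the transform.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, t)
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

A non-adaptive baseline such as VisuShrink would set the threshold to the universal value sigma*sqrt(2*ln(n)) for n coefficients and noise level sigma; the method above instead adapts the threshold per subband.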

  16. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  17. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  18. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed only by their surroundings. NLM has been demonstrated to outperform state-of-the-art denoising techniques when applied to images in the visible spectrum, and it is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. The main drawback of NLM is the large computational time required by the search for similar patches, and several techniques have been developed in recent years to reduce this burden. Here we present a new technique designed to reduce the computational cost while sustaining the optimal filtering results of NLM. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.
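Baseline NLM can be written compactly by looping over the shifts of a search window and computing patch distances with a box sum. This is the standard algorithm whose patch search MRS-NLM accelerates (the speedup itself is not reproduced here), and the patch radius, search radius, and smoothing parameter `h` are illustrative.

```python
import numpy as np

def box_sum(img, r):
    # Sum over a (2r+1)x(2r+1) window via an integral image, edge-padded.
    pad = np.pad(img, r, mode='edge')
    s = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    s[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * r + 1
    h, w = img.shape
    return s[k:k+h, k:k+w] - s[:h, k:k+w] - s[k:k+h, :w] + s[:h, :w]

def nlm(img, patch_r=1, search_r=3, h=10.0):
    # Non-local means: each pixel becomes a weighted average of pixels
    # whose surrounding patches are similar; the search is restricted
    # to a (2*search_r+1)^2 window around each pixel.
    img = img.astype(float)
    H, W = img.shape
    pad = np.pad(img, search_r, mode='reflect')
    acc = np.zeros((H, W))
    norm = np.zeros((H, W))
    for dy in range(-search_r, search_r + 1):
        for dx in range(-search_r, search_r + 1):
            shifted = pad[search_r+dy:search_r+dy+H, search_r+dx:search_r+dx+W]
            # patch-wise squared distance = box sum of pointwise distances
            d2 = box_sum((img - shifted) ** 2, patch_r)
            w = np.exp(-d2 / (h * h))
            acc += w * shifted
            norm += w
    return acc / norm
```

The double loop over shifts is exactly the cost that search-space reductions such as MRS-NLM aim to cut down.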

  19. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  20. Musculoskeletal ultrasound image denoising using Daubechies wavelets

    NASA Astrophysics Data System (ADS)

    Gupta, Rishu; Elamvazuthi, I.; Vasant, P.

    2012-11-01

    Among various existing medical imaging modalities Ultrasound is providing promising future because of its ease availability and use of non-ionizing radiations. In this paper we have attempted to denoise ultrasound image using daubechies wavelet and analyze the results with peak signal to noise ratio and coefficient of correlation as performance measurement index. The different daubechies from 1 to 6 is used on four different ultrasound bone fracture images with three different levels from 1 to 3. The images for visual inspection and PSNR, Coefficient of correlation values are graphically shown for quantitaive analysis of resultant images.

  1. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods-JPEG, JPEG2000, and HEVC. PMID:27214878

  2. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.
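The Plug-and-Play ADMM structure the paper relies on can be sketched for a toy problem. In this sketch, assumed for illustration only, a diagonal mask stands in for the paper's linearized compression-decompression operator, and a simple box-mean smoother stands in for a state-of-the-art Gaussian denoiser.

```python
import numpy as np

def box_mean(img, r=1):
    # (2r+1)x(2r+1) mean filter, edge-padded: a toy stand-in denoiser.
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy+img.shape[0], dx:dx+img.shape[1]]
    return out / k**2

def pnp_admm(y, mask, denoise=box_mean, rho=1.0, iters=30):
    # Plug-and-Play ADMM for a diagonal observation model y = mask*x + noise.
    # The prior enters only through the plugged-in `denoise` routine.
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        # data-fidelity step: closed form for a diagonal operator
        x = (mask * y + rho * (v - u)) / (mask + rho)
        # prior step: any off-the-shelf Gaussian denoiser can be plugged in
        v = denoise(x + u)
        u = u + x - v
    return v
```

Each iteration alternates a closed-form data-fidelity update with a denoising step, which is what turns the inverse problem into the sequence of Gaussian denoising steps described above.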

  3. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.

  4. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

Image denoising is an important preprocessing method and one of the frontier topics in computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference. To reconstruct a high-quality image of the target, the high-frequency signal of the image must be restored; but since noise also occupies the high frequencies, it is amplified in the reconstruction process. To avoid this, incorporating image denoising into the reconstruction process is a feasible solution. This paper mainly studies the principles of four classic denoising algorithms: TV, BLS-GSM, NLM, and BM3D. We use simulated data to analyze the performance of the four algorithms. Experiments demonstrate that all four algorithms can remove the noise, and that BM3D not only achieves high denoising quality but also has the highest efficiency.

  5. Robust 4D Flow Denoising Using Divergence-Free Wavelet Transform

    PubMed Central

    Ong, Frank; Uecker, Martin; Tariq, Umar; Hsiao, Albert; Alley, Marcus T; Vasanawala, Shreyas S.; Lustig, Michael

    2014-01-01

    Purpose To investigate four-dimensional flow denoising using the divergence-free wavelet (DFW) transform and compare its performance with existing techniques. Theory and Methods DFW is a vector-wavelet that provides a sparse representation of flow in a generally divergence-free field and can be used to enforce “soft” divergence-free conditions when discretization and partial voluming result in numerical nondivergence-free components. Efficient denoising is achieved by appropriate shrinkage of divergence-free wavelet and nondivergence-free coefficients. SureShrink and cycle spinning are investigated to further improve denoising performance. Results DFW denoising was compared with existing methods on simulated and phantom data and was shown to yield better noise reduction overall while being robust to segmentation errors. The processing was applied to in vivo data and was demonstrated to improve visualization while preserving quantifications of flow data. Conclusion DFW denoising of four-dimensional flow data was shown to reduce noise levels in flow data both quantitatively and visually. PMID:24549830

  6. Denoising-enhancing images on elastic manifolds.

    PubMed

    Ratner, Vadim; Zeevi, Yehoshua Y

    2011-08-01

    The conflicting demands for simultaneous low-pass and high-pass processing, required in image denoising and enhancement, still present an outstanding challenge, although a great deal of progress has been made by means of adaptive diffusion-type algorithms. To further advance such processing methods and algorithms, we introduce a family of second-order (in time) partial differential equations. These equations describe the motion of a thin elastic sheet in a damping environment. They are also derived by a variational approach in the context of image processing. The new operator enables better edge preservation in denoising applications by offering an adaptive lowpass filter, which preserves high-frequency components in the pass-band better than the adaptive diffusion filter, while offering slower error propagation across edges. We explore the action of this powerful operator in the context of image processing and exploit for this purpose the wealth of knowledge accumulated in physics and mathematics about the action and behavior of this operator. The resulting methods are further generalized for color and/or texture image processing, by embedding images in multidimensional manifolds. A specific application of the proposed new approach to superresolution is outlined. PMID:21342847

  7. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using the patch-based statistical characteristics is a flexible method to improve the denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths to restore specific image contents. Combining different statistical characteristics to use their strengths together may have the potential to improve denoising results. This work proposes a method combining statistical characteristics to adaptively select statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  8. 4D image reconstruction for emission tomography

    NASA Astrophysics Data System (ADS)

    Reader, Andrew J.; Verhaeghe, Jeroen

    2014-11-01

    An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography (PET) and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D methods.

  9. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and present a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain is visualized in 3D through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  10. Denoising Two-Photon Calcium Imaging Data

    PubMed Central

    Malik, Wasim Q.; Schummers, James; Sur, Mriganka; Brown, Emery N.

    2011-01-01

    Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal-plus-colored-noise model in which the signal is represented as a harmonic regression and the correlated noise is represented as an autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates the stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and the signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal-plus-correlated-noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems to which correlated noise models apply. PMID:21687727
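The signal-plus-colored-noise idea can be sketched in a much simplified form: fit a harmonic regression at the stimulus frequency by ordinary least squares and treat the residual as correlated noise, estimating a crude AR(1) coefficient from its lag-1 autocorrelation. This replaces the paper's cyclic descent with Burg estimation and AIC order selection by a single pass with fixed, assumed orders.

```python
import numpy as np

def harmonic_fit(y, f, fs, n_harm=2):
    """Least-squares harmonic regression at stimulus frequency f (Hz) with
    sampling rate fs; returns the fitted response, the residual 'noise',
    and a crude AR(1) coefficient from the residual's lag-1 autocorrelation."""
    t = np.arange(len(y)) / fs
    cols = [np.ones_like(t)]                       # intercept
    for k in range(1, n_harm + 1):                 # harmonics of f
        cols.append(np.cos(2 * np.pi * k * f * t))
        cols.append(np.sin(2 * np.pi * k * f * t))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fit = X @ beta                                 # stimulus-evoked part
    resid = y - fit                                # background + noise
    a1 = np.dot(resid[1:], resid[:-1]) / np.dot(resid, resid)
    return fit, resid, a1
```

In the paper the harmonic and AR fits are iterated against each other (cyclic descent) with a proper weighted least-squares step; here one OLS pass suffices to show the decomposition.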

  11. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise degrades the visual quality of ultrasound images. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first fold reduces speckle well but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), to maximize the contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate that visual quality improves markedly with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  12. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method for denoising such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several other representative denoising methods in terms of both objective measures and visual evaluation.
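The "exclude deviating neighbors, then average the rest" framework can be sketched for a single scalar channel. This is an illustrative reduction: the paper's method works on luminance/chrominance channels with a vector median and brightness-controlled strength, and the MAD-based deviation test and its `k` factor here are assumptions, not the paper's rule.

```python
import numpy as np

def median_exclusion_filter(img, radius=1, k=2.0):
    """Denoise each pixel by averaging only those neighbours that do not
    deviate from the window median by more than k * MAD (median absolute
    deviation). Scalar sketch of the common filtering framework above."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode='reflect')
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-12
            keep = win[np.abs(win - med) <= k * mad]   # drop outliers
            out[i, j] = keep.mean()
    return out
```

An isolated impulse is rejected by every window it appears in, so it is removed without blurring the surrounding values.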

  13. Dual-domain denoising in three dimensional magnetic resonance imaging

    PubMed Central

    Peng, Jing; Zhou, Jiliu; Wu, Xi

    2016-01-01

    Denoising is a crucial preprocessing procedure for three dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains. However, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from the spatial and transform domains. In the present study, the DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust accurate noise estimation was introduced for iterative filtering, which is simple and beneficial for computation. In addition, the proposed method was compared quantitatively and qualitatively with existing methods for synthetic and in vivo MRI datasets. The results of the present study suggested that the novel DDID algorithm performed well and provided competitive results, as compared with existing MRI denoising filters. PMID:27446257

  14. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet-domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm; (2) the direction statistic represents the difference between subbands and is introduced into threshold-function-based contourlet-domain denoising approaches in the form of weights to obtain the novel framework. The proposed framework is utilized to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and on Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597

  15. Controlled Source 4D Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Morency, C.; Tromp, J.

    2009-12-01

    Earth's material properties may change after significant tectonic events, e.g., volcanic eruptions, earthquake ruptures, landslides, and hydrocarbon migration. While many studies focus on how to interpret observations in terms of changes in wavespeeds and attenuation, the oil industry is more interested in how we can identify and locate such temporal changes using seismic waves generated by controlled sources. 4D seismic analysis is indeed an important tool to monitor fluid movement in hydrocarbon reservoirs during production, improving field management. Classic 4D seismic imaging involves comparing images obtained from two subsequent seismic surveys. Differences between the two images tell us where temporal changes occurred. However, when the temporal changes are small, it may be quite hard to reliably identify and characterize the differences between the two images. We propose to back-project residual seismograms between two subsequent surveys using adjoint methods, which results in images highlighting temporal changes. We use the SEG/EAGE salt dome model to illustrate our approach. In two subsequent surveys, the wavespeeds and density within a target region are changed, mimicking possible fluid migration. Due to changes in material properties induced by fluid migration, seismograms recorded in the two surveys differ. By back propagating these residuals, the adjoint images identify the location of the affected region. An important issue involves the nature of the model. For instance, are we characterizing only changes in wavespeed, or do we also consider density and attenuation? How many model parameters characterize the model, e.g., is our model isotropic or anisotropic? Is acoustic wave propagation accurate enough, or do we need to consider elastic or poroelastic effects? We will investigate how imaging strategies based upon acoustic, elastic and poroelastic simulations affect our imaging capabilities.

  16. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limits on the patient's radiation exposure. The development of methods for improving the CNR is therefore valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curves. GPR is superior to the comparable techniques used in this study.
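The temporal smoothing this record describes can be sketched per voxel with the closed-form GP posterior mean. The squared-exponential kernel and the hyperparameter values below are illustrative assumptions, not the paper's fitted choices.

```python
import numpy as np

def gpr_smooth(t, y, length=2.0, sig=1.0, noise=0.5):
    """Posterior mean of a GP with a squared-exponential kernel, evaluated
    at the observation times: a minimal sketch of temporal GPR denoising
    of a single voxel's time-concentration curve."""
    d2 = (t[:, None] - t[None, :]) ** 2
    K = sig ** 2 * np.exp(-0.5 * d2 / length ** 2)    # prior covariance
    # posterior mean at the training points: K (K + sigma_n^2 I)^-1 y
    alpha = np.linalg.solve(K + noise ** 2 * np.eye(len(t)), y)
    return K @ alpha
```

Because the kernel correlates nearby time points, the posterior mean suppresses frame-to-frame oscillations while keeping the slow contrast-uptake shape, which is what stabilizes the baseline of the time-concentration curve.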

  17. Computed tomography perfusion imaging denoising using Gaussian process regression.

    PubMed

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-21

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limits on the patient's radiation exposure. The development of methods for improving the CNR is therefore valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curves. GPR is superior to the comparable techniques used in this study. PMID:22617159

  18. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, much research has built on the nonlocal means (NLM) filter, with many variations and improvements to the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Two different smoothing thresholds are then utilized to denoise each region, and the NLM filter is finally applied. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.

  19. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block-matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  20. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and the influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that the DT-CWT outperforms the 2-D DWT given an appropriate selection of the threshold level.
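The real-DWT baseline in this comparison can be sketched with a hand-rolled one-level 2-D Haar transform and soft thresholding of the detail subbands. This is a minimal illustration, not the study's code: the Haar basis, single decomposition level, and threshold value are assumptions, and the DT-CWT side (which adds shift invariance and directional selectivity) is not shown.

```python
import numpy as np

def haar_soft_denoise(img, thr):
    """One-level 2-D Haar DWT, soft-threshold the LH/HL/HH detail
    subbands, then invert. Expects even image dimensions."""
    x = img.astype(float)
    a = (x[0::2] + x[1::2]) / 2.0              # row averages
    d = (x[0::2] - x[1::2]) / 2.0              # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # detail subbands
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    # inverse transform: undo the column step, then the row step
    a = np.empty_like(x[0::2])
    d = np.empty_like(x[0::2])
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty_like(x)
    out[0::2], out[1::2] = a + d, a - d
    return out
```

With `thr=0` the transform round-trips exactly; with a threshold a few times the detail-coefficient noise level, most noise energy in the detail subbands is removed.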

  1. Blind source separation based x-ray image denoising from an image sequence

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are modeled as different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order-statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image quality improves as more frames are included in the x-ray image sequence, but at greater computational cost, so the number of frames should be chosen as a trade-off between denoising performance and runtime.
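The second-order-statistics SVD idea can be sketched by stacking each frame as one row of a mixture matrix and keeping the dominant singular component as the stable signal, with plain multi-frame averaging alongside for comparison. This rank-1 formulation is a simplifying assumption; the paper's fixed-point ICA variant is not shown.

```python
import numpy as np

def svd_denoise(frames):
    """Treat each frame as one observed mixture row and keep the dominant
    singular component as the stable image signal (rank-1 sketch of the
    second-order-statistics SVD approach)."""
    F = np.stack([f.ravel() for f in frames])        # (n_frames, n_pixels)
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])          # dominant component
    return rank1.mean(axis=0).reshape(frames[0].shape)

def average_denoise(frames):
    """Baseline: plain multi-frame averaging."""
    return np.mean(frames, axis=0)
```

Both estimates improve on any single frame; the SVD route additionally discards noise energy that lies outside the dominant component, at the cost of the decomposition's runtime, which is the trade-off the abstract describes.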

  2. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2011-02-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics, we apply the Radon transform to the original MR images and use a Gaussian noise model to process the MR sinogram image. A translation-invariant wavelet transform is employed to decompose the MR 'sinogram' into multiple scales in order to effectively denoise the images. Based on the nature of Rician noise, we estimate the noise variance at different scales. We then apply the inverse Radon transform to the denoised sinogram in order to reconstruct the original MR images. Phantom images, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over traditional methods. Our method reduces Rician noise while preserving key image details and features. The wavelet denoising method can have wide applications in MRI as well as other imaging modalities.

  3. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
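The bilateral filter studied above can be sketched in brute-force form: a spatial Gaussian weight multiplied by a range (intensity-difference) Gaussian. The sigma values and stencil radius below are illustrative; the study's finding was that small stencils like this already gave the best structural-similarity improvement, and the GPU version differs only in parallelization and memory tiling.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Brute-force bilateral filter: each output pixel is a spatially and
    photometrically weighted average of its (2r+1)x(2r+1) neighbourhood."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    pad = np.pad(img.astype(float), radius, mode='reflect')
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            c = pad[i + radius, j + radius]                   # center value
            wgt = g_s * np.exp(-(win - c) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

The range term is what distinguishes this from plain Gaussian blurring: pixels across a strong edge receive near-zero weight, so edges survive while in-region noise is averaged away. `sigma_r` is one of the scaling parameters the study warns can ruin denoising quality if chosen inappropriately.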

  4. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build a probability density function for the Laplace-Impact model that fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior. PMID:19938519

  5. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. Image quality can be improved over the original algorithm by ignoring the contributions from dissimilar windows: even though their individual weights are very small, the new estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR values for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
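The preclassification idea can be sketched as NLM where candidate windows failing a cheap moment test get weight zero before the expensive patch distance is computed. This simplified version tests only the first moment (the mean) with a fixed tolerance; the paper uses the first three moments and derives the thresholds from the noise model.

```python
import numpy as np

def nlm_preclassified(img, patch=3, search=7, h=0.4, tol=0.2):
    """Non-local means in which candidate windows whose mean differs from
    the reference window's mean by more than `tol` are skipped (weight 0),
    a first-moment-only sketch of the preclassification described above."""
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    win = pad[ci + di - p:ci + di + p + 1,
                              cj + dj - p:cj + dj + p + 1]
                    if abs(win.mean() - ref.mean()) > tol:
                        continue                 # dissimilar: weight 0
                    w = np.exp(-np.mean((win - ref) ** 2) / h ** 2)
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```

Skipping dissimilar windows both removes their biasing contribution and avoids their weight computations, which is exactly the twin benefit (quality and speed) the abstract claims.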

  6. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issues in the nonlocal means method are how to select similar patches and how to design their weights. There are two main contributions in this paper. The first is that we use two images to denoise each pixel, both noisy images having the same noise standard deviation: instead of using only one image, we calculate the weight from two noisy images. After the first denoising pass, we obtain a pre-denoised image and a residual image. The second contribution is combining the nonlocal similarity between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  7. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches, which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while the noise is not. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.

  8. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques, together with dual-tree complex, real, and double-density wavelet transform denoising methods, were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  9. Analysis and selection of the methods for fruit image denoise

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ma, Benxue; Rao, Xiuqin; Ying, Yibin

    2007-09-01

    Applications of machine vision in the automated inspection and sorting of fruits have been widely studied. Preprocessing of a fruit image is needed when it contains much noise. The literature offers many image denoising methods that can achieve good results, but selecting among them is a difficult problem. In this research, total variation (TV) and a shock filter with a diffusion function were introduced and, together with six other commonly used denoising methods, were tested on different noise types. The results demonstrated that when the noise was Gaussian or random and the SNR of the original image was over 8, the TV method achieved the best restoration; when the SNR of the original image was under 8, the Wiener filter gave the best restoration; and when the noise was salt-and-pepper, the median filter achieved the best restoration.
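The salt-and-pepper finding is easy to illustrate: a plain moving-window median removes isolated impulses exactly, which mean-type filters cannot. This is a generic sketch of the median filter named in the comparison, not the study's implementation; the window radius is an assumption.

```python
import numpy as np

def median_filter(img, radius=1):
    """Moving-window median: each output pixel is the median of its
    (2r+1)x(2r+1) neighbourhood, which rejects isolated impulses."""
    pad = np.pad(img.astype(float), radius, mode='edge')
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(pad[i:i + 2 * radius + 1,
                                      j:j + 2 * radius + 1])
    return out
```

As long as impulses are sparse enough that each window holds a majority of uncorrupted pixels, the median restores the underlying value exactly, which is why it wins for salt-and-pepper noise while TV and Wiener filtering suit Gaussian noise.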

  10. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  11. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data has been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.

  12. 4-D display of satellite cloud images

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.

    1987-01-01

    A technique has been developed to display GOES satellite cloud images in perspective over a topographical map. Cloud heights are estimated using temperatures from an infrared (IR) satellite image, surface temperature observations, and a climatological model of vertical temperature profiles. Cloud levels are discriminated from each other and from the ground using a pattern recognition algorithm based on the brightness variance technique of Coakley and Bretherton. The cloud regions found by the pattern recognizer are rendered in three-dimensional perspective over a topographical map by an efficient remap of the visible image. The visible shades are mixed with an artificial shade based on the geometry of the cloud-top surface, in order to enhance the texture of the cloud top.

  13. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal/blurring ratio.
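
A minimal sketch of an undecimated ("a trous"-style) denoising scheme, assuming a simple averaging smoother and soft thresholding; this is a generic illustration, not the authors' three schemes:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def atrous_denoise(img, levels=3, k=3.0, sigma=0.1):
    """Undecimated scheme: smooth at growing scales without downsampling,
    soft-threshold each detail band, then sum the bands back up."""
    approx = img
    recon = 0.0
    for j in range(levels):
        smoothed = uniform_filter(approx, size=2 ** (j + 1) + 1)
        detail = approx - smoothed            # undecimated detail band
        t = k * sigma / (2 ** j)              # weaker threshold at coarser scales
        detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)
        recon = recon + detail
        approx = smoothed
    return recon + approx                     # without thresholding this is exact

denoised = atrous_denoise(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_noisy, mse_denoised)
```

Because no decimation occurs, the transform is shift-invariant, which is the property that typically reduces thresholding artifacts relative to the decimated transform.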

  14. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902
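
The mixed Poisson-Gaussian noise model discussed here can also be handled by variance stabilization; the sketch below uses the generalized Anscombe transform, a classical alternative to PURE-LET shown only to illustrate the noise model (the intensity and noise parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Mixed Poisson-Gaussian model: z = alpha * Poisson(y) + N(0, sigma^2)
alpha, sigma = 1.0, 2.0
y = np.full(100_000, 40.0)                     # constant true intensity
z = alpha * rng.poisson(y) + rng.normal(0.0, sigma, y.shape)

# Generalized Anscombe transform:
#   f(z) = (2/alpha) * sqrt(alpha*z + (3/8)*alpha^2 + sigma^2)
# approximately stabilizes the variance to 1, so that a plain Gaussian
# denoiser can then be applied in the transformed domain.
f = (2.0 / alpha) * np.sqrt(np.maximum(alpha * z + 0.375 * alpha**2 + sigma**2, 0.0))

print(np.var(z), np.var(f))    # raw variance ~ alpha^2*y + sigma^2; stabilized ~ 1
```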

  15. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
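
The Rician bias the authors address can be reproduced with a toy simulation. The correction used below is the classical second-moment formula sqrt(E[M^2] - 2*sigma^2), shown for illustration; it is not the paper's regression-and-Monte-Carlo formula:

```python
import numpy as np

rng = np.random.default_rng(3)

A, sigma, n = 30.0, 10.0, 200_000      # true intensity, noise std, sample count
n1 = rng.normal(0.0, sigma, n)
n2 = rng.normal(0.0, sigma, n)

# Rician observation: magnitude of the complex signal (A + n1) + i*n2.
M = np.sqrt((A + n1) ** 2 + n2 ** 2)

# The naive average is biased upward. Since E[M^2] = A^2 + 2*sigma^2, the
# classical moment-based correction subtracts 2*sigma^2 before the square root.
naive = float(M.mean())
corrected = float(np.sqrt(max(np.mean(M ** 2) - 2.0 * sigma ** 2, 0.0)))

print(naive, corrected)                 # naive overestimates A = 30
```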

  16. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined to achieve the final denoised image based on Laplacian pyramids. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details.

  17. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B ultrasound, a kind of ultrasonic imaging, has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with diagnostic accuracy. Therefore, constructing a method that can eliminate speckle noise effectively while preserving image details is the target of current ultrasonic image de-noising research. This paper is intended to remove the inherent speckle noise of B ultrasound images. The novel algorithm proposed is based on both wavelet transformation and data fusion of B ultrasound images, and achieves a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than other algorithms. The method can effectively remove speckle noise from B ultrasound images while preserving details and edge information, producing better visual effects.

  18. Real-time image denoising algorithm in teleradiology systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pradeep Kumar; Kanhirodan, Rajan

    2006-02-01

    Denoising of medical images in wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider the progressive transmission in a teleradiology system. The transmitted images are corrupted mainly due to noisy channels. In this paper, we present a new real time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits the fundamental property of wavelet transform - its ability to analyze the image at different resolution levels and the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration on a limited number of bit-planes subject to the optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The proposed scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in image and wavelet transform allows the restoration strategy to adapt itself according to directional features of edges. The proposed approach shows promising results when compared with unrestored case, in context of error reduction. It also has capability to adapt to situations where noise level in the image varies and with the changing requirements of medical-experts. The applicability of the proposed approach has implications in restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.

  19. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: Low contrast between tissues, low spatial resolution and low signal to noise ratio. This Thesis studies the enhancement of these images, in particular denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non local means filter (NLM), operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of the imaging equipments in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  20. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Abstract. Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that could be used to study current pathways inside the tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown the residual noise distribution of the phase to be Gaussian-like and the noise in CDI images approximated as a Gaussian. This finding matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied on current density (J). The minimum gain in noise power by BM3D applied to J compared with the next best technique in the analysis was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  1. 4D microvascular imaging based on ultrafast Doppler tomography.

    PubMed

    Demené, Charlie; Tiran, Elodie; Sieu, Lim-Anna; Bergel, Antoine; Gennisson, Jean Luc; Pernot, Mathieu; Deffieux, Thomas; Cohen, Ivan; Tanter, Mickael

    2016-02-15

    4D ultrasound microvascular imaging was demonstrated by applying ultrafast Doppler tomography (UFD-T) to the imaging of brain hemodynamics in rodents. In vivo real-time imaging of the rat brain was performed using ultrasonic plane wave transmissions at very high frame rates (18,000 frames per second). Such ultrafast frame rates allow for highly sensitive and wide-field-of-view 2D Doppler imaging of blood vessels far beyond conventional ultrasonography. Voxel anisotropy (100 μm × 100 μm × 500 μm) was corrected for by using a tomographic approach, which consisted of ultrafast acquisitions repeated for different imaging plane orientations over multiple cardiac cycles. UFD-T allows for 4D dynamic microvascular imaging of deep-seated vasculature (up to 20 mm) with a very high 4D resolution (respectively 100 μm × 100 μm × 100 μm and 10 ms) and high sensitivity to flow in small vessels (>1 mm/s) for a whole-brain imaging technique without requiring any contrast agent. 4D ultrasound microvascular imaging in vivo could become a valuable tool for the study of brain hemodynamics, such as cerebral flow autoregulation or vascular remodeling after ischemic stroke recovery, and, more generally, tumor vasculature response to therapeutic treatment. PMID:26555279

  2. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
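
A minimal total-variation denoising sketch in the spirit of the approach above: gradient descent on a smoothed-TV objective applied to a synthetic image. The image, parameters, and solver are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise-constant test image with additive Gaussian noise.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

def tv_denoise(f, lam=0.15, step=0.05, iters=400, eps=0.1):
    """Gradient descent on the smoothed TV objective
       min_u 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, 1) - u                     # forward differences
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # Backward-difference divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u = u - step * ((u - f) - lam * div)
    return u

denoised = tv_denoise(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_noisy, mse_denoised)
```

TV regularization penalizes total gradient magnitude, so flat regions are smoothed aggressively while large jumps (edges, sharp spectral peaks) are comparatively preserved.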

  3. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristic of range images of coherent ladar and the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS using the maximum of the marginal PDF. In the algorithm, similar blocks are found out by the operation of block matching and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. The simulated range images of coherent ladar with different carrier-to-noise ratio and real range image of coherent ladar with 8 gray-scales are denoised by this algorithm, and the results are compared with those of median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range image of coherent ladar are effectively suppressed by NLPS.
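
The mean-versus-mode distinction between NLM and NLPS can be illustrated in 1-D: for a block-matched group containing range-anomaly outliers, the histogram peak (the marginal-PDF maximum) is far more robust than the group mean. All values below are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# A group of pixels gathered by block matching: most carry the true range
# value plus small Gaussian noise, but 20% are range-anomaly outliers.
true_range = 120.0
group = true_range + rng.normal(0.0, 2.0, 80)
group = np.concatenate([group, rng.uniform(200.0, 255.0, 20)])

# NLM-style estimate: the mean of the group -- pulled towards the outliers.
mean_est = float(group.mean())

# NLPS-style estimate: the gray value maximizing the marginal PDF,
# approximated here by the peak of a coarse histogram of the group.
hist, edges = np.histogram(group, bins=np.arange(0.0, 256.0, 4.0))
peak = int(hist.argmax())
mode_est = float(0.5 * (edges[peak] + edges[peak + 1]))

print(mean_est, mode_est)
```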

  4. Denoising and deblurring of Fourier transform infrared spectroscopic imaging data

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan H.; Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew; Popescu, Gabriel; Do, Minh N.; Bhargava, Rohit

    2012-03-01

    Fourier transform infrared (FT-IR) spectroscopic imaging is a powerful tool to obtain chemical information from images of heterogeneous, chemically diverse samples. Significant advances in instrumentation and data processing in the recent past have led to improved instrument design and relatively widespread use of FT-IR imaging, in a variety of systems ranging from biomedical tissue to polymer composites. Various techniques for improving signal to noise ratio (SNR), data collection time and spatial resolution have been proposed previously. In this paper we present an integrated framework that addresses all these factors comprehensively. We utilize the low-rank nature of the data and model the instrument point spread function to denoise data, and then simultaneously deblur and estimate unknown information from images, using a Bayesian variational approach. We show that more spatial detail and improved image quality can be obtained using the proposed framework. The proposed technique is validated through experiments on a standard USAF target and on prostate tissue specimens.

  5. Denoising of Multi-Modal Images with PCA Self-Cross Bilateral Filter

    NASA Astrophysics Data System (ADS)

    Qiu, Yu; Urahama, Kiichi

    We present the PCA self-cross bilateral filter for denoising multi-modal images. We first apply principal component analysis to the input multi-modal images. We next smooth the first principal component with a preliminary filter and use it as a supplementary image for cross bilateral filtering of the input images. Among several preliminary filters, the undecimated wavelet transform is useful for effective denoising of various multi-modal images such as color, multi-lighting and medical images.

  6. 4D MR imaging using robust internal respiratory signal

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Wen, Zhifei; Stemkens, Bjorn; Tijssen, R. H. N.; van den Berg, C. A. T.; Hwang, Ken-Pin; Beddar, Sam

    2016-05-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space as well as the anterior body boundary as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal. We compared this proposed IR signal with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end inspiration times in the IR signal and bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not the bellows signal. This suggested that the IR signal determined by the proposed method could potentially correct the faulty bellows signal. The sorted 4D images showed minimal mismatched artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative to effectively sort MR images in 4D.
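
The idea of using low-frequency Fourier components of each cine frame as an internal respiratory surrogate can be sketched with a toy moving-band image series (synthetic data; the authors' clustering step and real MR acquisition are omitted):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic 2D cine series: a bright band whose vertical position follows breathing.
n_frames, H, W = 120, 32, 32
resp = np.sin(2 * np.pi * 0.25 * np.arange(n_frames) / 10.0)   # 0.25 Hz at 10 frames/s
frames = np.zeros((n_frames, H, W))
for i, r in enumerate(resp):
    row = int(8 + 4 * r)                      # band centre moves with breathing
    frames[i, row - 2:row + 2, :] = 1.0
frames += rng.normal(0.0, 0.05, frames.shape)

# Candidate internal respiratory surrogate: one low-frequency Fourier
# coefficient of each frame, tracked over time.
F = np.fft.fft2(frames, axes=(1, 2))
surrogate = F[:, 1, 0].real                   # lowest vertical frequency, DC horizontal

# The surrogate tracks the breathing trace (up to sign and scale).
corr = float(np.corrcoef(surrogate, resp)[0, 1])
print(abs(corr))
```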

  7. [Ultrasound image de-noising based on nonlinear diffusion of complex wavelet transform].

    PubMed

    Hou, Wen; Wu, Yiquan

    2012-04-01

    Ultrasound images are easily corrupted by speckle noise, which limits its further application in medical diagnoses. An image de-noising method combining dual-tree complex wavelet transform (DT-CWT) with nonlinear diffusion is proposed in this paper. Firstly, an image is decomposed by DT-CWT. Then adaptive-contrast-factor diffusion and total variation diffusion are applied to high-frequency component and low-frequency component, respectively. Finally the image is synthesized. The experimental results are given. The comparisons of the image de-noising results are made with those of the image de-noising methods based on the combination of wavelet shrinkage with total variation diffusion, the combination of wavelet/multiwavelet with nonlinear diffusion. It is shown that the proposed image de-noising method based on DT-CWT and nonlinear diffusion can obtain superior results. It can both remove speckle noise and preserve the original edges and textural features more efficiently. PMID:22616185

  8. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance with a Maxwell density prior for the local observed variance and a Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  9. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising-efficiency results are then fitted to these statistics, yielding efficient approximations. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
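
A minimal version of the hard-thresholding DCT denoising mechanism discussed here, using non-overlapping 8x8 blocks; the image, noise level, and the 2.7*sigma threshold are illustrative assumptions (a common choice), not the paper's exact setup:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(6)

clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def dct_hard_threshold(img, block=8, sigma=0.1, beta=2.7):
    """Hard-threshold the orthonormal DCT spectrum of each non-overlapping
    8x8 block at beta*sigma; the DC coefficient is always kept."""
    out = np.empty_like(img)
    t = beta * sigma
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = dctn(img[i:i + block, j:j + block], norm='ortho')
            dc = c[0, 0]
            c[np.abs(c) < t] = 0.0             # kill small (noise-dominated) coeffs
            c[0, 0] = dc
            out[i:i + block, j:j + block] = idctn(c, norm='ortho')
    return out

denoised = dct_hard_threshold(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_noisy, mse_denoised)
```

With an orthonormal DCT, white noise of standard deviation sigma stays at sigma per coefficient, which is what makes a fixed threshold proportional to sigma effective.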

  10. Phase and amplitude binning for 4D-CT imaging.

    PubMed

    Abdelnour, A F; Nehmeh, S A; Pan, T; Humm, J L; Vernon, P; Schöder, H; Rosenzweig, K E; Mageras, G S; Yorke, E; Larson, S M; Erdi, Y E

    2007-06-21

    We compare the consistency and accuracy of two image binning approaches used in 4D-CT imaging. One approach, phase binning (PB), assigns each breathing cycle 2π rad, within which the images are grouped. In amplitude binning (AB), the images are assigned bins according to the breathing signal's full amplitude. To quantitate both approaches we used a NEMA NU2-2001 IEC phantom oscillating in the axial direction and at random frequencies and amplitudes, approximately simulating a patient's breathing. 4D-CT images were obtained using a four-slice GE Lightspeed CT scanner operating in cine mode. We define consistency error as a measure of ability to correctly bin over repeated cycles in the same field of view. Average consistency error μe ± σe in PB ranged from 18% ± 20% to 30% ± 35%, while in AB the error ranged from 11% ± 14% to 20% ± 24%. In PB nearly all bins contained sphere slices. AB was more accurate, revealing empty bins where no sphere slices existed. As a proof of principle, we present examples of two non-small cell lung carcinoma patients' 4D-CT lung images binned by both approaches. While AB can lead to gaps in the coronal images, depending on the patient's breathing pattern, PB exhibits no gaps but suffers visible artifacts due to misbinning, yielding images that cover a relatively large amplitude range. AB was more consistent, though often resulted in gaps when no data existed due to patients' breathing pattern. We conclude AB is more accurate than PB. This has important consequences to treatment planning and diagnosis. PMID:17664557
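
The difference between the two binning schemes can be illustrated on a synthetic breathing trace: amplitude bins group samples of similar displacement by construction, while phase bins mix shallow and deep cycles. The trace, bin count, and spread metric below are illustrative assumptions:

```python
import numpy as np

# Synthetic breathing trace whose depth drifts from cycle to cycle.
t = np.linspace(0.0, 20.0, 2000)
amp = 1.0 + 0.5 * np.sin(0.3 * t)               # slowly drifting breathing depth
phase = 2 * np.pi * 0.25 * t                    # ~4 s breathing period
signal = amp * np.sin(phase)

n_bins = 10

# Phase binning (PB): every cycle spans 2*pi rad regardless of its depth.
wrapped = np.angle(np.exp(1j * phase))          # wrap to (-pi, pi]
phase_bins = np.clip(
    np.floor((wrapped + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)

# Amplitude binning (AB): bins span the full displacement range of the signal.
edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
amp_bins = np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)

def within_bin_spread(bins):
    # Average displacement spread of the samples grouped into each bin.
    return float(np.mean([signal[bins == b].std()
                          for b in range(n_bins) if np.any(bins == b)]))

phase_spread = within_bin_spread(phase_bins)
amp_spread = within_bin_spread(amp_bins)
print(phase_spread, amp_spread)
```

The lower within-bin displacement spread of AB mirrors the paper's finding that AB is more accurate, while PB's bins always receive samples from every cycle, mirroring its freedom from gaps.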

  11. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanated from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for the 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is using existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any light-transmission loss (which would be expected with an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuation than an LC panel can.
Robotics need

  12. 4D remote sensing image coding with JPEG2000

    NASA Astrophysics Data System (ADS)

    Muñoz-Gómez, Juan; Bartrina-Rapesta, Joan; Blanes, Ian; Jiménez-Rodríguez, Leandro; Aulí-Llinàs, Francesc; Serra-Sagristà, Joan

    2010-08-01

    Multicomponent data have become popular in several scientific fields such as forest monitoring, environmental studies, or sea water temperature detection. Nowadays, such multicomponent data can be collected more than once per year for the same region. This generates different instances in time of multicomponent data, also called 4D-Data (1D Temporal + 1D Spectral + 2D Spatial). For multicomponent data, it is important to take inter-band redundancy into account to produce a more compact representation of the image by packing the energy into a smaller number of bands, thus enabling higher compression performance. The principal decorrelators used to compact the inter-band redundancy are the Karhunen-Loeve Transform (KLT) and the Discrete Wavelet Transform (DWT). Because of the added temporal dimension, the inter-band redundancy among different multicomponent images is increased. In this paper we analyze the influence of the Temporal Dimension (TD) and the Spectral Dimension (SD) of 4D-Data on coding performance for JPEG2000, which supports applying different decorrelation stages and transforms to the components along the different dimensions. We evaluate the effect of applying different decorrelation techniques to the different dimensions, and we assess the performance of the two main decorrelation techniques, KLT and DWT. Experimental results are provided, showing rate-distortion performance when encoding 4D-Data using KLT and DWT techniques along the TD and SD dimensions.
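
The energy-packing effect of a spectral KLT can be sketched with a toy multicomponent cube; the rank-1 band model below is an assumption chosen to make the correlation structure obvious:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy multicomponent cube: 8 highly correlated spectral bands of a 32x32 scene.
base = rng.normal(0.0, 1.0, (32, 32))
bands = np.stack([base * (1.0 + 0.1 * k) + rng.normal(0.0, 0.05, base.shape)
                  for k in range(8)])            # shape (8, 32, 32)

# KLT along the spectral dimension: eigendecomposition of the band covariance.
X = bands.reshape(8, -1)
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
w, V = np.linalg.eigh(cov)                       # eigenvalues in ascending order
Y = V.T @ X                                      # decorrelated components

energy = (Y ** 2).sum(axis=1)
print(energy / energy.sum())    # nearly all energy packed into one component
```

After the KLT, almost all signal energy sits in a single component, so a coder such as JPEG2000 can spend most of its bit budget there and compress the remaining components very cheaply.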

  13. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using an advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event-related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy to apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences. PMID:23060344

  14. On high-order denoising models and fast algorithms for vector-valued images.

    PubMed

    Brito-Loeza, Carlos; Chen, Ke

    2010-06-01

    Variational techniques for gray-scale image denoising have been deeply investigated for many years; however, little research has been done for the vector-valued denoising case, and the very few existing works are all based on total-variation regularization. It is known that total-variation models for denoising gray-scale images suffer from the staircasing effect, and there is no reason to suggest this effect is not transported into the vector-valued models. High-order models, on the contrary, do not present staircasing. In this paper, we introduce three high-order and curvature-based denoising models for vector-valued images. Their properties are analyzed and a fast multigrid algorithm for the numerical solution is provided. AMS subject classifications: 68U10, 65F10, 65K10. PMID:20172828

  15. 4D XCAT phantom for multimodality imaging research

    SciTech Connect

    Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.

    2010-09-15

    Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ''Basic anatomical and physiological data for use in radiological protection: reference values,'' ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce

  16. 4D XCAT phantom for multimodality imaging research

    PubMed Central

    Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.

    2010-01-01

    Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ''Basic anatomical and physiological data for use in radiological protection: reference values,'' ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce

  17. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect of the total variation filter and simultaneously avoid edge blurring by the fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels are located at edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approaches are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both qualitative and quantitative evaluations. PMID:27047730
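The edge-gated selection between a second-order (TV-like) term and a fourth-order term can be illustrated with a toy explicit scheme. This sketch only shows the gating idea, assuming a Perona-Malik-style edge indicator; it is not the paper's actual hybrid regularizer or its split Bregman solver, and `k` and `dt` are illustrative parameters.

```python
def hybrid_diffusion_step(u, k=0.5, dt=0.1):
    """One explicit step of a toy 1-D hybrid diffusion.

    An edge indicator g = 1/(1 + (du/k)^2) gates between a
    second-order (TV-like) update near edges (g small) and a
    fourth-order-style smoothing update in flat regions (g large).
    """
    n = len(u)
    out = list(u)
    for i in range(2, n - 2):
        du = (u[i + 1] - u[i - 1]) / 2.0       # central difference
        g = 1.0 / (1.0 + (du / k) ** 2)         # ~1 in flat regions, ~0 at edges
        lap = u[i + 1] - 2 * u[i] + u[i - 1]    # second-order term
        bih = (u[i + 2] - 4 * u[i + 1] + 6 * u[i]
               - 4 * u[i - 1] + u[i - 2])       # fourth-order term
        out[i] = u[i] + dt * ((1 - g) * lap - g * bih)
    return out
```

On a constant signal both terms vanish, so the step is a fixed point, which is the minimal sanity check for any diffusion scheme.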

  18. Empirical mode decomposition based background removal and de-noising in polarization interference imaging spectrometer.

    PubMed

    Zhang, Chunmin; Ren, Wenyi; Mu, Tingkui; Fu, Lili; Jia, Chenling

    2013-02-11

    Based on empirical mode decomposition (EMD), background removal and de-noising procedures for data taken by a polarization interference imaging spectrometer (PIIS) are implemented. Through numerical simulation, it is shown that the data processing methods are effective. The assumption that the noise mostly exists in the first intrinsic mode function is verified, and the parameters in the EMD thresholding de-noising methods are determined. For comparison, wavelet and windowed Fourier transform based thresholding de-noising methods are introduced. The de-noised results are evaluated by the SNR, spectral resolution and peak value of the de-noised spectra. All the methods are used to suppress the effects of Gaussian and Poisson noise. The de-noising efficiency is higher for spectra contaminated by Gaussian noise. The interferogram obtained by the PIIS is processed by the proposed methods. Both an interferogram without background and a noise-free spectrum are obtained effectively. The adaptive and robust EMD-based methods are effective for background removal and de-noising in PIIS. PMID:23481716

  19. Estimating Myocardial Motion by 4D Image Warping

    PubMed Central

    Sundar, Hari; Litt, Harold; Shen, Dinggang

    2009-01-01

    A method for spatio-temporally smooth and consistent estimation of cardiac motion from MR cine sequences is proposed. Myocardial motion is estimated within a 4-dimensional (4D) registration framework, in which all 3D images obtained at different cardiac phases are simultaneously registered. This facilitates spatio-temporally consistent estimation of motion as opposed to other registration-based algorithms which estimate the motion by sequentially registering one frame to another. To facilitate image matching, an attribute vector (AV) is constructed for each point in the image, and is intended to serve as a “morphological signature” of that point. The AV includes intensity, boundary, and geometric moment invariants (GMIs). Hierarchical registration of two image sequences is achieved by using the most distinctive points for initial registration of two sequences and gradually adding less-distinctive points to refine the registration. Experimental results on real data demonstrate good performance of the proposed method for cardiac image registration and motion estimation. The motion estimation is validated via comparisons with motion estimates obtained from MR images with myocardial tagging. PMID:20379351

  20. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in place and only store the noisy data, the denoised image and the problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time. PMID:25675454
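The one-dimensional pixel-update subproblem can be illustrated for a single pixel with a Huber roughness penalty. This is only a sketch of the majorize-minimize idea (the half-quadratic weight w = phi'(t)/t yields a closed-form weighted-average update), not the paper's GPU implementation; `beta` and `delta` are illustrative parameters.

```python
def mm_pixel_update(y, neighbors, beta=1.0, delta=0.1, iters=20):
    """Majorize-minimize update for a single pixel value x.

    Minimizes 0.5*(x - y)**2 + beta * sum_j huber(x - n_j) by
    repeatedly majorizing the Huber penalty with a quadratic:
    w = huber'(t)/t equals 1 for |t| <= delta and delta/|t| beyond,
    giving a closed-form update at each iteration.
    """
    x = y
    for _ in range(iters):
        wsum, wnsum = 0.0, 0.0
        for n in neighbors:
            t = x - n
            w = 1.0 if abs(t) <= delta else delta / abs(t)
            wsum += w
            wnsum += w * n
        x = (y + beta * wnsum) / (1.0 + beta * wsum)
    return x
```

Each pixel update only reads the noisy value and its current neighbors, which is why a full image of such subproblems parallelizes naturally on a GPU.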

  1. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with breathing. A variety of 4D imaging techniques have been developed, and more are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, can enable comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session will focus on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need for and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  2. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF). We selected the optimal parameters of SOBF using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
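For reference, a plain bilateral filter looks like the minimal 1-D sketch below; `sigma_s` and `sigma_r` are exactly the kind of parameters a SURE criterion would be used to select automatically. The sketch itself does not implement SURE and is not the authors' filter.

```python
import math

def bilateral_filter_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.2):
    """Minimal 1-D bilateral filter.

    Each sample is a normalized average of its neighbors, weighted by
    both spatial distance (sigma_s) and intensity difference (sigma_r);
    large intensity jumps receive tiny weights, so edges are preserved.
    """
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

With a small `sigma_r`, samples on opposite sides of a step edge barely influence each other, which is the edge-preserving behavior the abstract relies on.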

  3. Segmentation based denoising of PET images: an iterative approach via regional means and affinity propagation.

    PubMed

    Xu, Ziyue; Bagci, Ulas; Seidel, Jurgen; Thomasson, David; Solomon, Jeff; Mollura, Daniel J

    2014-01-01

    Delineation and noise removal play a significant role in clinical quantification of PET images. Conventionally, these two tasks are considered independent; however, denoising can improve the performance of boundary delineation by enhancing SNR while preserving the structural continuity of local regions. On the other hand, we postulate that segmentation can help the denoising process by constraining the smoothing criteria locally. Herein, we present a novel iterative approach for simultaneous PET image denoising and segmentation. The proposed algorithm applies a generalized Anscombe transformation prior to a non-local means based noise removal scheme and affinity propagation based delineation. For nonlocal means denoising, we propose a new regional means approach where we automatically and efficiently extract the appropriate subset of the image voxels by incorporating the class information from affinity propagation based segmentation. PET images after denoising are further utilized for refinement of the segmentation in an iterative manner. Qualitative and quantitative results demonstrate that the proposed framework successfully removes the noise from PET images while preserving the structures, and improves the segmentation accuracy. PMID:25333180
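The variance-stabilization step can be sketched as follows: the generalized Anscombe transform maps Poisson-Gaussian data to approximately unit variance, so a Gaussian denoiser such as non-local means can be applied afterwards, and the transform is then inverted. The simple algebraic inverse below is an assumption for illustration; practical pipelines use an exact unbiased inverse.

```python
import math

def gat(x, alpha=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform for Poisson-Gaussian noise.

    With gain alpha and additive Gaussian noise (mean mu, std sigma),
    this maps the data to approximately unit variance. For alpha=1,
    sigma=0 it reduces to the classical Anscombe transform
    2*sqrt(x + 3/8).
    """
    arg = alpha * x + 0.375 * alpha ** 2 + sigma ** 2 - alpha * mu
    return (2.0 / alpha) * math.sqrt(max(arg, 0.0))

def gat_inverse(y, alpha=1.0, sigma=0.0, mu=0.0):
    """Simple algebraic inverse of gat (biased; the exact unbiased
    inverse used in practice is more involved)."""
    return ((alpha * y / 2.0) ** 2 - 0.375 * alpha ** 2
            - sigma ** 2 + alpha * mu) / alpha
```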

  4. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated by noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; moreover, emerging multiresolution methods are also investigated. In this work, a new denoising method based on the dual tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and can obtain better performance than the other methods both in visual effects and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structure Similarity (SSIM) values.
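The quantitative criteria mentioned (MSE and PSNR) are standard and easy to state in code; the minimal versions below operate on flattened images. SSIM, which additionally involves local means, variances, and covariances, is omitted for brevity.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference, and identical images give infinity."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * math.log10(peak ** 2 / m)
```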

  5. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A Radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.
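The adaptive median filter used here to pre-clean impulse noise can be sketched in 1-D as follows: the window grows until its median is not itself an impulse, and the center sample is replaced only when it looks like one. This is a generic textbook version, not the authors' exact 2-D implementation; `max_radius` is an illustrative parameter.

```python
def adaptive_median_1d(signal, max_radius=3):
    """1-D adaptive median filter for impulse (salt-and-pepper-like) noise."""
    n = len(signal)
    out = list(signal)
    for i in range(n):
        r = 1
        while True:
            window = sorted(signal[max(0, i - r):min(n, i + r + 1)])
            lo, med, hi = window[0], window[len(window) // 2], window[-1]
            if lo < med < hi:                    # stage A: median is reliable
                if not (lo < signal[i] < hi):    # stage B: sample is an impulse
                    out[i] = med
                break
            if r == max_radius:                  # give up: output the median
                out[i] = med
                break
            r += 1
    return out
```

Unlike a fixed-window median, the adaptive version leaves fine detail alone when the local median is already trustworthy, which is why it makes a good initializer for an iterative model.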

  6. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicon tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
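Level-independent universal thresholding can be illustrated with a single-level Haar transform. The MODWT used in the paper differs (it is undecimated and handles non-radix-2 lengths), so the sketch below shows only the thresholding idea, assuming an even-length signal and estimating the noise level from the median absolute detail coefficient.

```python
import math

def haar_denoise(signal):
    """Single-level Haar wavelet denoising with the universal threshold.

    Detail coefficients are soft-thresholded at sigma*sqrt(2*ln(n)),
    with sigma estimated robustly as MAD / 0.6745. Requires an
    even-length signal for one Haar level.
    """
    n = len(signal)
    assert n % 2 == 0, "even length required for one Haar level"
    s2 = math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i+1]) / s2 for i in range(n // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) / s2 for i in range(n // 2)]
    # robust noise estimate from the detail coefficients
    mad = sorted(abs(d) for d in detail)[len(detail) // 2]
    sigma = mad / 0.6745
    lam = sigma * math.sqrt(2.0 * math.log(n))      # universal threshold
    soft = [math.copysign(max(abs(d) - lam, 0.0), d) for d in detail]
    # inverse Haar transform
    out = []
    for a, d in zip(approx, soft):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out
```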

  7. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploited the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain, and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.

  8. Denoising of PET images by combining wavelets and curvelets for improved preservation of resolution and quantitation.

    PubMed

    Le Pogam, A; Hanzouli, H; Hatt, M; Cheze Le Rest, C; Visvikis, D

    2013-12-01

    Denoising of Positron Emission Tomography (PET) images is a challenging task due to the inherent low signal-to-noise ratio (SNR) of the acquired data. A pre-processing denoising step may facilitate and improve the results of further steps such as segmentation, quantification or textural feature characterization. Different recent denoising techniques have been introduced and most state-of-the-art methods are based on filtering in the wavelet domain. However, the wavelet transform suffers from some limitations due to its non-optimal processing of edge discontinuities. More recently, a new multi-scale geometric approach has been proposed, namely the curvelet transform. It extends the wavelet transform to account for directional properties in the image. In order to address the issue of resolution loss associated with standard denoising, we considered a strategy combining the complementary wavelet and curvelet transforms. We compared different figures of merit (e.g. SNR increase, noise decrease in homogeneous regions, resolution loss, and intensity bias) on simulated and clinical datasets with the proposed combined approach and the wavelet-only and curvelet-only filtering techniques. The three methods led to an increase of the SNR. Regarding quantitative accuracy, however, the wavelet-only and curvelet-only denoising approaches led to larger biases in the intensity and the contrast than the proposed combined algorithm. This approach could become an alternative to the filters currently used after image reconstruction in clinical systems, such as the Gaussian filter. PMID:23837964

  9. MR images denoising using DCT-based unbiased nonlocal means filter

    NASA Astrophysics Data System (ADS)

    Zheng, Xiuqing; Hu, Jinrong; Zhou, Jiuliu

    2013-03-01

    The non-local means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter that uses a low-pass filtered, low-dimensional version of the neighborhood for calculating the similarity weights. The discrete cosine transform (DCT) is used as a smoothing kernel, allowing both improvements in similarity estimation and computational speed-up. Experimental results show that the proposed filter achieves better denoising performance on MR images compared to other filters, such as the recently proposed NLM filter and the unbiased NLM (UNLM) filter.
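The low-pass, low-dimensional neighborhood idea can be sketched by truncating a DCT-II of each patch and comparing patches in that reduced coefficient space. The direct-form DCT below and the `keep` parameter are illustrative assumptions, not the authors' implementation; a real filter would use a fast transform.

```python
import math

def dct2_coeffs(patch, keep):
    """DCT-II of a 1-D patch, truncated to the first `keep` coefficients.

    Keeping only low-frequency coefficients yields a low-pass,
    low-dimensional patch descriptor, making NLM similarity weights
    both less noise-sensitive and cheaper to compute.
    """
    n = len(patch)
    coeffs = []
    for k in range(keep):
        c = sum(patch[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
        coeffs.append(c)
    return coeffs

def descriptor_dist2(p, q, keep=3):
    """Squared distance between the truncated DCT descriptors of two patches."""
    a, b = dct2_coeffs(p, keep), dct2_coeffs(q, keep)
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

The descriptor distance then replaces the full patch distance inside the exponential weight of the NLM filter.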

  10. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  11. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising

    NASA Astrophysics Data System (ADS)

    Wu, Zhaojun; Wang, Qiang; Wu, Zhenghua; Shen, Yi

    2016-01-01

    Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue stands for a special physical meaning and should be regularized differently. Moreover, the NNM-based methods only exploit the high spectral correlation, while ignoring the local structure of the HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of the HSI, TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.

  12. Dual tree complex wavelet transform based denoising of optical microscopy images.

    PubMed

    Bal, Ufuk

    2012-12-01

    Photon shot noise is the main noise source of optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the discrete wavelet transform (DWT) has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual tree complex wavelet transform is used in our proposed denoising algorithm. Our denoising algorithm is based on the assumption that, for the Poisson noise case, threshold values for wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with one of the state-of-the-art denoising algorithms. Better results were obtained by using the proposed algorithm in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low light intensity conditions. PMID:23243573

  13. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of NL-means denoising on this quality metric Q. Our experiments are based on late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimate of optimal parameters is provided by another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.

  14. 2D/4D marker-free tumor tracking using 4D CBCT as the reference image

    PubMed Central

    Wang, Mengjiao; Rit, Simon; Delmon, Vivien; Wang, Guangzhi

    2014-01-01

    Tumor motion caused by respiration is an important issue in image guided radiotherapy. A 2D/4D matching method between 4D volumes derived from cone beam computed tomography (CBCT) and 2D fluoroscopic images was implemented to track the tumor motion without the use of implanted markers. In this method, firstly, 3DCBCT and phase-rebinned 4DCBCT are reconstructed from cone beam acquisition. Secondly, 4DCBCT volumes and a streak-free 3DCBCT volume are combined to improve the image quality of the DRRs. Finally, the 2D/4D matching problem is converted into a 2D/2D matching between incoming projections and DRR images from each phase of the 4DCBCT. The diaphragm is used as a target surrogate for matching instead of using the tumor position directly. This relies on the assumption that if a patient has the same breathing phase and diaphragm position as the reference 4DCBCT, then the tumor position is the same. From the matching results, the phase information, diaphragm position and tumor position at the time of each incoming projection acquisition can be derived. The accuracy of this method was verified using 16 candidate datasets, representing lung and liver applications and 1-minute and 2-minute acquisitions. The criteria for the eligibility of datasets were described: 11 eligible datasets were selected to verify the accuracy of diaphragm tracking, and one eligible dataset was chosen to verify the accuracy of tumor tracking. The diaphragm matching accuracy was 1.88 ± 1.35 mm in the isocenter plane, and the 2D tumor tracking accuracy was 2.13 ± 1.26 mm in the isocenter plane. These features make this method feasible for real-time marker-free tumor motion tracking purposes. PMID:24710793

  15. 2D/4D marker-free tumor tracking using 4D CBCT as the reference image

    NASA Astrophysics Data System (ADS)

    Wang, Mengjiao; Sharp, Gregory C.; Rit, Simon; Delmon, Vivien; Wang, Guangzhi

    2014-05-01

    Tumor motion caused by respiration is an important issue in image-guided radiotherapy. A 2D/4D matching method between 4D volumes derived from cone beam computed tomography (CBCT) and 2D fluoroscopic images was implemented to track the tumor motion without the use of implanted markers. In this method, firstly, 3DCBCT and phase-rebinned 4DCBCT are reconstructed from cone beam acquisition. Secondly, 4DCBCT volumes and a streak-free 3DCBCT volume are combined to improve the image quality of the digitally reconstructed radiographs (DRRs). Finally, the 2D/4D matching problem is converted into a 2D/2D matching between incoming projections and DRR images from each phase of the 4DCBCT. The diaphragm is used as a target surrogate for matching instead of using the tumor position directly. This relies on the assumption that if a patient has the same breathing phase and diaphragm position as the reference 4DCBCT, then the tumor position is the same. From the matching results, the phase information, diaphragm position and tumor position at the time of each incoming projection acquisition can be derived. The accuracy of this method was verified using 16 candidate datasets, representing lung and liver applications and one-minute and two-minute acquisitions. The criteria for the eligibility of datasets were described: 11 eligible datasets were selected to verify the accuracy of diaphragm tracking, and one eligible dataset was chosen to verify the accuracy of tumor tracking. The diaphragm matching accuracy was 1.88 ± 1.35 mm in the isocenter plane and the 2D tumor tracking accuracy was 2.13 ± 1.26 mm in the isocenter plane. These features make this method feasible for real-time marker-free tumor motion tracking purposes.

  16. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform that generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal to noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is valuable in a wide range of de-noising problems; however, it has limitations such as oscillation of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing, and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.
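    The final step, magnitude thresholding of complex coefficients, can be sketched as follows. No DTCWT implementation is assumed here, so FFT coefficients of a noisy 1D signal stand in for the complex wavelet subbands; the shrinkage rule (shrink the magnitude, keep the phase) is the same, and the 3-sigma threshold is an ad hoc choice.

```python
import numpy as np

def soft_threshold_complex(c, t):
    """Shrink the magnitude of complex coefficients, preserving phase."""
    mag = np.abs(c)
    return c * np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)

rng = np.random.default_rng(1)
n = 256
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)
noisy = clean + 0.3 * rng.normal(size=n)

C = np.fft.rfft(noisy)                  # stand-in for a complex subband
t = 3 * 0.3 * np.sqrt(n / 2)            # ~3x the per-bin noise deviation
den = np.fft.irfft(soft_threshold_complex(C, t), n=n)
```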

  17. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain translation invariance, as it is an important property in image denoising. The directionality of the lifting-based tight frame is then explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures, along with fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is applied to image denoising, incorporating the MAP estimator for the multivariate exponential distribution. Consequently, the TIDFT is able to eliminate noise effectively while simultaneously preserving textures. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive with state-of-the-art denoising approaches. PMID:24215934

  18. The Application of Wavelet-Domain Hidden Markov Tree Model in Diabetic Retinal Image Denoising

    PubMed Central

    Cui, Dong; Liu, Minmin; Hu, Lei; Liu, Keju; Guo, Yongxin; Jiao, Qing

    2015-01-01

    The wavelet-domain hidden Markov tree model can properly describe the dependence and correlation of the wavelet coefficients of fundus angiographic images across scales. Based on the construction of hidden Markov tree models and Gaussian mixture models for fundus angiographic images, this paper applies the expectation-maximization algorithm to estimate the wavelet coefficients of the original fundus angiographic images and Bayesian estimation to achieve denoising. As shown in the experimental results, compared with other algorithms such as the mean filter and the median filter, this method effectively improves the peak signal to noise ratio of fundus angiographic images after denoising and preserves the details of vascular edges. PMID:26628926

  19. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework for this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery; denoising of HSI using SVD is achieved by finding a low-rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The best multilinear rank approximation (BMRA) of a given tensor A is a tensor B of lower multilinear rank that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the alternating least squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable denoising is achievable with both ALS and Newton-type methods. Moreover, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
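    The core operation can be sketched with a one-pass truncated HOSVD, which is the usual initializer for the ALS/HOOI iterations discussed above; this is a simplification for illustration, not the paper's full Grassmann-manifold method.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncated_hosvd(T, ranks):
    """One-pass multilinear rank truncation (quasi-optimal; ALS/HOOI
    would refine these factor matrices iteratively)."""
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    core = T
    for mode, u in enumerate(U):
        core = mode_multiply(core, u.T, mode)   # project to the core
    approx = core
    for mode, u in enumerate(U):
        approx = mode_multiply(approx, u, mode)  # expand back
    return approx

rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2, 2))
A = [rng.normal(size=(6, 2)) for _ in range(3)]
clean = np.einsum('abc,ia,jb,kc->ijk', G, *A)    # multilinear rank (2,2,2)
noisy = clean + 0.01 * rng.normal(size=clean.shape)
approx = truncated_hosvd(noisy, (2, 2, 2))
```

    Projecting the noisy cube onto the estimated low-multilinear-rank subspaces removes most of the noise energy lying outside them, which is the denoising mechanism the abstract describes.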

  20. 4D flow imaging: current status to future clinical applications.

    PubMed

    Markl, Michael; Schnell, Susanne; Barker, Alex J

    2014-05-01

    4D flow MRI permits a comprehensive in vivo assessment of three-directional blood flow within 3-dimensional vascular structures throughout the cardiac cycle. Given the large coverage permitted from a 4D flow acquisition, the distribution of vessel wall and flow parameters along an entire vessel of interest can thus be derived from a single measurement without being dependent on multiple predefined 2D acquisitions. In addition to qualitative 3D visualizations of complex cardiac and vascular flow patterns, quantitative flow analysis can be performed and is complemented by the ability to compute sophisticated hemodynamic parameters, such as wall shear stress or 3D pressure difference maps. These metrics can provide information previously unavailable with conventional modalities regarding the impact of cardiovascular disease or therapy on global and regional changes in hemodynamics. This review provides an introduction to the methodological aspects of 4D flow MRI to assess vascular hemodynamics and describes its potential for the assessment and understanding of altered hemodynamics in the presence of cardiovascular disease. PMID:24700368

  1. R-L Method and BLS-GSM Denoising for Penumbra Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Mei; Li, Yang; Sheng, Liang; Li, Chunhua; Wei, Fuli; Peng, Bodong

    2013-12-01

    When the neutron yield is very low, reconstruction of a coded penumbra image is rather difficult. In this paper, low-yield (10⁹) 14 MeV neutron penumbra imaging was simulated by the Monte Carlo method. The Richardson-Lucy (R-L) iteration method was proposed, incorporated with Bayesian least squares-Gaussian scale mixture (BLS-GSM) wavelet denoising of the simulated image. The optimal number of R-L iterations was determined through extensive testing. The results show that, compared with the Wiener method and median filter denoising, this method is better at restraining background noise, the correlation coefficient Rsr between the reconstructed and real images is larger, and the reconstruction result is better.
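    The R-L iteration itself is compact. The sketch below is a 1D toy (real penumbra images are 2D, and the BLS-GSM denoising step is omitted): the estimate is repeatedly multiplied by the back-projected ratio of the data to the current blurred estimate.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """1D Richardson-Lucy deconvolution (multiplicative EM updates)."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_flip = psf[::-1]                          # adjoint of the blur
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est

true = np.zeros(64)
true[30] = 1.0                       # point source
psf = np.ones(5) / 5                 # box blur, a stand-in for a penumbra aperture
blurred = np.convolve(true, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```

    The multiplicative update preserves non-negativity, which is why R-L is popular for photon- and neutron-counting data.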

  2. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of the observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented in the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from the algorithms currently used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, those of the current denoising/smoothing algorithms. The results of our early testing provide insight into the algorithm's capabilities on multiple-wavelength astronomy data sets.

  3. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    PubMed

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the SNR of the projection data declines sharply. PMID:26409424
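    The PWLS idea can be illustrated on a single sinogram row. For a closed-form sketch, the paper's learned-dictionary sparsity penalty is replaced here by a simple quadratic first-difference penalty; the weights still encode the signal-dependent noise variance, which is the "weighted" part of PWLS.

```python
import numpy as np

def pwls_denoise(y, weights, beta=20.0):
    """Closed-form PWLS with a quadratic roughness penalty:
    minimize (y - x)' W (y - x) + beta * ||D x||^2."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference operator
    A = np.diag(weights) + beta * D.T @ D
    return np.linalg.solve(A, weights * y)

rng = np.random.default_rng(0)
clean = 2.0 + np.sin(np.linspace(0, np.pi, 80))   # one smooth sinogram row
sigma = 0.2 * np.sqrt(clean)                      # Poisson-like: variance tracks signal
noisy = clean + sigma * rng.normal(size=clean.size)
den = pwls_denoise(noisy, weights=1.0 / sigma ** 2)
```

    Bins with larger variance get smaller weights and are therefore smoothed more aggressively, mirroring how PWLS treats low-count projection data.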

  4. Denoising of brain MRI images using modified PDE based on pixel similarity

    NASA Astrophysics Data System (ADS)

    Jin, Renchao; Song, Enmin; Zhang, Lijuan; Min, Zhifang; Xu, Xiangyang; Huang, Chih-Cheng

    2008-03-01

    Although various image denoising methods, such as PDE-based algorithms, have made remarkable progress in recent years, the trade-off between noise reduction and edge preservation remains an interesting and difficult problem in image processing and analysis. A new image denoising algorithm, using a modified PDE model based on pixel similarity, is proposed to address this problem. Pixel similarity measures the similarity between two pixels, from which the neighboring consistency of the center pixel can be calculated. Informally, if a pixel is not consistent enough with its surrounding pixels, it can be considered noise, but an extremely strong inconsistency suggests an edge. Pixel similarity is a probability measure, with values between 0 and 1. According to the neighboring consistency of the pixel, a diffusion control factor is determined by a simple thresholding rule. This factor is incorporated into the primary partial differential equation as an adjusting factor that controls the speed of diffusion for different types of pixels. An evaluation of the proposed algorithm on simulated brain MRI images was carried out. Initial experimental results showed that the new algorithm smooths MRI images while keeping edges intact, achieving a higher peak signal-to-noise ratio (PSNR) than several existing denoising algorithms.
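    A hedged NumPy sketch of the idea: a Perona-Malik style update whose flux is gated by a binary control factor derived from a similarity threshold. All constants and the exact similarity formula here are illustrative choices, not the paper's.

```python
import numpy as np

def similarity_diffusion(img, iters=20, dt=0.2, k=0.15, edge_sim=0.2):
    """Diffusion gated by pixel similarity: pixels extremely inconsistent
    with their neighbourhood are treated as edges and left untouched,
    while mildly inconsistent pixels (noise) are diffused."""
    u = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode='edge')
        dN, dS = p[:-2, 1:-1] - u, p[2:, 1:-1] - u
        dW, dE = p[1:-1, :-2] - u, p[1:-1, 2:] - u
        local_mean = (dN + dS + dW + dE) / 4 + u
        sim = np.exp(-((u - local_mean) / k) ** 2)    # similarity in (0, 1]
        factor = np.where(sim < edge_sim, 0.0, 1.0)   # simple thresholding rule
        flux = sum(d / (1.0 + (d / k) ** 2) for d in (dN, dS, dW, dE))
        u += dt * factor * flux
    return u

rng = np.random.default_rng(0)
clean = np.zeros((24, 32))
clean[:, 16:] = 1.0                                   # step edge
noisy = clean + 0.05 * rng.normal(size=clean.shape)
den = similarity_diffusion(noisy)
```

    On this toy step image, the flat halves are smoothed while the strongly inconsistent pixels straddling the edge are frozen, preserving the step.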

  5. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical coherence tomography (OCT) has gradually become a very important imaging technology in the biomedical field for its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform is proposed for OCT heart tube images. A bivariate function is constructed to model the joint probability density function (pdf) of each coefficient and its cousin in the contourlet domain, and a bivariate shrinkage function is derived to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the image denoised by the proposed algorithm. The results show that the signal-to-noise ratio is improved while the edges of objects are preserved. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter, Lee filter, as well as the bivariate shrinkage function for the wavelet-based algorithm, are conducted, and the advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
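    For reference, the classical wavelet-domain bivariate shrinkage rule of Sendur and Selesnick, which this paper adapts to the contourlet domain, shrinks a coefficient using its relative at the next coarser scale:

```python
import numpy as np

def bivariate_shrink(y, yp, sigma_n, sigma):
    """Sendur-Selesnick MAP bivariate shrinkage: y is a coefficient,
    yp its parent (here, cousin) at the next coarser scale."""
    r = np.sqrt(np.abs(y) ** 2 + np.abs(yp) ** 2)
    gain = np.maximum(r - np.sqrt(3) * sigma_n ** 2 / sigma, 0) / np.maximum(r, 1e-12)
    return y * gain

strong = bivariate_shrink(10.0, 5.0, 1.0, 1.0)   # large coefficient: mostly kept
weak = bivariate_shrink(0.5, 0.5, 1.0, 1.0)      # small coefficient: killed
```

    Here sigma_n is the noise standard deviation and sigma the local signal deviation; a coefficient whose joint magnitude with its parent falls below sqrt(3)·sigma_n²/sigma is set to zero, which is how weak, probably-noise coefficients are suppressed while strong structure survives.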

  6. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach to image denoising. In this Letter, the Student's probability density function is introduced into the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. To show the effectiveness of the tBEMD, several image denoising techniques in the tBEMD domain are employed, namely the fourth-order partial differential equation (PDE), the linear complex diffusion process (LCDP), the non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for the experiments. The original images were corrupted with additive Gaussian noise at three different levels. Based on peak signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD domain than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low; when it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  7. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA.

    PubMed

    Hosotani, Fumitaka; Inuzuka, Yuya; Hasegawa, Masaya; Hirobayashi, Shigeki; Misawa, Tadanobu

    2015-12-01

    In this paper, we propose a zero-mean white Gaussian noise removal method using high-resolution frequency analysis. It is difficult to separate the original image component from the noise component when using the discrete Fourier transform or discrete cosine transform for analysis, because sidelobes occur in the results. The 2D non-harmonic analysis (2D NHA) is a high-resolution frequency analysis technique that improves noise removal accuracy thanks to its sidelobe reduction feature. However, because the image signal is non-stationary, the spectra generated by NHA are distorted. In this paper, we therefore analyze each region of homogeneous texture in the noisy image separately. The non-uniform regions produced by segmentation are analyzed with an extended 2D NHA method called Mask NHA. We conducted an experiment using a simulated image and found that Mask NHA denoising attains a higher peak signal-to-noise ratio (PSNR) than state-of-the-art methods if a suitable segmentation result can be obtained from the input image, even though parameter optimization was incomplete. This experimental result exhibits the upper limit on the PSNR attainable by our Mask NHA denoising method; its performance is expected to approach this limit as the segmentation method improves. PMID:26513792

  8. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity

    PubMed Central

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE) based denoising has attracted much attention in medical image processing, for instance in magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth the image in a nonlinear way, effectively removing noise while preserving edges through anisotropic diffusion controlled by a diffusive function. Such a function was first introduced by Perona and Malik (P-M) in their model; they proposed two functions that remain the most frequently used in PDE-based methods. Since these functions consider only the gradient information of the diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with a fractional power, based on pixel similarity, to improve the P-M model at low SNR. We will also show that our proposed function stabilizes the P-M method. Experimental results show that our modified version of the P-M function improves the SNR and preserves edges better than the original P-M functions at low SNR. PMID:26955563
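    For context, the two classical Perona-Malik diffusivity functions referred to above are easy to state; the paper's fractional-power, similarity-driven modification is not reproduced here.

```python
import numpy as np

def g_exp(grad, k):
    """Perona-Malik diffusivity 1: exp(-(|grad|/k)^2)."""
    return np.exp(-(grad / k) ** 2)

def g_rational(grad, k):
    """Perona-Malik diffusivity 2: 1 / (1 + (|grad|/k)^2)."""
    return 1.0 / (1.0 + (grad / k) ** 2)
```

    Both functions tend to 1 for small gradients (flat regions diffuse freely) and to 0 for large gradients (edges block diffusion); because they depend only on the local gradient, a strong noise spike looks like an edge, which is the weakness at low SNR that the proposed similarity-based modification targets.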

  9. Creation of 4D imaging data using open source image registration software

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Ibanez, Luis; Popa, Teo; Cleary, Kevin

    2006-03-01

    4D images (3 spatial dimensions plus time) using CT or MRI will play a key role in radiation medicine as techniques for respiratory motion compensation become more widely available. Advance knowledge of the motion of a tumor and its surrounding anatomy will allow the creation of highly conformal dose distributions in organs such as the lung, liver, and pancreas. However, many of the current investigations into 4D imaging rely on synchronizing the image acquisition with an external respiratory signal such as skin motion, tidal flow, or lung volume, which typically requires specialized hardware and modifications to the scanner. We propose a novel method for 4D image acquisition that does not require any specific gating equipment and is based solely on open source image registration algorithms. Specifically, we use the Insight Toolkit (ITK) to compute the normalized mutual information (NMI) between images taken at different times and use that value as an index of respiratory phase. This method has the advantages of (1) being able to be implemented without any hardware modification to the scanner, and (2) basing the respiratory phase on changes in internal anatomy rather than external signal. We have demonstrated the capabilities of this method with CT fluoroscopy data acquired from a swine model.
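    The phase index described above reduces to an NMI computation between image pairs. A standard histogram-based estimate (not necessarily ITK's exact implementation) can be sketched as:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    hx = entropy(pxy.sum(axis=1))
    hy = entropy(pxy.sum(axis=0))
    return (hx + hy) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
frame_a = rng.normal(size=(32, 32))      # stand-in for one CT fluoroscopy frame
frame_b = rng.normal(size=(32, 32))      # an unrelated frame
nmi_same = normalized_mutual_information(frame_a, frame_a)
nmi_diff = normalized_mutual_information(frame_a, frame_b)
```

    Frames acquired at similar respiratory phases share more internal anatomy and therefore score a higher NMI, which is what lets NMI serve as a surrogate respiratory signal without external gating hardware.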

  10. Image Pretreatment Tools I: Algorithms for Map Denoising and Background Subtraction Methods.

    PubMed

    Cannistraci, Carlo Vittorio; Alessio, Massimo

    2016-01-01

    One of the critical steps in two-dimensional electrophoresis (2-DE) image pre-processing is denoising, which can aggressively affect both spot detection and pixel-based methods. The Median Modified Wiener Filter (MMWF), a new nonlinear adaptive spatial filter, proved to be a good denoising approach for use with 2-DE. MMWF is suitable for global denoising and for the simultaneous removal of spikes and Gaussian noise, its best setting being invariant to the type of noise. A second critical step arises because 2-DE gel images may contain high levels of background, generated by the laboratory experimental procedures, that must be subtracted for accurate measurement of the proteomic optical density signals. Here we discuss an efficient mathematical method for background estimation that is suitable even before 2-DE image spot detection and is based on three-dimensional mathematical morphology (3DMM) theory. PMID:26611410
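    A sketch of the MMWF idea, with the caveat that the published filter may differ in detail: a local Wiener-style gain applied around the local median rather than the local mean, which is what makes the filter robust to spikes as well as Gaussian noise.

```python
import numpy as np

def mmwf(img, win=3):
    """Median-modified Wiener sketch: local median replaces the local mean."""
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    # stack every window offset so median/variance are computed per pixel
    stack = np.stack([p[i:i + H, j:j + W] for i in range(win) for j in range(win)])
    med = np.median(stack, axis=0)
    var = stack.var(axis=0)
    noise_var = var.mean()                       # classic Wiener noise estimate
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return med + gain * (img - med)

rng = np.random.default_rng(0)
clean = np.add.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20)) / 2
noisy = clean + 0.1 * rng.normal(size=clean.shape)
den = mmwf(noisy)
```

    Where the local variance is near the noise level the gain vanishes and the output is the robust median; where local variance is high (real structure) the gain approaches 1 and the pixel is left nearly untouched.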

  11. Generalized non-local means filtering for image denoising

    NASA Astrophysics Data System (ADS)

    Dolui, Sudipto; Salgado Patarroyo, Iván. C.; Michailovich, Oleg V.

    2014-02-01

    Non-local means (NLM) filtering has been shown to outperform alternative denoising methodologies under the model of additive white Gaussian noise contamination. Recently, several theoretical frameworks have been developed to extend this class of algorithms to more general types of noise statistics. However, many of these frameworks are specifically designed for a single noise contamination model, and are far from optimal across varying noise statistics. The NLM filtering techniques rely on the definition of a similarity measure, which quantifies the similarity of two neighbourhoods along with their respective centroids. The key to the unification of the NLM filter for different noise statistics lies in the definition of a universal similarity measure which is guaranteed to provide favourable performance irrespective of the statistics of the noise. Accordingly, the main contribution of this work is to provide a rigorous statistical framework to derive such a universal similarity measure, while highlighting some of its theoretical and practical favourable characteristics. Additionally, the closed form expressions of the proposed similarity measure are provided for a number of important noise scenarios and the practical utility of the proposed similarity measure is demonstrated through numerical experiments.

  12. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time. PMID:26405887

  13. The effective image denoising method for MEMS based IR image arrays

    NASA Astrophysics Data System (ADS)

    Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin; Liu, Ming; Hui, Mei; Zhou, Xiaoxiao

    2008-12-01

    MEMS have become viable systems for uncooled infrared imaging in recent years. They offer advantages of simplicity, low cost, and scalability to high-resolution FPAs without a prohibitive increase in cost. An uncooled thermal detector array with low NETD was designed and fabricated using MEMS bimaterial microcantilever structures that bend in response to thermal change, and the IR images of objects obtained by these FPAs are read out by an optical method. The IR images are processed by a sparse-representation-based denoising and inpainting algorithm that generalizes the K-means clustering process to adapt dictionaries for sparse signal representations, and the processed image quality is noticeably improved. Extensive computation and analysis were carried out by applying the algorithm to simulated data and in applications on real data. The experimental results demonstrate that better RMSE and higher peak signal-to-noise ratio (PSNR) can be obtained compared with traditional methods. Finally, we discuss the factors that determine the ultimate performance of the FPA and note that one of the unique advantages of the present approach is its scalability to larger imaging arrays.

  14. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    SciTech Connect

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR).Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT.Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. 
When all projections are used to reconstruct a 3D-CBCT by FDK, motion

  15. Adaptive 4D MR Imaging Using Navigator-Based Respiratory Signal for MRI-Guided Therapy

    PubMed Central

    Tokuda, Junichi; Morikawa, Shigehiro; Haque, Hasnine A.; Tsukamoto, Tetsuji; Matsumiya, Kiyoshi; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi

    2010-01-01

    For real-time 3D visualization of respiratory organ motion for MRI-guided therapy, a new adaptive 4D MR imaging method based on navigator echo and multiple gating windows was developed. This method was designed to acquire a time series of volumetric 3D images of a cyclically moving organ, enabling therapy to be guided by synchronizing the 4D image with the actual organ motion in real time. The proposed method was implemented in an open-configuration 0.5T clinical MR scanner. To evaluate the feasibility and determine optimal imaging conditions, studies were conducted with a phantom, volunteers, and a patient. In the phantom study the root mean square (RMS) position error in the 4D image of the cyclically moving phantom was 1.9 mm and the imaging time was ≈10 min when the 4D image had six frames. In the patient study, 4D images were successfully acquired under clinical conditions and a liver tumor was discriminated in the series of frames. The image quality was affected by the relations among the encoding direction, the slice orientation, and the direction of motion of the target organ. In conclusion, this study has shown that the proposed method is feasible and capable of providing a real-time dynamic 3D atlas for surgical navigation with sufficient accuracy and image quality. PMID:18429011

  16. Impact of 4D image quality on the accuracy of target definition.

    PubMed

    Nielsen, Tine Bjørn; Hansen, Christian Rønn; Westberg, Jonas; Hansen, Olfred; Brink, Carsten

    2016-03-01

    Delineation accuracy of target shape and position depends on image quality. This study investigates whether the image quality of standard 4D systems has an influence comparable to the overall delineation uncertainty. A moving lung target was imaged using a dynamic thorax phantom on three different 4D computed tomography (CT) systems and a 4D cone beam CT (CBCT) system using pre-defined clinical scanning protocols. Peak-to-peak motion and target volume were registered using rigid registration and automatic delineation, respectively. A spatial distribution of the imaging uncertainty was calculated as the distance deviation between the imaged target and the true target shape. The measured motions were smaller than the actual motions, and the imaged target volume differed between respiration phases. Imaging uncertainties of >0.4 cm were measured in the motion direction, which showed that there was a large distortion of the imaged target shape. Imaging uncertainties of standard 4D systems are of similar size to typical GTV-CTV expansions (0.5-1 cm) and contribute considerably to the target definition uncertainty. Optimising and validating 4D systems is recommended in order to obtain the optimal imaged target shape. PMID:26577711

  17. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models remains open to improvement. We examine the statistical properties of NSTs using three image databases. We show that the Gaussian distribution is suitable for many NSTs, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NSTs is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms. PMID:27045423
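
    As a sketch of the regularization component only: the anisotropic-diffusion step can be illustrated with a classical Perona-Malik scheme in plain NumPy. The patch-based fBm model of the paper is omitted, and the function name and the conductance parameter `kappa` below are illustrative choices, not taken from the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooths homogeneous areas, preserves edges.
    (Illustrative sketch; parameters are not from the cited paper.)"""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic boundary)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance: ~1 in flat regions, ~0 across strong edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    Applied to a noisy piecewise-constant image, the flat regions are smoothed while the step edge survives, because the conductance vanishes where the local gradient is large.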

  18. Enhanced optical coherence tomography imaging using a histogram-based denoising algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Keo-Sik; Park, Hyoung-Jun; Kang, Hyun Seo

    2015-11-01

    A histogram-based denoising algorithm was developed to effectively reduce ghost artifact noise and enhance the quality of an optical coherence tomography (OCT) imaging system used to guide surgical instruments. The noise signal is iteratively detected by comparing the histogram of the ensemble average of all A-scans, and the ghost artifacts included in the noisy signal are removed separately from the raw signals using a polynomial curve-fitting method. The devised algorithm was simulated on various noisy OCT images, and >87% of the ghost artifact noise was removed regardless of its location. Our results show the feasibility of selectively and effectively removing ghost artifact noise.
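
    The polynomial curve-fitting idea can be sketched as a baseline-removal step on a single A-scan. The polynomial degree and function names below are assumptions for illustration; the paper's histogram-based detection stage is not reproduced.

```python
import numpy as np

def remove_ghost_baseline(a_scan, degree=3):
    """Fit a low-order polynomial to an A-scan and subtract it,
    separating the slowly varying ghost component from sharp features.
    (Illustrative sketch; degree is an assumed parameter.)"""
    x = np.arange(a_scan.size)
    coeffs = np.polyfit(x, a_scan, degree)       # least-squares fit
    baseline = np.polyval(coeffs, x)             # smooth ghost estimate
    return a_scan - baseline, baseline
```

    A sharp reflection riding on a slowly varying ghost survives the subtraction, while the ghost itself is absorbed into the fitted baseline.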

  19. Image denoising with 2D scale-mixing complex wavelet transforms.

    PubMed

    Remenyi, Norbert; Nicolis, Orietta; Nason, Guy; Vidakovic, Brani

    2014-12-01

    This paper introduces an image denoising procedure based on a 2D scale-mixing complex-valued wavelet transform. Both the minimal (unitary) and redundant (maximum overlap) versions of the transform are used. The covariance structure of white noise in the wavelet domain is established. Estimation is performed via empirical Bayesian techniques, including versions that preserve the phase of the complex-valued wavelet coefficients and those that do not. The new procedure exhibits excellent quantitative and visual performance, which is demonstrated by simulation on standard test images. PMID:25312931

  20. Real-time wavelet denoising with edge enhancement for medical x-ray imaging

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong; Osypiw, David; Hudson, Chris

    2006-02-01

    X-ray images visualized in real time play an important role in clinical applications. Real-time system design requires that images with the highest perceptual quality be acquired while minimizing the x-ray dose to the patient, which can result in severe noise that must be reduced. Approaches based on the wavelet transform have been widely used for noise reduction. However, by removing noise, high-frequency components belonging to edges that hold important structural information are also removed, which blurs image features. This paper presents a new method of x-ray image denoising based on fast lifting wavelet thresholding for general noise reduction and spatial filtering for further denoising, using a derivative model to preserve edges. General denoising is achieved by estimating the level of the contaminating noise and employing an adaptive thresholding scheme with variance analysis. The soft thresholding scheme removes the overall noise, including noise attached to edges. A new edge identification method, based on an approximation of the spatial gradient at each pixel location, is developed together with a spatial filter that smooths noise in homogeneous areas while preserving important structures. Fine noise reduction is applied only to the non-edge parts, so that edges are preserved and enhanced. Experimental results demonstrate that the method performs well both visually and in terms of quantitative performance measures for clinical x-ray images contaminated by natural and artificial noise. The proposed algorithm, with fast computation and low complexity, provides a potential solution for real-time applications.
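
    The general denoising step (noise-level estimation plus soft thresholding) can be sketched with a one-level Haar transform in pure NumPy. The lifting implementation, variance analysis, and edge-preserving spatial filter of the paper are omitted; the MAD noise estimate and universal threshold used here are standard textbook choices, not necessarily the authors' rule.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(sig):
    """One-level Haar wavelet soft-threshold denoising of a 1D signal
    (even length). Threshold = universal rule with MAD noise estimate."""
    s = np.asarray(sig, float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)      # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745     # robust noise estimate (MAD)
    t = sigma * np.sqrt(2 * np.log(s.size))   # universal threshold
    d = soft(d, t)
    out = np.empty_like(s)                    # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

    On a noisy constant signal the detail band is almost entirely suppressed, roughly halving the residual noise energy after one level.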

  1. A non-gradient-based energy minimization approach to the image denoising problem

    NASA Astrophysics Data System (ADS)

    Lukić, Tibor; Žunić, Joviša

    2014-09-01

    A common approach to denoising images is to minimize an energy function combining a quadratic data fidelity term with a total variation-based regularization. The total variation, built on the gradient magnitude function, originally comes from mathematical analysis and is defined on a continuous domain only. When working in a discrete domain (e.g. when dealing with digital images), the accuracy of the gradient computation is limited by the applied image resolution. In this paper we propose a new approach in which the gradient magnitude function is replaced with an operator with similar properties (i.e. it also expresses the intensity variation in a neighborhood of the considered point) that is applicable in both continuous and discrete space. This operator is the shape elongation measure, one of the shape descriptors intensively used in shape-based image processing and computer vision tasks. The experiments provided in this paper confirm the capability of the proposed approach to provide high-quality reconstructions. A performance comparison on a number of test images shows that the new method outperforms the energy minimization-based denoising methods often used in the literature for method comparison.
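
    The baseline formulation the paper modifies (quadratic fidelity plus total variation) can be sketched by gradient descent on a smoothed TV energy. The shape elongation operator itself is not reproduced here; `lam`, `eps`, and the step size are illustrative values.

```python
import numpy as np

def tv_denoise(f, lam=0.1, n_iter=300, step=0.05, eps=1e-2):
    """Minimize E(u) = 0.5*||u - f||^2 + lam * TV_eps(u) by gradient
    descent, where TV_eps uses sqrt(|grad u|^2 + eps) for smoothness."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                 # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag                     # normalized gradient
        # divergence via backward differences (adjoint of forward diff)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (u - f) - lam * div                      # energy gradient
        u -= step * grad
    return u
```

    On a noisy step image the residual error drops, since TV penalizes oscillation much more than a single sharp edge.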

  2. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    PubMed Central

    Hou, Wenguang; Zhang, Xuming; Ding, Mingyue

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphic-processor-unit- (GPU-) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm. PMID:24348747
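
    A brute-force CPU reference of the classical NLM filter, the algorithm such GPU implementations parallelize, can be sketched as below. This uses the standard Gaussian patch weighting; the paper's Gamma-model Bayesian variant and the GPU kernel itself are omitted, and `h`, `patch`, and `search` are illustrative parameters.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.15):
    """Brute-force non-local means for a small 2D image: each pixel is a
    weighted mean of search-window pixels, weighted by patch similarity."""
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, vals = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    vals.append(pad[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, vals) / w.sum()
    return out
```

    The four nested loops are exactly what a GPU version maps onto threads; each output pixel is independent, which is why the filter parallelizes so well.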

  3. Entropy-based straight kernel filter for echocardiography image denoising.

    PubMed

    Rajalaxmi, S; Nirmala, S

    2014-10-01

    A new filter has been proposed with the aim of eliminating speckle noise from 2D echocardiography images. This speckle noise has to be eliminated to avoid false interpretation of the underlying anatomy. The proposed filter uses an entropy parameter to measure the disorganized occurrence of noise pixels in each row and column and to increase the image visibility. Straight kernels with 3 pixels each are chosen for the filtering process, and the filter is slid over the image to eliminate speckle. The peak signal-to-noise ratio (PSNR) obtained is in the range of 147 dB, and the root mean square error (RMSE) is very low, approximately 0.15. The proposed filter was implemented on 36 echocardiography images, and it is able to reveal the actual anatomical structures without degrading the edges. PMID:24838117

  4. Modeling diffusion-weighted MRI as a spatially variant Gaussian mixture: Application to image denoising

    PubMed Central

    Gonzalez, Juan Eugenio Iglesias; Thompson, Paul M.; Zhao, Aishan; Tu, Zhuowen

    2011-01-01

    Purpose: This work describes a spatially variant mixture model constrained by a Markov random field to model high angular resolution diffusion imaging (HARDI) data. Mixture models suit HARDI well because the attenuation by diffusion is inherently a mixture. The goal is to create a general model that can be used in different applications. This study focuses on image denoising and segmentation (primarily the former). Methods: HARDI signal attenuation data are used to train a Gaussian mixture model in which the mean vectors and covariance matrices are assumed to be independent of spatial locations, whereas the mixture weights are allowed to vary at different lattice positions. Spatial smoothness of the data is ensured by imposing a Markov random field prior on the mixture weights. The model is trained in an unsupervised fashion using the expectation maximization algorithm. The number of mixture components is determined using the minimum message length criterion from information theory. Once the model has been trained, it can be fitted to a noisy diffusion MRI volume by maximizing the posterior probability of the underlying noiseless data in a Bayesian framework, recovering a denoised version of the image. Moreover, the fitted probability maps of the mixture components can be used as features for posterior image segmentation. Results: The model-based denoising algorithm proposed here was compared on real data with three other approaches that are commonly used in the literature: Gaussian filtering, anisotropic diffusion, and Rician-adapted nonlocal means. The comparison shows that, at low signal-to-noise ratio, when these methods falter, our algorithm considerably outperforms them. When tractography is performed on the model-fitted data rather than on the noisy measurements, the quality of the output improves substantially. Finally, ventricle and caudate nucleus segmentation experiments also show the potential usefulness of the mixture probability maps for
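
    The core fitting step, expectation-maximization for a Gaussian mixture, can be sketched in 1D as below. The spatially variant weights, the MRF prior, and the minimum-message-length model selection of the paper are omitted, and the quantile-based initialization is an illustrative choice.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """Fit a k-component 1D Gaussian mixture by expectation-maximization.
    Returns (weights, means, variances)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] = P(component j | x_n)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

    In the paper's setting the responsibilities become spatially varying weight maps coupled through the MRF prior; the HARDI signal plays the role of `x`, and the fitted posterior is what drives both denoising and segmentation.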

  5. Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise.

    PubMed

    Zhang, Yi; Hirakawa, Keigo

    2016-09-01

    Low-light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of a spatially varying blur kernel as well as a denoised and deblurred image from a single noisy, object-motion-blurred image. The proposed method takes advantage of the sparse representation of the double discrete wavelet transform (a generative model of image blur that simplifies the wavelet analysis of a blurred image) and a Bayesian perspective that models the prior distribution of the latent sharp wavelet coefficients and a likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderately noisy and severely blurred images using simulated and real camera data. PMID:27337717

  6. Experimental and theoretical analysis of wavelet-based denoising filter for echocardiographic images.

    PubMed

    Kang, S C; Hong, S H

    2001-01-01

    One of the most important goals in diagnostic echocardiography is to reduce speckle noise and thereby improve image quality. In this paper we propose a simple and effective filter design for image denoising and contrast enhancement based on a multiscale wavelet denoising method. Wavelet threshold algorithms replace wavelet coefficients of small magnitude with zero and keep or shrink the other coefficients. This is basically a local procedure, since wavelet coefficients characterize the local regularity of a function. We first estimate the distribution of noise within the echocardiographic image and then apply a fitted wavelet threshold algorithm. A common way of estimating the speckle noise level in coherent imaging is to calculate the mean-to-standard-deviation ratio of the pixel intensity, often termed the Equivalent Number of Looks (ENL), over a uniform image area. Unfortunately, we found this measure not very robust, mainly because of the difficulty of identifying a uniform area in a real image. For this reason, we use only the S/MSE ratio, which corresponds to the standard SNR in the case of additive noise. We have simulated some echocardiographic images with specialized hardware for real-time application; processing a 512×512 image takes about 1 min. Our experiments show that the optimal threshold level depends on the spectral content of the image. High spectral content tends to over-estimate the noise standard deviation estimated at the finest level of the DWT. As a result, a lower threshold parameter is required to reach the optimal S/MSE. The standard WCS theory predicts a threshold that depends on the number of signal samples only. PMID:11604864
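
    The two quality measures discussed, ENL over a uniform area and the S/MSE ratio, can be sketched as follows. Note the abstract describes ENL as the mean-to-standard-deviation ratio itself; the squared form used below is the common radar/coherent-imaging convention, flagged here as an assumption.

```python
import numpy as np

def enl(region):
    """Equivalent Number of Looks over a uniform area: (mean/std)^2.
    (Squared convention assumed; the abstract cites the plain ratio.)"""
    return (region.mean() / region.std()) ** 2

def s_mse(clean, denoised):
    """Signal-to-MSE ratio in dB, the standard SNR for additive noise."""
    err = np.mean((clean - denoised) ** 2)
    return 10 * np.log10(np.mean(clean ** 2) / err)
```

    ENL rises as a uniform region gets smoother, which is why it is only meaningful when a genuinely uniform area can be found, the robustness problem the abstract points out.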

  7. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed, focusing on the implementation and the enhancement of the filter's parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. PMID:27084318

  8. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-free signal values using the observed MR data samples within local neighborhoods. This is not efficient, since 3D MR data intrinsically include many similar samples that can be used to improve the estimation. To overcome this problem, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To identify similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are chosen automatically from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing noise and preserving the anatomical structures of MR images. PMID:23668996
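
    A local closed-form LMMSE step in the style this method builds on (using the Rician moment relation E[M^2] = A^2 + 2*sigma^2) can be sketched as below. The paper's nonlocal sample selection, moment-based similarity measure, and recursion are omitted; the sliding-window statistics and window size are an illustrative simplification.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rician_lmmse(m, sigma, win=5):
    """Local LMMSE estimate of the noise-free signal A from Rician
    magnitude data m: A^2_hat = <M^2> - 2s^2 + K*(M^2 - <M^2>), with
    gain K = 1 - 4s^2(<M^2> - s^2)/var(M^2), clipped to [0, 1]."""
    pad = win // 2
    m2 = np.pad(m.astype(float) ** 2, pad, mode='reflect')
    w = sliding_window_view(m2, (win, win))
    local_mean = w.mean(axis=(-2, -1))          # <M^2> per pixel
    local_var = w.var(axis=(-2, -1))            # var(M^2) per pixel
    k = np.clip(1 - (4 * sigma ** 2 * (local_mean - sigma ** 2))
                / np.maximum(local_var, 1e-12), 0, 1)
    a2 = local_mean - 2 * sigma ** 2 + k * (m.astype(float) ** 2 - local_mean)
    return np.sqrt(np.maximum(a2, 0))
```

    In homogeneous regions the gain K collapses toward zero and the estimator reduces to the bias-corrected local mean, which is where most of the noise reduction comes from.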

  9. MMW and THz images denoising based on adaptive CBM3D

    NASA Astrophysics Data System (ADS)

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation has received a great deal of interest due to advances in emission and detection technologies, which have enabled the wide application of millimeter wave and terahertz imaging. This paper focuses on the stripe noise, blocking artifacts, and other interference present in such images. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is used to denoise. Experimental results demonstrate that the method improves the visual quality and removes interference at the same time, making image analysis and target detection easier.

  10. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet as the one that yields the segmentation with the largest cell area. We study different wavelet families and conclude that the wavelet db1 is the best; it can serve for future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images. PMID:23458301
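
    The morphological side of such a pipeline can be sketched with a minimal binary opening in NumPy (the wavelet step is omitted; the 3x3 square structuring element is an illustrative choice, and the abstract's MATLAB implementation is not reproduced).

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    r = k // 2
    pad = np.pad(mask.astype(bool), r)            # outside treated as False
    out = np.zeros(mask.shape, dtype=bool)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out |= pad[r + di:r + di + mask.shape[0],
                       r + dj:r + dj + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion: keep pixels whose whole k x k neighborhood is set."""
    r = k // 2
    pad = np.pad(mask.astype(bool), r, constant_values=True)
    out = np.ones(mask.shape, dtype=bool)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out &= pad[r + di:r + di + mask.shape[0],
                       r + dj:r + dj + mask.shape[1]]
    return out

def opening(mask, k=3):
    """Erosion followed by dilation: removes objects smaller than the
    structuring element while roughly preserving larger ones."""
    return dilate(erode(mask, k), k)
```

    Opening a thresholded cell mask removes isolated noise pixels left after wavelet denoising while keeping cell-sized blobs, which is the role morphology plays in the segmentation step.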

  11. Geometric moment based nonlocal-means filter for ultrasound image denoising

    NASA Astrophysics Data System (ADS)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, so despeckling is an important processing step. The original nonlocal means (NLM) filter can remove speckle noise and protect texture information effectively when the image corruption degree is relatively low, but when the noise in the image is strong, NLM produces fictitious texture information, which degrades its denoising performance. In this paper, a novel nonlocal means filter is proposed that introduces geometric moments into the NLM framework. Though geometric moments are not orthogonal moments, they are popular for their simplicity, and their restoration ability had not previously been established. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.
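
    The raw geometric moments of a patch, the quantities that replace raw pixel differences in the similarity computation of a moment-based NLM, can be sketched as below; the paper's weighting scheme is omitted and the moment order is an illustrative choice.

```python
import numpy as np

def geometric_moments(patch, order=2):
    """Raw geometric moments m_pq = sum_x sum_y x^p * y^q * I(x, y)
    for all p + q <= order, returned as a flat feature vector."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    return np.array([(patch * x ** p * y ** q).sum()
                     for p in range(order + 1)
                     for q in range(order + 1 - p)])
```

    Comparing low-order moment vectors instead of full patches makes the similarity measure less sensitive to per-pixel speckle, at the cost of discarding fine spatial detail.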

  12. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10–40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
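
    The reconstruction backbone of such methods, the (OS)EM multiplicative update, can be sketched without the TV term, ordered subsets, or motion warping; the tiny dense system matrix below is purely illustrative.

```python
import numpy as np

def mlem(A, proj, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission tomography:
    x <- x * A^T(proj / Ax) / A^T 1.  OSEM applies the same update over
    ordered subsets of the projections to accelerate convergence."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = proj / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    The update is multiplicative, so a nonnegative initial image stays nonnegative, and for consistent (noiseless) data the true activity is a fixed point of the iteration.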

  14. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is introduced to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase

  15. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
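
    The Lucy–Richardson update at the heart of the method can be sketched in 1D with NumPy; the wavelet denoising step and the integration into list-mode OSEM are omitted, and the PSF and iteration count are illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30):
    """1D Richardson-Lucy deconvolution:
    x <- x * conv(blurred / conv(x, psf), flipped psf)."""
    x = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_m = psf[::-1]                           # mirrored PSF (adjoint)
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode='same')
        ratio = blurred / np.maximum(est, 1e-12)
        x *= np.convolve(ratio, psf_m, mode='same')
    return x
```

    Like EM reconstruction, the update is multiplicative, preserving nonnegativity, which is why it nests naturally inside an OSEM iteration; unregularized, it also amplifies noise, motivating the paper's coupled denoising step.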

  16. Research on infrared-image denoising algorithm based on the noise analysis of the detector

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Zhou, Xiaodong; Shen, Tongsheng; Han, Yanli

    2005-01-01

    Since conventional denoising algorithms do not consider the characteristics of a specific detector, they are not very effective at removing the various noises contained in low signal-to-noise ratio infrared images. In this paper, a new approach to infrared image denoising is proposed, based on noise analysis of the detector, using an L-model infrared multi-element detector as an example. According to the noise analysis of this detector, the emphasis is placed on how to filter white noise and fractal noise in the preprocessing phase. Wavelet analysis is a good tool for analyzing 1/f processes: a 1/f process can be viewed approximately as white noise, since its wavelet coefficients are stationary and uncorrelated. So if the wavelet transform is adopted, the problem of removing white noise and fractal noise is reduced to the single problem of removing white noise. To address this problem, a new wavelet-domain adaptive Wiener filtering algorithm is presented. From both quantitative and qualitative viewpoints, the filtering effect of our method is compared in detail with those of the traditional median filter, mean filter, and wavelet thresholding algorithm. The results show that our method can reduce various noises effectively and clearly raise the signal-to-noise ratio.

  17. Feature Guided Motion Artifact Reduction with Structure-Awareness in 4D CT Images

    PubMed Central

    Han, Dongfeng; Bayouth, John; Song, Qi; Bhatia, Sudershan; Sonka, Milan; Wu, Xiaodong

    2011-01-01

    In this paper, we propose a novel method to reduce the magnitude of 4D CT artifacts by stitching two images with a data-driven regularization constraint, which helps preserve local anatomical structures. Our method first computes an interface seam for the stitching in the overlapping region of the first image, which passes through the “smoothest” region, to reduce the structure complexity along the stitching interface. Then, we compute the displacements of the seam by matching the corresponding interface seam in the second image. We use sparse 3D features as structure cues to guide the seam matching, in which a regularization term is incorporated to keep the structure consistent. The energy function is minimized by solving a multiple-label problem in Markov Random Fields with an anatomical structure preserving regularization term. The displacements are propagated to the rest of the second image, and the two images are stitched along the interface seams based on the computed displacement field. The method was tested on both simulated data and clinical 4D CT images. The experiments on simulated data demonstrated that the proposed method was able to reduce the landmark distance error on average from 2.9 mm to 1.3 mm, outperforming the registration-based method by about 55%. For clinical 4D CT image data, the image quality was evaluated by three medical experts, and all identified far fewer artifacts in the images produced by our method than in those produced by the compared method. PMID:22058647

  18. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging, but they have different constraints and requirements. For both modalities, prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of the chest wall, diaphragm, and/or spine, so the patient cooperation required by some gating and tracking techniques is difficult to obtain without causing discomfort. Moreover, we are interested in the mechanical function of the thorax in its natural form during tidal breathing. Free-breathing MRI acquisition is therefore the ideal imaging approach for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This typically produces several thousand slices containing both anatomic and dynamic information; however, it is not trivial to form a consistent and well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and needs neither breath holding nor any external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.

  19. Gaussian mixture model-based gradient field reconstruction for infrared image detail enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng

    2016-05-01

    Infrared images are characterized by low signal-to-noise ratio and low contrast; edge details are therefore easily submerged in the background and noise, making infrared image edge-detail enhancement and denoising difficult. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is fitted to the distribution of the histogram, dividing the image information into three parts corresponding to faint details, noise, and the edges of clear targets, respectively. Then, a piecewise function is constructed based on the characteristics of the image to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added when reconstructing the enhanced image from the transformed gradient field, to further suppress noise. The experimental results show that, compared with existing methods, the proposed method effectively enhances infrared image edge details while also suppressing noise. In addition, it can be used to enhance other types of images, such as visible-light and medical images.
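The piecewise gradient-remapping step might be sketched as below. The thresholds, gain factors, and the assumption that the smallest gradient magnitudes correspond to the noise range are illustrative, since the abstract does not give the actual function:

```python
import numpy as np

def gradient_gain(gmag, t_noise, t_detail, boost=2.0, suppress=0.3):
    """Piecewise remapping of gradient magnitudes: suppress the smallest
    gradients (treated as noise here), boost the mid-range (faint
    details), and pass strong edges through unchanged."""
    gain = np.ones_like(gmag, dtype=float)
    gain[gmag < t_noise] = suppress
    gain[(gmag >= t_noise) & (gmag < t_detail)] = boost
    return gain * gmag
```

The remapped gradient field would then be integrated back into an image, with the anisotropic diffusion constraint applied during that reconstruction as the abstract describes.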

  20. 4D rotational x-ray imaging of wrist joint dynamic motion

    SciTech Connect

    Carelsen, Bart; Bakker, Niels H.; Strackee, Simon D.; Boon, Sjirk N.; Maas, Mario; Sabczynski, Joerg; Grimbergen, Cornelis A.; Streekstra, Geert J.

    2005-09-15

    Current methods for imaging joint motion are limited either to two-dimensional (2D) video fluoroscopy or to animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images, but this involves several x-ray modalities and sophisticated 2D-to-3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized to the x-ray acquisition to yield multiple sets of projection images, which are reconstructed into a series of time-resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To investigate image quality, the full width at half maximum (FWHM) of the point spread function (PSF), obtained via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were determined on reconstructions of a bullet-and-rod phantom, using both 4D-RX and stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation and on 41 and 34 projection images of a moving phantom was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was, respectively, 1.1, 1.7, and 2.2 mm orthogonal to the motion and 0.6, 0.7, and 1.0 mm parallel to the direction of motion. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used, not to the motion of the object. Using 41 projection images appears to be the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints.
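The two quality metrics above can be computed roughly as follows; this is a sketch, and the exact ROI definitions and ESF fitting used by the authors are not given in the abstract:

```python
import numpy as np

def cnr(obj, bg):
    """Contrast-to-noise ratio between two regions of interest."""
    return abs(obj.mean() - bg.mean()) / bg.std()

def fwhm_from_esf(esf, dx=1.0):
    """FWHM of the PSF estimated from an edge spread function:
    differentiate the ESF to get the line spread function, then
    count the samples at or above half of its maximum."""
    lsf = np.abs(np.diff(esf))
    above = np.where(lsf >= lsf.max() / 2.0)[0]
    return (above[-1] - above[0] + 1) * dx
```

In practice the LSF would usually be fitted (e.g. to a Gaussian) before measuring its width; counting half-maximum samples is the simplest discrete approximation.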

  1. Population of anatomically variable 4D XCAT adult phantoms for imaging research and optimization

    SciTech Connect

    Segars, W. P.; Bond, Jason; Frush, Jack; Hon, Sylvia; Eckersley, Chris; Samei, E.; Williams, Cameron H.; Frush, D.; Feng, Jianqiao; Tward, Daniel J.; Ratnanather, J. T.; Miller, M. I.

    2013-04-15

    Purpose: The authors previously developed the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. The XCAT consisted of highly detailed whole-body models for the standard male and female adult, including the cardiac and respiratory motions. In this work, the authors extend the XCAT beyond these reference anatomies by developing a series of anatomically variable 4D XCAT adult phantoms for imaging research, the first library of 4D computational phantoms. Methods: The initial anatomy of each phantom was based on chest-abdomen-pelvis computed tomography data from normal patients obtained from the Duke University database. The major organs and structures for each phantom were segmented from the corresponding data and defined using nonuniform rational B-spline surfaces. To complete the body, the authors manually added on the head, arms, and legs using the original XCAT adult male and female anatomies. The structures were scaled to best match the age and anatomy of the patient. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from the template XCAT phantom (male or female) to the target patient model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. Each new phantom was refined by checking for anatomical accuracy via inspection of the models. Results: Using these methods, the authors created a series of computerized phantoms with thousands of anatomical structures and modeling cardiac and respiratory motions. The database consists of 58 (35 male and 23 female) anatomically variable phantoms in total. Like the original XCAT, these phantoms can be combined with existing simulation packages to simulate realistic imaging data. Each new phantom contains parameterized models for the anatomy and the cardiac and respiratory motions and can, therefore, serve

  2. Population of anatomically variable 4D XCAT adult phantoms for imaging research and optimization

    PubMed Central

    Segars, W. P.; Bond, Jason; Frush, Jack; Hon, Sylvia; Eckersley, Chris; Williams, Cameron H.; Feng, Jianqiao; Tward, Daniel J.; Ratnanather, J. T.; Miller, M. I.; Frush, D.; Samei, E.

    2013-01-01

    Purpose: The authors previously developed the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. The XCAT consisted of highly detailed whole-body models for the standard male and female adult, including the cardiac and respiratory motions. In this work, the authors extend the XCAT beyond these reference anatomies by developing a series of anatomically variable 4D XCAT adult phantoms for imaging research, the first library of 4D computational phantoms. Methods: The initial anatomy of each phantom was based on chest–abdomen–pelvis computed tomography data from normal patients obtained from the Duke University database. The major organs and structures for each phantom were segmented from the corresponding data and defined using nonuniform rational B-spline surfaces. To complete the body, the authors manually added on the head, arms, and legs using the original XCAT adult male and female anatomies. The structures were scaled to best match the age and anatomy of the patient. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from the template XCAT phantom (male or female) to the target patient model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. Each new phantom was refined by checking for anatomical accuracy via inspection of the models. Results: Using these methods, the authors created a series of computerized phantoms with thousands of anatomical structures and modeling cardiac and respiratory motions. The database consists of 58 (35 male and 23 female) anatomically variable phantoms in total. Like the original XCAT, these phantoms can be combined with existing simulation packages to simulate realistic imaging data. Each new phantom contains parameterized models for the anatomy and the cardiac and respiratory motions and can, therefore

  3. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserved enhancement is of great interest in medical images. Noise present in medical images affects quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images are fused adaptively: one processed with the TV method, one with shearlet denoising, and one with edge information recovered from the remnant of the TV method and processed with the ST. The enhanced images produced by the proposed method help improve the visibility and detectability of details in medical images. For the proposed method, different weights are evaluated from the variance maps of each individual denoised image and from the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details compared to the other methods.
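The adaptive variance-map weighting can be sketched as a pixel-wise weighted sum, with weights proportional to each candidate image's local variance; the window size, the plain box window, and the normalization are assumptions not stated in the abstract:

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance over a k-by-k window (plain loop for clarity)."""
    pad = k // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].var()
    return out

def adaptive_fuse(images, k=3):
    """Pixel-wise weighted sum of candidate denoised images, with
    weights proportional to each image's local variance map."""
    vmaps = np.stack([local_variance(im, k) for im in images])
    w = vmaps / (vmaps.sum(axis=0, keepdims=True) + 1e-12)
    return (w * np.stack(images)).sum(axis=0)
```

Weighting by local variance favors, at each pixel, the candidate that retained the most local detail, which matches the fusion rationale the abstract describes.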

  4. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    NASA Astrophysics Data System (ADS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the electron beam and the frozen hydrated biological samples when the specimen is exposed to radiation at high exposure times. This sensitivity to the electron beam has led specialists to acquire specimen projection images at very low exposure times, which introduces a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s, and 1 s, i.e., with different SNR values) and equipped with gold beads to aid the assessment step. We propose a structure for combining multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate a parameter for each coefficient. To ensure a high signal-to-noise ratio, the appropriate wavelet family must be used at the appropriate level; we chose the "sym8" wavelet at level 3 as the most appropriate setting. For the bilateral filter, many tests were run to determine the proper filter parameters, namely the size of the filter, the range parameter and the

  5. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the electron beam and the frozen hydrated biological samples when the specimen is exposed to radiation at high exposure times. This sensitivity to the electron beam has led specialists to acquire specimen projection images at very low exposure times, which introduces a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s, and 1 s, i.e., with different SNR values) and equipped with gold beads to aid the assessment step. We propose a structure for combining multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate a parameter for each coefficient. To ensure a high signal-to-noise ratio, the appropriate wavelet family must be used at the appropriate level; we chose the "sym8" wavelet at level 3 as the most appropriate setting. For the bilateral filter, many tests were run to determine the proper filter parameters, namely the size of the filter, the range parameter and the

  6. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  7. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capability to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever before; on the other hand, the momentous advances in computer and associated electronic imaging technology that made these 3-D imaging capabilities possible have not been concomitantly developed for their full exploitation. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation in which all the information in a large 3-D image database is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  8. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained.

  9. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained. PMID:23117804

  10. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained. PMID:23117804

  11. Robust segmentation of 4D cardiac MRI-tagged images via spatio-temporal propagation

    NASA Astrophysics Data System (ADS)

    Qian, Zhen; Huang, Xiaolei; Metaxas, Dimitris N.; Axel, Leon

    2005-04-01

    In this paper we present a robust method for segmenting and tracking cardiac contours and tags in 4D tagged cardiac MRI images via spatio-temporal propagation. Our method is based on two main techniques: Metamorphs segmentation for robust boundary estimation, and a tunable Gabor filter bank for tag-line enhancement and removal and for myocardium tracking. We have developed a prototype system integrating these two techniques and achieved efficient, robust segmentation and tracking with minimal human interaction.

  12. Denoising of B{sub 1}{sup +} field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from noisy B{sub 1}{sup +} maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B{sub 1}{sup +} maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B{sub 1}{sup +} maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T, along with corresponding EPT simulations on finite-difference time-domain models, and evaluated the EPT images by comparing them with those obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved EPT image quality as measured by the mean and standard deviation of the electrical property values in the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled the authors to obtain better-quality EPT images of the phantoms and the human brain at 3 T.
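The reconstruction step can be illustrated with the standard homogeneous-Helmholtz EPT relation, sigma = Im(lap(B1+)/B1+)/(omega*mu0), computed with central differences. This sketch shows only the Laplacian step under a local-homogeneity assumption; the adaptive nonlinear denoising filter applied beforehand is omitted, and the function names are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def laplacian2d(f, dx):
    """Central-difference Laplacian on the interior of a 2D map."""
    return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1]) / dx ** 2

def ept_conductivity(b1, dx, omega):
    """Conductivity from a complex B1+ map via the homogeneous
    Helmholtz relation sigma = Im(lap(B1)/B1) / (omega * mu0)."""
    lap = laplacian2d(b1, dx)
    return np.imag(lap / b1[1:-1, 1:-1]) / (omega * MU0)
```

The division by the noisy B1+ map inside the Laplacian ratio is exactly where noise is amplified, which is why the paper denoises the complex map before this step.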

  13. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time, used to represent a deforming or moving object as in virtual surgery or 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading volumes is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory. If the brick passes the test, it is defined as a 3D texture through OpenGL functions. The texture slices of the brick are then mapped onto polygons and blended with OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
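The brick-similarity test can be sketched as follows, using mean absolute difference as an assumed similarity measure (the abstract does not specify the test actually used); only bricks flagged as changed would need to be re-uploaded as 3D textures for the next frame:

```python
import numpy as np

def changed_bricks(prev, curr, brick=16, tol=1e-3):
    """Return indices of bricks whose content changed beyond `tol`
    (mean absolute difference): only these need to be re-uploaded
    as 3D textures for the next time frame."""
    nz, ny, nx = (s // brick for s in prev.shape)
    dirty = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sl = (slice(k * brick, (k + 1) * brick),
                      slice(j * brick, (j + 1) * brick),
                      slice(i * brick, (i + 1) * brick))
                if np.mean(np.abs(prev[sl] - curr[sl])) > tol:
                    dirty.append((k, j, i))
    return dirty
```

Skipping the unchanged bricks is what turns per-frame volume loading from a full re-upload into an incremental update.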

  14. 4-D Cardiac MR Image Analysis: Left and Right Ventricular Morphology and Function

    PubMed Central

    Wahle, Andreas; Johnson, Ryan K.; Scholz, Thomas D.; Sonka, Milan

    2010-01-01

    In this study, a combination of the active shape model (ASM) and active appearance model (AAM) was used to segment the left and right ventricles of normal and Tetralogy of Fallot (TOF) hearts in 4-D (3-D+time) MR images. For each ventricle, a 4-D model was first used to achieve robust preliminary segmentation of all cardiac phases simultaneously, and a 3-D model was then applied to each phase to improve local accuracy while maintaining the overall robustness of the 4-D segmentation. On 25 normal and 25 TOF hearts, in comparison with the expert-traced independent standard, our comprehensive performance assessment showed subvoxel segmentation accuracy, high overlap ratios, good ventricular volume correlations, and small percent volume differences. Following 4-D segmentation, novel quantitative shape and motion features were extracted from shape information, volume-time and dV/dt curves, then analyzed and used for disease-status classification. Automated discrimination between normal and TOF subjects achieved 90%–100% sensitivity and specificity. The features obtained from TOF hearts show higher variability than those from normal subjects, suggesting their potential use as disease-progression indicators. The abnormal shape and motion variations of the TOF hearts were accurately captured by both the segmentation and the feature characterization. PMID:19709962

  15. Analysis of free breathing motion using artifact reduced 4D CT image data

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Werner, Rene; Frenzel, Thorsten; Lu, Wei; Low, Daniel; Handels, Heinz

    2007-03-01

    The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning. Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the breathing cycle. We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner organs was analyzed. A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory motion are marked. To describe the tumor mobility the motion of the lung tumor center in three orthogonal directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of the internal tumor motion. The results of the motion analysis indicate that the described methods are suitable to gain insight into the spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
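The trajectory analyses described above (maximum surface-point displacement, skin-tumor trajectory correlation) reduce to simple operations on per-point trajectories; a hypothetical sketch with assumed array layouts:

```python
import numpy as np

def max_displacement(trajectories):
    """trajectories: array (n_points, n_frames, 3). Largest excursion of
    any surface point from its first-frame position."""
    disp = np.linalg.norm(trajectories - trajectories[:, :1, :], axis=-1)
    return disp.max()

def skin_tumor_correlation(skin_traj, tumor_traj, axis=2):
    """Pearson correlation between one skin point's motion and the tumor
    centre's motion along a single axis (default: cranio-caudal)."""
    return np.corrcoef(skin_traj[:, axis], tumor_traj[:, axis])[0, 1]
```

Skin regions whose correlation with the tumor trajectory is high are the ones suitable for predicting internal tumor motion, as the abstract concludes.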

  16. 4D scanning transmission ultrafast electron microscopy: Single-particle imaging and spectroscopy.

    PubMed

    Ortalan, Volkan; Zewail, Ahmed H

    2011-07-20

    We report the development of 4D scanning transmission ultrafast electron microscopy (ST-UEM). The method was demonstrated in the imaging of silver nanowires and gold nanoparticles. For the wire, the mechanical motion and shape morphological dynamics were imaged, and from the images we obtained the resonance frequency and the dephasing time of the motion. Moreover, we demonstrate here the simultaneous acquisition of dark-field images and electron energy loss spectra from a single gold nanoparticle, which is not possible with conventional methods. The local probing capabilities of ST-UEM open new avenues for probing dynamic processes, from single isolated to embedded nanostructures, without being affected by the heterogeneous processes of ensemble-averaged dynamics. Such methodology promises to have wide-ranging applications in materials science and in single-particle biological imaging. PMID:21615171

  17. 3D and 4D Seismic Imaging in the Oilfield; the state of the art

    NASA Astrophysics Data System (ADS)

    Strudley, A.

    2005-05-01

    Seismic imaging in the oilfield context has seen enormous changes over the last 20 years driven by a combination of improved subsurface illumination (2D to 3D), increased computational power and improved physical understanding. Today Kirchhoff pre-stack migration (in time or depth) is the norm, with anisotropic parameterisation and finite difference methods being increasingly employed. In the production context Time-Lapse (4D) Seismic is of growing importance as a tool for monitoring reservoir changes to facilitate increased productivity and recovery. In this paper we present an overview of state of the art technology in 3D and 4D seismic and look at future trends. Pre-stack Kirchhoff migration in time or depth is the imaging tool of choice for the majority of contemporary 3D datasets. Recent developments in 3D pre-stack imaging have been focussed around finite difference solutions to the acoustic wave equation, the so-called Wave Equation Migration methods (WEM). Application of finite difference solutions to imaging is certainly not new; however, 3D pre-stack migration using these schemes is a relatively recent development driven by the need for imaging complex geologic structures such as sub salt, and facilitated by increased computational resources. Finally there is a class of imaging methods referred to as beam migration. These methods may be based on either the wave equation or rays, but all operate on a localised (in space and direction) part of the wavefield. These methods offer a bridge between the computational efficiency of Kirchhoff schemes and the improved image quality of WEM methods. Just as 3D seismic has had a radical impact on the quality of the static model of the reservoir, 4D seismic is having a dramatic impact on the dynamic model. Repeat shooting of seismic surveys after a period of production (typically one to several years) reveals changes in pressure and saturation through changes in the seismic response. The growth in interest in 4D seismic

  18. Automated Lung Segmentation and Image Quality Assessment for Clinical 3-D/4-D-Computed Tomography

    PubMed Central

    Li, Guang

    2014-01-01

    4-D-computed tomography (4DCT) provides not only a new dimension of patient-specific information for radiation therapy planning and treatment, but also a challenging scale of data volume to process and analyze. Manual analysis using existing 3-D tools cannot keep up with the vastly increased 4-D data volume; automated processing and analysis are thus needed to handle 4DCT data effectively and efficiently. In this paper, we applied ideas and algorithms from image/signal processing, computer vision, and machine learning to 4DCT lung data so that lungs can be reliably segmented in a fully automated manner, lung features can be visualized and measured on the fly via user interactions, and data quality classifications can be computed in a robust manner. Comparisons of our results with an established treatment planning system and calculations by experts demonstrated negligible discrepancies (within ±2%) for volume assessment, along with a one- to two-order-of-magnitude performance enhancement. An empirical Fourier-analysis-based quality measure delivered performance closely emulating that of human experts. Three machine learners were examined to demonstrate the viability of machine learning techniques for robustly identifying the data quality of 4DCT images in a scalable manner. The resultant system provides a toolkit that speeds up 4-D tasks in the clinic and facilitates clinical research to improve current clinical practice. PMID:25621194
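The abstract describes the Fourier-based quality measure only at a high level. As an illustration, a simple spectral-concentration score for a breathing trace can be sketched as follows; the score definition, band width, and sampling rate below are assumptions for illustration, not the paper's actual metric:

```python
import numpy as np

def periodicity_score(trace):
    """Fraction of non-DC spectral power near the dominant breathing
    frequency and its first harmonic. A regular (periodic) trace scores
    close to 1; an irregular trace scores much lower."""
    x = np.asarray(trace, float)
    x = x - x.mean()
    p = np.abs(np.fft.rfft(x)) ** 2
    p[0] = 0.0
    k = int(np.argmax(p))
    band = p[max(k - 1, 1):k + 2].sum()           # dominant peak +/- 1 bin
    band += p[max(2 * k - 1, 1):2 * k + 2].sum()  # first harmonic +/- 1 bin
    return float(min(band / p.sum(), 1.0))

fs = 8.0                                 # assumed sampling rate [Hz]
t = np.arange(0, 32, 1 / fs)             # 32 s trace, 256 samples
regular = np.sin(2 * np.pi * 0.25 * t)   # steady 0.25 Hz breathing
rng = np.random.default_rng(0)
irregular = rng.standard_normal(t.size)  # erratic trace
```

A high score would map to a "good" quality class; the paper instead feeds such features to machine learners for robust classification.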

  19. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
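The three-step pipeline can be sketched in simplified form. This is a minimal stand-in, not the paper's implementation: a local-statistics Wiener filter replaces the adaptive Wiener step, a single-level low/high band split with a max-abs rule replaces the shift-invariant wavelet fusion, and a percentile contrast stretch replaces adaptive histogram equalization; image dimensions are assumed even:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_wiener(img, win=3):
    """Step (i), simplified: local-statistics Wiener filter; the noise
    variance is estimated as the mean local variance (an assumption)."""
    pad = win // 2
    w = sliding_window_view(np.pad(img, pad, mode='reflect'), (win, win))
    mu, var = w.mean(axis=(-2, -1)), w.var(axis=(-2, -1))
    noise = var.mean()
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)

def fuse(a, b):
    """Step (ii), simplified: average the low-pass bands, keep the
    per-pixel stronger high-pass detail (max-abs rule)."""
    def split(x):
        lo = x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
        lo = np.repeat(np.repeat(lo, 2, axis=0), 2, axis=1)
        return lo, x - lo
    la, ha = split(a)
    lb, hb = split(b)
    return (la + lb) / 2 + np.where(np.abs(ha) >= np.abs(hb), ha, hb)

def stretch(x, lo_p=1, hi_p=99):
    """Step (iii), simplified: percentile contrast stretch to [0, 1]."""
    lo, hi = np.percentile(x, [lo_p, hi_p])
    return np.clip((x - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def fusion_pipeline(ac, dpc, dfc):
    # Denoise each channel, fuse AC with DPC, then fuse with DFC, enhance.
    ac, dpc, dfc = map(adaptive_wiener, (ac, dpc, dfc))
    return stretch(fuse(fuse(ac, dpc), dfc))
```

Note that fusing an image with itself returns the image unchanged, which is a quick sanity check on the max-abs fusion rule.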

  20. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry.

    PubMed

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-21

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications. PMID:24584079

  1. Application of adaptive kinetic modelling for bias propagation reduction in direct 4D image reconstruction

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Matthews, J. C.; Reader, A. J.; Angelis, G. I.; Zaidi, H.

    2014-10-01

    Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to limited counting statistics, leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and, in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary, more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals is then adaptively included back into the image, whilst preserving the primary model characteristics in other well modelled regions, using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [15O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters.
Using the adaptive 4D image reconstruction improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating

  2. Application of adaptive kinetic modelling for bias propagation reduction in direct 4D image reconstruction.

    PubMed

    Kotasidis, F A; Matthews, J C; Reader, A J; Angelis, G I; Zaidi, H

    2014-10-21

    Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to limited counting statistics, leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and, in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary, more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals is then adaptively included back into the image, whilst preserving the primary model characteristics in other well modelled regions, using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [15O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters.
Using the adaptive 4D image reconstruction improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating

  3. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    PubMed

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering, which relies on spectral-domain-only information; its main drawback is high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation and allows us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523
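The pipeline above — spatial-domain preprocessing followed by k-means on the spectra — can be sketched on a synthetic hyperspectral cube. A 3x3 per-band mean filter stands in for the paper's edge-preserving denoising (the real EPD is more sophisticated), and the two-region cube and cluster count are assumptions for illustration:

```python
import numpy as np

def spatial_smooth(cube):
    """3x3 spatial mean per spectral channel -- a simple stand-in for the
    paper's edge-preserving denoising (EPD) step."""
    out = sum(np.roll(np.roll(cube, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out / 9.0

def kmeans(X, k, iters=30, seed=0):
    """Plain k-means on spectra (rows of X), farthest-point initialised."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    for _ in range(1, k):  # pick each next centre far from existing ones
        d = np.min(((X[:, None, :] - X[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(d.argmax()))
    C = X[idx].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.stack([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels

def segment(cube, k=2):
    s = spatial_smooth(cube)
    X = s.reshape(-1, s.shape[-1])  # one spectrum per pixel
    return kmeans(X, k).reshape(s.shape[:2])
```

On a noisy cube with two spectrally distinct regions, the spatial smoothing markedly stabilises the cluster map, which is the qualitative effect the paper reports for EPD.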

  4. Quantitative 4D Transcatheter Intraarterial Perfusion MR Imaging as a Method to Standardize Angiographic Chemoembolization Endpoints

    PubMed Central

    Jin, Brian; Wang, Dingxin; Lewandowski, Robert J.; Ryu, Robert K.; Sato, Kent T.; Larson, Andrew C.; Salem, Riad; Omary, Reed A.

    2011-01-01

    PURPOSE We aimed to test the hypothesis that subjective angiographic endpoints during transarterial chemoembolization (TACE) of hepatocellular carcinoma (HCC) exhibit consistency and correlate with objective intraprocedural reductions in tumor perfusion as determined by quantitative four-dimensional (4D) transcatheter intraarterial perfusion (TRIP) magnetic resonance (MR) imaging. MATERIALS AND METHODS This prospective study was approved by the institutional review board. Eighteen consecutive patients underwent TACE in a combined MR/interventional radiology (MR-IR) suite. Three board-certified interventional radiologists independently graded the angiographic endpoint of each procedure based on a previously described subjective angiographic chemoembolization endpoint (SACE) scale. A consensus SACE rating was established for each patient. Patients underwent quantitative 4D TRIP-MR imaging immediately before and after TACE, from which mean whole tumor perfusion (Fρ) was calculated. Consistency of SACE ratings between observers was evaluated using the intraclass correlation coefficient (ICC). The relationship between SACE ratings and intraprocedural TRIP-MR imaging perfusion changes was evaluated using Spearman's rank correlation coefficient. RESULTS The SACE rating scale demonstrated very good consistency among all observers (ICC = 0.80). The consensus SACE rating was significantly correlated with both absolute (r = 0.54, P = 0.022) and percent (r = 0.85, P < 0.001) intraprocedural perfusion reduction. CONCLUSION The SACE rating scale demonstrates very good consistency between raters, and significantly correlates with objectively measured intraprocedural perfusion reductions during TACE. These results support the use of the SACE scale as a standardized alternative method to quantitative 4D TRIP-MR imaging to classify patients based on embolic endpoints of TACE. PMID:22021520

  5. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a 3D landscape model combining DEM and DOM, based on digital photogrammetry applied to aerial image data to produce the "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic). For buildings and other man-made features of interest to users, we achieve 3D reconstruction of the real features using digital close-range photogrammetry, through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, and image matching. Finally, we combine the three-dimensional background with locally measured real images of these large geographic data and realize the integration of measurable real imagery and the 4D products. The article discusses the overall workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape with the metric building models.

  6. Tracking the motion trajectories of junction structures in 4D CT images of the lung

    NASA Astrophysics Data System (ADS)

    Xiong, Guanglei; Chen, Chuangzhen; Chen, Jianzhou; Xie, Yaoqin; Xing, Lei

    2012-08-01

    Respiratory motion poses a major challenge in lung radiotherapy. Based on 4D CT images, a variety of intensity-based deformable registration techniques have been proposed to study the pulmonary motion. However, the accuracy achievable with these approaches can be sub-optimal because the deformation is defined globally in space. Therefore, the accuracy of the alignment of local structures may be compromised. In this work, we propose a novel method to detect a large collection of natural junction structures in the lung and use them as reliable markers to track lung motion. Specifically, detection of the junction centers and sizes is achieved by analysis of local shape profiles on one segmented image. To track the temporal trajectory of a junction, the image intensities within a small region of interest surrounding the center are selected as its signature. Under the assumption of cyclic motion, we describe the trajectory by a closed B-spline curve and search for the control points by maximizing a metric of combined correlation coefficients. Trapping in local extrema is suppressed by improving the initial conditions using random walks from pair-wise optimizations. Several descriptors are introduced to analyze the motion trajectories. Our method was applied to 13 real 4D CT images. More than 700 junctions in each case are detected with an average positive predictive value of greater than 90%. The average tracking error between automated and manual tracking is sub-voxel and smaller than the published results using the same set of data.
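The tracking objective — a metric of combined correlation coefficients over the junction's intensity signature — can be sketched as follows. The signature extraction and the "mean pairwise Pearson correlation" form of the combined metric are illustrative assumptions; the paper optimises such a metric over closed B-spline control points, which is not reproduced here:

```python
import numpy as np

def signature(volume, center, r=2):
    """Intensities in a small cube around the junction centre -- the
    'signature' used to recognise the junction in other phases."""
    z, y, x = center
    return volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1].ravel()

def combined_correlation(signatures):
    """Mean pairwise Pearson correlation of the signatures sampled along a
    candidate trajectory; the tracker searches for the trajectory that
    maximises a metric of this kind."""
    S = np.stack([(s - s.mean()) / (s.std() + 1e-12) for s in signatures])
    C = (S @ S.T) / S.shape[1]           # correlation matrix of signatures
    i, j = np.triu_indices(len(signatures), k=1)
    return float(C[i, j].mean())
```

A trajectory that follows the junction yields near-unity combined correlation; one that drifts into surrounding tissue scores much lower, which is the signal the optimiser exploits.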

  7. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research

    PubMed Central

    Paul Segars, W.; Tsui, Benjamin M. W.

    2012-01-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal “hybrid” models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. Like stylized models, they can easily be

  8. Uniform distribution of projection data for improved reconstruction quality of 4D EPR imaging

    PubMed Central

    Ahmad, Rizwan; Vikram, Deepti S.; Clymer, Bradley; Potter, Lee C.; Deng, Yuanmu; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan

    2008-01-01

    In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), achieving high reconstruction quality within a limited acquisition time is a high priority. It has been shown for 3D EPRI that a uniform distribution of the projection data generally enhances reconstruction quality. In this work, we suggest two data acquisition techniques for which the gradient orientations are more evenly distributed over the 4D acquisition space than in existing methods. The first sampling technique is based on equal solid angle partitioning of 4D space, while the second technique is based on Fekete points estimation in 4D to generate a more uniform distribution of data. After acquisition, filtered backprojection (FBP) is applied to carry out the reconstruction in a single stage. The single-stage reconstruction improves the spatial resolution by eliminating the necessity of data interpolation in multi-stage reconstructions. For the proposed data distributions, the simulations and experimental results indicate a higher fidelity to the true object configuration. Using the uniform distribution, we expect about a 50% reduction in the acquisition time compared with the traditional method of equal linear angle acquisition. PMID:17562375
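The goal of both sampling schemes is an even spread of gradient orientations (unit vectors) over the 4D acquisition space. A generic way to approximate such a spread — inverse-square mutual repulsion on the unit hypersphere — is sketched below; this is a stand-in illustrating the objective, not the paper's equal-solid-angle partitioning or Fekete-point estimation, and the step size and iteration count are arbitrary assumptions:

```python
import numpy as np

def min_pairwise_distance(P):
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return d[~np.eye(len(P), dtype=bool)].min()

def repel_on_hypersphere(n, dim=4, iters=300, step=0.01, seed=0):
    """Spread n unit vectors over S^(dim-1) by inverse-square mutual
    repulsion, renormalising after every step."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((n, dim))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for _ in range(iters):
        diff = P[:, None, :] - P[None, :, :]
        d2 = (diff ** 2).sum(-1) + np.eye(n)      # avoid self-division
        P = P + step * (diff / d2[..., None] ** 1.5).sum(axis=1)
        P /= np.linalg.norm(P, axis=1, keepdims=True)
    return P
```

The minimum pairwise distance of the repelled set is substantially larger than that of a random draw, which is the uniformity property that improves FBP reconstruction quality.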

  9. Directional denoising and line enhancement for device segmentation in real time fluoroscopic imaging

    NASA Astrophysics Data System (ADS)

    Wagner, Martin; Royalty, Kevin; Oberstar, Erick; Strother, Charles; Mistretta, Charles

    2015-03-01

    Purpose: The purpose of this work is to improve the segmentation of interventional devices (e.g. guidewires) in fluoroscopic images. This is required for real time 3D reconstruction from two angiographic views, where noise can cause severe reconstruction artifacts and incomplete reconstruction. The proposed method reduces the noise while enhancing the thin line structures of the device in images with subtracted background. Methods: A two-step approach is presented here. The first step estimates, for each pixel and a given number of directions, a measure for the probability that the point is part of a line segment in the corresponding direction. This can be done efficiently using binary masks. In the second step, a directional filter kernel is applied for pixels that are assumed to be part of a line. For all other pixels a mean filter is used. Results: The proposed algorithm was able to achieve an average contrast to noise ratio (CNR) of 6.3, compared to 5.8 for the bilateral filter. For device segmentation using global thresholding, the number of missing or wrong pixels is reduced to 25% compared to 40% using the bilateral approach. Conclusion: The proposed algorithm is a simple and efficient approach, which can easily be parallelized for use on modern graphics processing units. It improves the segmentation results of the device compared to other denoising methods, and therefore reduces artifacts and increases the quality of the reconstruction without notably increasing the delay in real time applications.
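The two-step scheme can be sketched with 3-tap binary line masks over four directions. The mask set, the use of the along-line mean as both line score and directional filter, and the threshold are simplifying assumptions; the paper's probability measure and kernels may differ:

```python
import numpy as np

def shift(img, dy, dx):
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# 3-tap line masks for four directions (0, 45, 90, 135 degrees).
LINE_OFFSETS = [
    [(0, -1), (0, 0), (0, 1)],    # horizontal
    [(-1, 1), (0, 0), (1, -1)],   # 45 degrees
    [(-1, 0), (0, 0), (1, 0)],    # vertical
    [(-1, -1), (0, 0), (1, 1)],   # 135 degrees
]

def directional_denoise(img, thresh=0.5):
    """Step 1: per-direction line score = mean intensity along the mask.
    Step 2: where the best score is high, average along that line to
    preserve the device; elsewhere apply a 3x3 mean filter."""
    scores = np.stack([np.mean([shift(img, dy, dx) for dy, dx in offs], axis=0)
                       for offs in LINE_OFFSETS])
    along_line = scores.max(axis=0)   # mean along the strongest direction
    mean3 = sum(shift(img, dy, dx) for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)) / 9.0
    return np.where(along_line > thresh, along_line, mean3)
```

On a bright guidewire-like line in a dark background, the line is averaged along its own direction (full contrast preserved) while off-line pixels receive the isotropic mean filter.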

  10. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    PubMed Central

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently, several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to more accurately correlate to stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest. PMID:25485413
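The building block behind the rDA is the denoising autoencoder: corrupt the input, then train the network to reconstruct the clean input. The sketch below is a plain tied-weight, masking-noise autoencoder trained by full-batch gradient descent, shown only to illustrate that mechanism; the paper's randomized ensemble and the regression on training labels are not reproduced, and all sizes and rates are arbitrary assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, hidden=8, drop=0.3, lr=0.5, epochs=300, seed=0):
    """Tied-weight denoising autoencoder: corrupt inputs by masking noise,
    reconstruct the clean inputs, train by full-batch gradient descent.
    Returns the learned parameters and the per-epoch loss curve."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    losses = []
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > drop)   # masking corruption
        H = sigmoid(Xn @ W + b)                 # encode
        R = sigmoid(H @ W.T + c)                # decode (tied weights)
        E = R - X                               # reconstruct the CLEAN input
        losses.append(float((E ** 2).mean()))
        dR = E * R * (1 - R)                    # output-layer delta
        dH = (dR @ W) * H * (1 - H)             # hidden-layer delta
        W -= lr * (Xn.T @ dH + dR.T @ H) / n    # tied-weight gradient
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b, c, losses
```

On data drawn from a couple of prototypes plus noise, the reconstruction loss drops steadily, showing the network learning structure from corrupted inputs.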

  11. [Possibilities of 4D ultrasonography in imaging of the pelvic floor structures].

    PubMed

    Dlouhá, K; Krofta, L

    2011-12-01

    The technological boom of the last decades has brought urogynaecologists and other specialists new possibilities in imaging of the pelvic floor structures, which may substantially aid the search for the etiology of pelvic floor dysfunction. Magnetic resonance imaging (MRI) is an expensive, less accessible method and may cause certain discomfort to the patient. 3D/4D ultrasonography overcomes these disadvantages and brings new possibilities, especially in dynamic, real time imaging, and consequently enables a focus on the functional anatomy of the complex of muscles and fascial structures of the pelvic floor. With 3D/4D ultrasound we can visualise the urethra and surrounding structures, the levator ani and the urogenital hiatus, and its changes during muscle contraction and the Valsalva maneuver. This method has great potential in the diagnostics of pelvic organ prolapse; it may bring new knowledge of factors contributing to loss of integrity of pelvic floor structures resulting in prolapse and incontinence. Studies exist which describe changes in the urogenital hiatus after vaginal delivery; however, further studies of large numbers of patients over longer periods of time are necessary before conclusions can be drawn for clinical practice. PMID:22312840

  12. 4D in vivo imaging of subpleural lung parenchyma by swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Meissner, S.; Tabuchi, A.; Mertens, M.; Homann, H.; Walther, J.; Kuebler, W. M.; Koch, E.

    2009-07-01

    In this feasibility study we present a method for 4D imaging of healthy and injured subpleural lung tissue in a mouse model. We used triggered swept source optical coherence tomography with an A-scan frequency of 20 kHz to image murine subpleural alveoli during the ventilation cycle. The data acquisition was gated to the pulmonary airway pressure to take one B-scan in each ventilation cycle for different pressure levels. The acquired B-scans were combined offline into one C-scan for each pressure level. Due to the high acquisition rate of the optical coherence tomography system used, we are also able to perform OCT Doppler imaging of the alveolar arterioles. We demonstrated that OCT is a useful tool to investigate the alveolar dynamics in spatial dimensions and to analyze the alveolar blood flow by using Doppler OCT.

  13. A novel method for image denoising of fluorescence molecular imaging based on fuzzy C-Means clustering

    NASA Astrophysics Data System (ADS)

    An, Yu; Liu, Jie; Ye, Jinzuo; Mao, Yamin; Yang, Xin; Jiang, Shixin; Chi, Chongwei; Tian, Jie

    2015-03-01

    As an important molecular imaging modality, fluorescence molecular imaging (FMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with a fluorophore, FMI can noninvasively obtain the distribution of the fluorophore in vivo. However, because the fluorescence spectrum lies in the visible light range, there is substantial autofluorescence at the surface of biological tissues, which is a major disturbing factor in FMI. Meanwhile, the high dark current of charge-coupled device (CCD) cameras and other influencing factors can also produce considerable background noise. In this paper, a novel method for image denoising of FMI based on fuzzy C-means clustering (FCM) is proposed, because the fluorescent signal is the major component of the fluorescence images, and the intensity of autofluorescence and other background signals is relatively lower than that of the fluorescence signal. First, the fluorescence image is smoothed by sliding-neighborhood operations to initially eliminate the noise. Then, the wavelet transform (WLT) is performed on the fluorescence images to obtain the major component of the fluorescent signals. After that, the FCM method is adopted to separate the major component and the background of the fluorescence images. Finally, the proposed method was validated using original data obtained from an in vivo implanted-fluorophore experiment, and the results show that our proposed method can effectively extract the fluorescence signal while eliminating the background noise, which could increase the quality of fluorescence images.
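The core separation step — fuzzy C-means on pixel intensities, exploiting the fact that the fluorescence signal is brighter than the autofluorescence background — can be sketched as follows. The smoothing and wavelet stages of the paper's pipeline are omitted, and the fuzzifier m and iteration count are standard default assumptions:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-means on a 1-D feature (pixel intensity). Returns the
    membership matrix U (n x c) and the cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((x.size, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = U ** m                                   # fuzzified memberships
        centres = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

def separate_fluorescence(img, **kw):
    """Label each pixel as signal (the brighter cluster) or background."""
    U, centres = fuzzy_cmeans(img.ravel(), **kw)
    signal = int(np.argmax(centres))
    return (U.argmax(axis=1) == signal).reshape(img.shape)
```

On a synthetic image with a bright implanted-fluorophore region over a dim autofluorescence background, the membership threshold recovers the signal region almost exactly.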

  14. Image-domain motion compensated time resolved 4D cardiac CT

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Sun, Zhihui; Segars, W. Paul; Fishman, Elliot K.; Tsui, Benjamin M. W.

    2007-03-01

    Two major problems with the current electrocardiogram-gated cardiac computed tomography (CT) imaging technique are a large patient radiation dose (10-15 mSv) and insufficient temporal resolution (83-165 ms). Our long-term goal is to develop new time resolved and low dose cardiac CT imaging techniques that consist of image reconstruction algorithms and estimation methods for the time-dependent motion vector field (MVF) of the heart from the acquired CT data. Toward this goal, we developed a method that estimates the 2D components of the MVF from a sequence of cardiac CT images and used it to "reconstruct" cardiac images at rapidly moving phases. First, two sharp image frames per heart beat (cycle) obtained at slow motion phases (i.e., mid-diastole and end-systole) were chosen. Nodes were coarsely placed among the images, and the temporal motion of each node was modeled by B-splines. Our cost function consisted of three terms: a mean-squared-error block-matching term and smoothness constraints in space and time. The time-dependent MVF was estimated by minimizing the cost function. We then warped the images at slow motion phases using the estimated vector fields to "reconstruct" images at rapidly moving phases. The warping algorithm was evaluated using true time-dependent motion vector fields and images, both provided by the NCAT phantom program. Preliminary results from ongoing quantitative and qualitative evaluation using the 4D NCAT phantom and patient data are encouraging. Major motion artifact is much reduced. We conclude that the new image-based motion estimation technique is an important step toward the development of the new cardiac CT imaging techniques.
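The cost function structure — a block-matching data term plus smoothness penalties on the motion vector field — can be sketched in 2D. This is a minimal illustration: nearest-neighbour warping stands in for the paper's warping algorithm, the temporal smoothness term over B-spline trajectories is omitted, and the penalty weight is an arbitrary assumption:

```python
import numpy as np

def warp(img, mvf):
    """Nearest-neighbour backward warp by a motion vector field mvf with
    components (dy, dx) at every pixel."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    ys = np.clip(np.rint(yy + mvf[..., 0]).astype(int), 0, H - 1)
    xs = np.clip(np.rint(xx + mvf[..., 1]).astype(int), 0, W - 1)
    return img[ys, xs]

def cost(mvf, ref, target, lam_space=0.1):
    """Data term: MSE between the warped reference and the target frame.
    Regulariser: squared differences of neighbouring motion vectors
    (spatial smoothness); the temporal term is omitted for brevity."""
    data = np.mean((warp(ref, mvf) - target) ** 2)
    smooth = (np.mean(np.diff(mvf, axis=0) ** 2)
              + np.mean(np.diff(mvf, axis=1) ** 2))
    return data + lam_space * smooth
```

For a frame pair related by a known one-pixel shift, the true (constant) motion field scores lower than the zero field, which is the property the minimisation exploits.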

  15. Using 4D Cardiovascular Magnetic Resonance Imaging to Validate Computational Fluid Dynamics: A Case Study

    PubMed Central

    Biglino, Giovanni; Cosentino, Daria; Steeden, Jennifer A.; De Nova, Lorenzo; Castelli, Matteo; Ntsinjana, Hopewell; Pennati, Giancarlo; Taylor, Andrew M.; Schievano, Silvia

    2015-01-01

    Computational fluid dynamics (CFD) can have a complementary predictive role alongside the exquisite visualization capabilities of 4D cardiovascular magnetic resonance (CMR) imaging. In order to exploit these capabilities (e.g., for decision-making), it is necessary to validate computational models against real world data. In this study, we sought to acquire 4D CMR flow data in a controllable, experimental setup and use these data to validate a corresponding computational model. We applied this paradigm to a case of congenital heart disease, namely, transposition of the great arteries (TGA) repaired with arterial switch operation. For this purpose, a mock circulatory loop compatible with the CMR environment was constructed and two detailed aortic 3D models (i.e., one TGA case and one normal aortic anatomy) were tested under realistic hemodynamic conditions, acquiring 4D CMR flow. The same 3D domains were used for multi-scale CFD simulations, whereby the remainder of the mock circulatory system was appropriately summarized with a lumped parameter network. Boundary conditions of the simulations mirrored those measured in vitro. Results showed a very good quantitative agreement between experimental and computational models in terms of pressure (overall maximum % error = 4.4% aortic pressure in the control anatomy) and flow distribution data (overall maximum % error = 3.6% at the subclavian artery outlet of the TGA model). Very good qualitative agreement could also be appreciated in terms of streamlines, throughout the cardiac cycle. Additionally, velocity vectors in the ascending aorta revealed less symmetrical flow in the TGA model, which also exhibited higher wall shear stress in the anterior ascending aorta. PMID:26697416

  16. Using 4D Cardiovascular Magnetic Resonance Imaging to Validate Computational Fluid Dynamics: A Case Study.

    PubMed

    Biglino, Giovanni; Cosentino, Daria; Steeden, Jennifer A; De Nova, Lorenzo; Castelli, Matteo; Ntsinjana, Hopewell; Pennati, Giancarlo; Taylor, Andrew M; Schievano, Silvia

    2015-01-01

  17. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: A digital phantom study

    SciTech Connect

    Bernatowicz, K.; Knopf, A.; Lomax, A.; Keall, P.; Kipritidis, J.; Mishra, P.

    2015-01-15

    Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi-real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results

  18. Evaluation of Non-Local Means Based Denoising Filters for Diffusion Kurtosis Imaging Using a New Phantom

    PubMed Central

    Zhou, Min-Xiong; Yan, Xu; Xie, Hai-Bin; Zheng, Hui; Xu, Dongrong; Yang, Guang

    2015-01-01

    Image denoising has a profound impact on the precision of estimated parameters in diffusion kurtosis imaging (DKI). This work first proposes an approach to constructing a DKI phantom that can be used to evaluate the performance of denoising algorithms in regard to their abilities of improving the reliability of DKI parameter estimation. The phantom was constructed from a real DKI dataset of a human brain, and the pipeline used to construct the phantom consists of diffusion-weighted (DW) image filtering, diffusion and kurtosis tensor regularization, and DW image reconstruction. The phantom preserves the image structure while minimizing image noise, and thus can be used as ground truth in the evaluation. Second, we used the phantom to evaluate three representative algorithms of non-local means (NLM). Results showed that one scheme of vector-based NLM, which uses DWI data with redundant information acquired at different b-values, produced the most reliable estimation of DKI parameters in terms of Mean Square Error (MSE), Bias and standard deviation (Std). The result of the comparison based on the phantom was consistent with those based on real datasets. PMID:25643162
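    The MSE/Bias/Std comparison of estimated parameters against the phantom ground truth can be sketched as follows; a minimal illustration (the function name and toy data are hypothetical, not from the paper), assuming one estimated parameter map per noise realization:

    ```python
    import numpy as np

    def score_against_phantom(estimates, ground_truth):
        """Aggregate MSE, bias, and standard deviation of repeated parameter
        estimates relative to a noise-free phantom parameter map."""
        err = estimates - ground_truth[None, ...]
        mse = float(np.mean(err ** 2))
        bias = float(np.mean(err))
        std = float(np.mean(np.std(estimates, axis=0)))
        return mse, bias, std

    # Toy usage: ten noisy "estimates" of a constant kurtosis map.
    rng = np.random.default_rng(0)
    truth = np.full((8, 8), 1.0)
    noisy = truth[None] + 0.1 * rng.standard_normal((10, 8, 8))
    mse, bias, std = score_against_phantom(noisy, truth)
    ```

    A denoising scheme that lowers MSE and Std without increasing Bias would, in this scoring, be preferred.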

  19. Segmentation of 4D cardiac images: investigation on statistical shape models.

    PubMed

    Renno, Markus S; Shang, Yan; Sweeney, James; Dossel, Olaf

    2006-01-01

    The purpose of this research was two-fold: (1) to investigate the properties of statistical shape models constructed from manually segmented cardiac ventricular chambers to confirm the validity of an automatic 4-dimensional (4D) segmentation model that uses gradient vector flow (GVF) images of the original data and (2) to develop software to further automate the steps necessary in active shape model (ASM) training. These goals were achieved by first constructing ASMs from manually segmented ventricular models by allowing the user to cite entire datasets for processing using a GVF-based landmarking procedure and principal component analysis (PCA) to construct the statistical shape model. The statistical shape model of one dataset was used to regulate the segmentation of another dataset according to its GVF, and these results were then analyzed and found to accurately represent the original cardiac data when compared to the manual segmentation results as the gold standard. PMID:17947007
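    The PCA-based statistical shape model at the core of ASM training can be sketched as follows; a minimal point distribution model over flattened landmark vectors, assuming the shapes are already aligned (function names and toy data are illustrative, not from the authors' software):

    ```python
    import numpy as np

    def build_pdm(shapes, n_modes=1):
        """Mean shape plus principal modes of variation from aligned,
        flattened landmark vectors of shape (n_samples, n_coords)."""
        mean = shapes.mean(axis=0)
        X = shapes - mean
        cov = X.T @ X / (len(shapes) - 1)
        evals, evecs = np.linalg.eigh(cov)
        order = np.argsort(evals)[::-1][:n_modes]
        return mean, evecs[:, order], evals[order]

    def constrain(shape, mean, modes, variances, k=3.0):
        """Project a candidate shape onto the model and clip each mode
        coefficient to +/- k standard deviations, as ASM search does."""
        b = modes.T @ (shape - mean)
        b = np.clip(b, -k * np.sqrt(variances), k * np.sqrt(variances))
        return mean + modes @ b

    # Toy usage: five 2-landmark shapes varying along one known direction.
    base = np.array([0.0, 0.0, 1.0, 1.0])
    d = np.array([1.0, 0.0, 0.0, 0.0])
    shapes = np.stack([base + c * d for c in (-2, -1, 0, 1, 2)])
    mean, modes, var = build_pdm(shapes)
    ```

    During segmentation, each candidate shape proposed by the image search would be passed through `constrain`, keeping results within the range allowed by the point distribution model.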

  20. brainR: Interactive 3 and 4D Images of High Resolution Neuroimage Data

    PubMed Central

    Muschelli, John; Sweeney, Elizabeth; Crainiceanu, Ciprian

    2016-01-01

    We provide software tools for displaying and publishing interactive 3-dimensional (3D) and 4-dimensional (4D) figures to html webpages, with examples of high-resolution brain imaging. Our framework is based on the R statistical software using the rgl package, a 3D graphics library. We build on this package to allow manipulation of figures including rotation and translation, zooming, coloring of brain substructures, adjusting transparency levels, and addition or removal of brain structures. The need for better visualization tools of ultra-high-dimensional data is ever present; we are providing a clean, simple, web-based option. We also provide a package (brainR) for users to readily implement these tools. PMID:27330829

  1. SU-C-9A-06: The Impact of CT Image Used for Attenuation Correction in 4D-PET

    SciTech Connect

    Cui, Y; Bowsher, J; Yan, S; Cai, J; Das, S; Yin, F

    2014-06-01

    Purpose: To evaluate the appropriateness of using a 3D non-gated CT image for attenuation correction (AC) in a 4D-PET (gated PET) imaging protocol used in radiotherapy treatment planning simulation. Methods: The 4D-PET imaging protocol in a Siemens PET/CT simulator (Biograph mCT, Siemens Medical Solutions, Hoffman Estates, IL) was evaluated. A CIRS Dynamic Thorax Phantom (CIRS Inc., Norfolk, VA) with a moving glass sphere (8 mL) in the middle of its thorax portion was used in the experiments. The sphere was filled with ¹⁸F-FDG and underwent longitudinal motion derived from a real patient breathing pattern. The Varian RPM system (Varian Medical Systems, Palo Alto, CA) was used for respiratory gating. Both phase-gating and amplitude-gating methods were tested. The clinical imaging protocol was modified to use three different CT images for AC in 4D-PET reconstruction: first, a single-phase CT image mimicking the actual clinical protocol (single-CT-PET); second, the average intensity projection CT (AveIP-CT) derived from 4D-CT scanning (AveIP-CT-PET); third, 4D-CT images for phase-matched AC (phase-matching-PET). Maximum SUV (SUVmax) and the volume of the moving target (glass sphere) at a threshold of 40% SUVmax were calculated for comparison between 4D-PET images derived with the different AC methods. Results: The SUVmax varied 7.3%±6.9% over the breathing cycle in single-CT-PET, compared to 2.5%±2.8% in AveIP-CT-PET and 1.3%±1.2% in phase-matching-PET. The SUVmax in single-CT-PET differed by up to 15% from those in phase-matching-PET. The target volumes measured from single-CT-PET images also showed variations of up to 10% among different phases of 4D-PET in both phase-gating and amplitude-gating experiments. Conclusion: Attenuation correction using non-gated CT in 4D-PET imaging is not an optimal process for quantitative analysis. Clinical 4D-PET imaging protocols should consider phase-matched 4D-CT images, if available, to achieve better accuracy.
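    The SUVmax and 40%-of-SUVmax target volume used for comparison above can be computed as in this hypothetical sketch (the voxel volume and toy data are made up for illustration):

    ```python
    import numpy as np

    def suvmax_and_volume(suv, voxel_volume_ml, threshold_frac=0.4):
        """Return SUVmax and the volume (mL) of voxels at or above
        threshold_frac * SUVmax."""
        suvmax = float(suv.max())
        n = int(np.count_nonzero(suv >= threshold_frac * suvmax))
        return suvmax, n * voxel_volume_ml

    # Toy usage: a hot 2x2x2 block containing one brighter voxel.
    suv = np.zeros((4, 4, 4))
    suv[1:3, 1:3, 1:3] = 5.0
    suv[2, 2, 2] = 10.0
    suvmax, vol_ml = suvmax_and_volume(suv, voxel_volume_ml=0.5)
    ```

    Comparing these two quantities across the single-CT, AveIP-CT, and phase-matched reconstructions is what yields the percentage variations reported in the abstract.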

  2. A patient specific 4D MRI liver motion model based on sparse imaging and registration

    NASA Astrophysics Data System (ADS)

    Noorda, Y. H.; Bartels, L. W.; van Stralen, Marijn; Pluim, J. P. W.

    2013-03-01

    Introduction: Image-guided minimally invasive procedures are becoming increasingly popular. Currently, High-Intensity Focused Ultrasound (HIFU) treatment of lesions in mobile organs, such as the liver, is in development. A requirement for such treatment is automatic motion tracking, such that the position of the lesion can be followed in real time. We propose a 4D liver motion model, which can be used during planning of this procedure. During treatment, the model can serve as a motion predictor. In a similar fashion, this model could be used for radiotherapy treatment of the liver. Method: The model is built by acquiring 2D dynamic sagittal MRI data at six locations in the liver. By registering these dynamics to a 3D MRI liver image, 2D deformation fields are obtained at every location. The 2D fields are ordered according to the position of the liver at that specific time point, such that liver motion during an average breathing period can be simulated. This way, a sparse deformation field is created over time. This deformation field is finally interpolated over the entire volume, yielding a 4D motion model. Results: The accuracy of the model is evaluated by comparing unseen slices to the slice predicted by the model at that specific location and phase in the breathing cycle. The mean Dice coefficient of the liver regions was 0.90. The mean misalignment of the vessels was 1.9 mm. Conclusion: The model is able to predict patient specific deformations of the liver and can predict regular motion accurately.
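    The Dice coefficient used above to compare predicted and observed liver regions can be sketched as follows; a minimal illustration on toy binary masks (not the authors' evaluation code):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap 2|A ∩ B| / (|A| + |B|) between two binary masks."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Toy usage: two 8-pixel masks overlapping in 4 pixels.
    m1 = np.zeros((4, 4), dtype=bool); m1[:2, :] = True
    m2 = np.zeros((4, 4), dtype=bool); m2[1:3, :] = True
    ```

    A Dice value of 1.0 indicates perfect overlap; the reported mean of 0.90 for the liver regions indicates close but imperfect agreement between predicted and unseen slices.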

  3. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
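    The split Bregman TV inversion described above is too involved for a short example, but the ill-posed dipole deconvolution at its core can be illustrated with a much simpler stand-in, thresholded k-space division (TKD). This sketch is not the patented method; it assumes B0 along the first array axis and arbitrary toy data:

    ```python
    import numpy as np

    def dipole_kernel(shape):
        """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 in k-space,
        with B0 taken along the first array axis."""
        kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                                 indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        with np.errstate(invalid="ignore", divide="ignore"):
            d = 1.0 / 3.0 - kz**2 / k2
        d[0, 0, 0] = 0.0  # the kernel is undefined at DC
        return d

    def tkd_inverse(field, thresh=0.1):
        """Thresholded k-space division: invert the field-to-susceptibility
        convolution only where |D| exceeds `thresh` (a crude stand-in for
        the TV-regularized split Bregman inversion described above)."""
        D = dipole_kernel(field.shape)
        safe = np.where(D == 0.0, 1.0, D)  # avoid division by zero
        Dinv = np.where(np.abs(D) > thresh, 1.0 / safe, 0.0)
        return np.real(np.fft.ifftn(np.fft.fftn(field) * Dinv))

    # Toy usage: simulate a field from a random susceptibility map, invert it.
    rng = np.random.default_rng(1)
    chi = rng.standard_normal((16, 16, 16))
    field = np.real(np.fft.ifftn(np.fft.fftn(chi) * dipole_kernel(chi.shape)))
    chi_rec = tkd_inverse(field)
    ```

    TKD simply zeroes the frequencies near the conical surface where the dipole kernel vanishes; the TV-regularized scheme instead fills that missing cone with a piecewise-smooth prior, which is why it produces far fewer streaking artifacts.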

  4. Segmentation of 4D cardiac computer tomography images using active shape models

    NASA Astrophysics Data System (ADS)

    Leiner, Barba-J.; Olveres, Jimena; Escalante-Ramírez, Boris; Arámbula, Fernando; Vallejo, Enrique

    2012-06-01

    This paper describes a segmentation method for time series of 3D cardiac images based on deformable models. The goal of this work is to extend active shape models (ASM) of three-dimensional objects to the problem of 4D (3D + time) cardiac CT image modeling. The segmentation is achieved by constructing a point distribution model (PDM) that encodes the spatio-temporal variability of a training set, i.e., the principal modes of variation of the temporal shapes are computed using some statistical parameters. An active search is used in the segmentation process where an initial approximation of the spatio-temporal shape is given and the gray level information in the neighborhood of the landmarks is analyzed. The starting shape is able to deform so as to better fit the data, but in the range allowed by the point distribution model. Several time series consisting of eleven 3D images of cardiac CT are employed for the method validation. Results are compared with manual segmentation made by an expert. The proposed application can be used for clinical evaluation of the left ventricle mechanical function. Likewise, the results can be taken as the first step of processing for optic flow estimation algorithms.

  5. Dynamic real-time 4D cardiac MDCT image display using GPU-accelerated volume rendering.

    PubMed

    Zhang, Qi; Eagleson, Roy; Peters, Terry M

    2009-09-01

    Intraoperative cardiac monitoring, accurate preoperative diagnosis, and surgical planning are important components of minimally-invasive cardiac therapy. Retrospective, electrocardiographically (ECG) gated, multidetector computed tomography (MDCT), four-dimensional (3D + time), real-time cardiac image visualization is an important tool for the surgeon in such procedures, particularly if the dynamic volumetric image can be registered to, and fused with, the actual patient anatomy. The addition of stereoscopic imaging provides a more intuitive environment by adding binocular vision and depth cues to structures within the beating heart. In this paper, we describe the design and implementation of a comprehensive stereoscopic 4D cardiac image visualization and manipulation platform, based on the opacity density radiation model, which exploits the power of modern graphics processing units (GPUs) in the rendering pipeline. In addition, we present a new algorithm to synchronize the phases of the dynamic heart to clinical ECG signals, and to calculate and compensate for latencies in the visualization pipeline. A dynamic multiresolution display is implemented to enable the interactive selection and emphasis of a volume of interest (VOI) within the entire contextual cardiac volume and to enhance performance, and a novel color and opacity adjustment algorithm is designed to increase the uniformity of the rendered multiresolution image of the heart. Our system provides a visualization environment superior to noninteractive software-based implementations, but with a rendering speed that is comparable to traditional, but inferior quality, volume rendering approaches based on texture mapping. This retrospective ECG-gated dynamic cardiac display system can provide real-time feedback regarding the suspected pathology, function, and structural defects, as well as anatomical information such as chamber volume and morphology. PMID:19467840

  6. An innovative detector concept for hybrid 4D-PET/MRI imaging

    NASA Astrophysics Data System (ADS)

    Cerello, P.; Pennazio, F.; Bisogni, M. G.; Marino, N.; Marzocca, C.; Peroni, C.; Wheadon, R.; Del Guerra, A.

    2013-02-01

    The importance of a high-quality hybrid imaging, providing morphological and functional information with only one acquisition session, is widely acknowledged by the scientific community. The historical limitations to the quality of PET images are related to the unsatisfactory measurement of the depth of interaction (DOI) in the crystals and of the time of flight (TOF), which cause a parallax error and an unfavorable signal-to-background condition in the image reconstruction process, respectively. The 4DMPET project is developing a high performance PET block-detector featuring 4D image reconstruction capabilities. The detector module is based on a fast scintillating continuous crystal coupled on both sides to arrays of Silicon PhotoMultipliers (SiPM). The SiPMs collect the scintillation light and provide the trigger signal, the time, and the energy released in the crystal at the pixel level. The photon depth of interaction (DOI) is reconstructed by measuring the cluster size asymmetry on the two faces of the crystal, thus obtaining a comparable spatial resolution in the three coordinates and removing the parallax error. The event position along the line of response can be measured with high precision by means of TOF techniques. We discuss the module design concept and the results of the detailed Monte Carlo detector simulation, which inspired the architectural solutions selected for the layout and the front-end. The expected resolution for the 3D spatial coordinates of the interaction point in the crystal (1 mm) and the TOF (about 110 ps) would provide a substantial improvement of the image quality. 4DMPET aims at building a prototype block detector demonstrating that the proposed layout meets the expected performance and is suitable for designing a detector focused on a specific application.

  7. 4D medical image computing and visualization of lung tumor mobility in spatio-temporal CT image data.

    PubMed

    Handels, Heinz; Werner, René; Schmidt, Rainer; Frenzel, Thorsten; Lu, Wei; Low, Daniel; Ehrhardt, Jan

    2007-12-01

    The development of 4D CT imaging has introduced the possibility of measuring breathing motion of tumors and inner organs. Conformal thoracic radiation therapy relies on a quantitative understanding of the position of lungs, lung tumors, and other organs during radiation delivery. Using 4D CT data sets, medical image computing and visualization methods were developed to visualize different aspects of lung and lung tumor mobility during the breathing cycle and to extract quantitative motion parameters. A non-linear registration method was applied to estimate the three-dimensional motion field and to compute 3D point trajectories. Specific visualization techniques were used to display the resulting motion field, the tumor's appearance probabilities during a breathing cycle as well as the volume covered by the moving tumor. Furthermore, trajectories of the tumor center-of-mass and organ specific landmarks were computed for the quantitative analysis of tumor and organ motion. The analysis of 4D data sets of seven patients showed that tumor mobility differs significantly between the patients depending on the individual breathing pattern and tumor location. PMID:17602865
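    The tumor center-of-mass trajectory analysis described above can be sketched as an intensity-weighted centroid computed per respiratory phase; a toy illustration (the array layout and voxel sizes are assumptions, not from the paper):

    ```python
    import numpy as np

    def com_trajectory(phases, voxel_size_mm):
        """Intensity-weighted center of mass per respiratory phase, in mm.
        `phases` is an (n_phases, z, y, x) array of (segmented) tumor volumes."""
        sizes = np.asarray(voxel_size_mm, dtype=float)
        traj = []
        for vol in phases:
            w = vol.astype(float).ravel()
            idx = np.indices(vol.shape).reshape(3, -1)
            traj.append((idx * w).sum(axis=1) / w.sum() * sizes)
        return np.array(traj)

    # Toy usage: a point "tumor" moving two slices in the z (SI) direction.
    p0 = np.zeros((4, 4, 4)); p0[1, 1, 1] = 1.0
    p1 = np.zeros((4, 4, 4)); p1[3, 1, 1] = 1.0
    traj = com_trajectory(np.stack([p0, p1]), voxel_size_mm=(2.5, 1.0, 1.0))
    ```

    The per-phase centroids trace the breathing trajectory; their peak-to-peak extent along each axis gives the kind of patient-specific mobility figure the study compares across tumor locations.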

  8. ADMIRE: a locally adaptive single-image, non-uniformity correction and denoising algorithm: application to uncooled IR camera

    NASA Astrophysics Data System (ADS)

    Tendero, Y.; Gilles, J.

    2012-06-01

    We propose a new way to correct for the non-uniformity (NU) and the noise in uncooled infrared-type images. This method works on static images and needs no registration, no camera motion, and no model for the non-uniformity. The proposed method uses a hybrid scheme including an automatic locally-adaptive contrast adjustment and a state-of-the-art image denoising method. It corrects efficiently for a fully non-linear NU and for noise using only one image. We compared it with total variation on real raw and simulated NU infrared images. The strength of this approach lies in its simplicity and low computational cost. It needs no test-pattern or calibration and produces no "ghost-artefact".

  9. A flexible patch based approach for combined denoising and contrast enhancement of digital X-ray images.

    PubMed

    Irrera, Paolo; Bloch, Isabelle; Delplanque, Maurice

    2016-02-01

    Denoising and contrast enhancement play key roles in optimizing the trade-off between image quality and X-ray dose. However, these tasks present multiple challenges raised by noise level, low visibility of fine anatomical structures, heterogeneous conditions due to different exposure parameters, and patient characteristics. This work proposes a new method to address these challenges. We first introduce a patch-based filter adapted to the properties of the noise corrupting X-ray images. The filtered images are then used as oracles to define nonparametric noise containment maps that, when applied in a multiscale contrast enhancement framework, allow optimizing the trade-off between improvement of the visibility of anatomical structures and noise reduction. A significant number of tests on both phantoms and clinical images have shown that the proposed method is better suited than others for visual inspection for diagnosis, even when compared to an algorithm used to process low dose images in clinical routine. PMID:26716719

  10. A deformable phantom for 4D radiotherapy verification: Design and image registration evaluation

    SciTech Connect

    Serban, Monica; Heath, Emily; Stroian, Gabriela; Collins, D. Louis; Seuntjens, Jan

    2008-03-15

    peak inhale. The SI displacement of the landmarks varied between 94% and 3% of the piston excursion for positions closer and farther away from the piston, respectively. The reproducibility of the phantom deformation was within the image resolution (0.7×0.7×1.25 mm³). Vector average registration accuracy based on point landmarks was found to be 0.5 (0.4 SD) mm. The tumor and lung mean 3D DTA obtained from triangulated surfaces were 0.4 (0.1 SD) mm and 1.0 (0.8 SD) mm, respectively. This phantom is capable of reproducibly emulating the physically realistic lung features and deformations and has a wide range of potential applications, including four-dimensional (4D) imaging, evaluation of deformable registration accuracy, 4D planning and dose delivery.

  11. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    PubMed

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
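    The wavelet-based noise-level estimation step can be illustrated with its classic global variant, Donoho's median-of-HH estimator; the per-pixel, luminance-guided refinement described above is omitted, so this is a simplified sketch rather than the authors' algorithm:

    ```python
    import numpy as np

    def haar_hh(img):
        """Diagonal (HH) subband of a one-level 2D Haar transform."""
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        return (a - b - c + d) / 2.0

    def estimate_sigma(img):
        """Robust noise estimate sigma = median(|HH|) / 0.6745."""
        return float(np.median(np.abs(haar_hh(img))) / 0.6745)

    # Toy usage: the estimator recovers the std of pure Gaussian noise
    # added to a flat depth map.
    rng = np.random.default_rng(2)
    depth = 100.0 + 2.0 * rng.standard_normal((128, 128))
    sigma_hat = estimate_sigma(depth)
    ```

    Because the HH subband suppresses smooth image content, the median of its magnitudes reflects mostly noise; the paper extends this idea by modulating the estimate per pixel according to the luminance-derived signal-to-noise ratio.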

  12. Application of 4D resistivity image profiling to detect DNAPLs plume.

    NASA Astrophysics Data System (ADS)

    Liu, H.; Yang, C.; Tsai, Y.

    2008-12-01

    In July 1993, the soil and groundwater at a factory in Miaoli, Taiwan, were found to be contaminated by dichloroethane, chlorobenzene, and other hazardous solvents, collectively termed dense non-aqueous phase liquids (DNAPLs). The contaminated site was neglected for the following years until May 1998, when the Environmental Protection Agency of Miaoli ordered the company to immediately begin treatment of the contaminated site. The contaminated soil at the former waste-DNAPL dump area was excavated and exposed. In addition, more than 53 wells were drilled around the pool, with a maximum depth of 12 m, where a clayey layer was found. Continuous pumping of the groundwater and monitoring of the residual DNAPL concentration in well-water samples have been carried out at different stages of remediation. However, because the DNAPL is suspected to have been present for a long time, the contaminants may have been diluted, but remnants of a DNAPL plume that are toxic to humans still remain in the soil and may migrate to deeper aquifers. The former contaminated site was investigated using 2D, 3D, and 4D resistivity imaging techniques, with the aim of determining the geometry of the buried contaminants. This paper emphasizes the use of the resistivity image profiling (RIP) method to map the limits of this DNAPL waste disposal site, for which operational records are incomplete. A significant change in resistivity values was detected between known polluted and non-polluted subsurface regions; a high resistivity value implies that the subsurface was contaminated by the DNAPL plume. The results of the survey provide insight into the sensitivity of the RIP method for detecting DNAPL plumes within the shallow subsurface, and help to provide valuable information related to monitoring the possible past migration paths of the DNAPL plume. According to previous studies at this site, remediation by excavation and groundwater pumping had continued for a very long time; therefore this research was used

  13. Quantifying the image quality and dose reduction of respiratory triggered 4D cone-beam computed tomography with patient-measured breathing

    NASA Astrophysics Data System (ADS)

    Cooper, Benjamin J.; O'Brien, Ricky T.; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J.

    2015-12-01

    Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient’s respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations. Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique. A set containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 pairs of RT 4D CBCT and conventional 4D CBCT image sets from realistic simulations of a 4D CBCT system using a Rando phantom and the digital phantom, XCAT. Each of these image sets was compared to a ground truth dataset from which a mean absolute pixel difference (MAPD) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and assumed to be a surrogate for imaging dose. Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT, the average image quality was reduced by 7.6% (Rando study) and 11.1% (XCAT study). However, the average imaging dose reduction was 53% based on needing fewer projections (617 on average) than conventional 4D CBCT (1320 projections). The simulation studies, using a wide range of patient-measured breathing traces, have demonstrated that the RT 4D CBCT method can potentially offer an average imaging dose saving of 53% compared to conventional 4D CBCT with minimal impact on image quality.
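    The mean absolute pixel difference (MAPD) degradation metric used above can be sketched as follows; a minimal illustration on toy arrays (not the study's evaluation code):

    ```python
    import numpy as np

    def mapd(image, ground_truth):
        """Mean absolute pixel difference against the ground truth image."""
        diff = np.asarray(image, dtype=float) - np.asarray(ground_truth, dtype=float)
        return float(np.mean(np.abs(diff)))

    # Toy usage: a "reconstruction" offset from truth by 2 everywhere.
    gt = np.zeros((4, 4))
    err = mapd(gt + 2.0, gt)
    ```

    The relative increase in MAPD of the RT 4D CBCT reconstructions over the conventional ones gives the 7.6% and 11.1% image quality degradation figures, while the ratio of projection counts gives the 53% dose saving.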

  14. SU-E-J-183: Quantifying the Image Quality and Dose Reduction of Respiratory Triggered 4D Cone-Beam Computed Tomography with Patient- Measured Breathing

    SciTech Connect

    Cooper, B; O'Brien, R; Kipritidis, J; Keall, P

    2014-06-01

    Purpose: Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient's respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations instead of synthetic sinusoidal signals used in previous work. Methods: Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique from a database of oversampled Rando phantom CBCT projections. A database containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 RT 4D CBCT and 111 conventional 4D CBCT image datasets from realistic simulations of an RT 4D CBCT system. Each of these image datasets was compared to a ground truth dataset from which a root mean square error (RMSE) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and assumed to be a surrogate for imaging dose. Results: Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT the average image quality was reduced by 7.6%. However, the average imaging dose reduction was 53% based on needing fewer projections (617 on average) than conventional 4D CBCT (1320 projections). Conclusion: The simulation studies using a wide range of patient breathing traces have demonstrated that the RT 4D CBCT method can potentially offer a substantial saving of imaging dose of 53% on average compared to conventional 4D CBCT with a minimal impact on image quality. A patent application (PCT/US2012/048693) has been filed which is related to this work.

  15. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    SciTech Connect

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

  16. 4D cone-beam CT imaging for guidance in radiation therapy: setup verification by use of implanted fiducial markers

    NASA Astrophysics Data System (ADS)

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C. C. M.; Bel, Arjan; Alderliesten, Tanja

    2016-03-01

    The use of 4D cone-beam computed tomography (CBCT) and fiducial markers for guidance during radiation therapy of mobile tumors is challenging due to the trade-off between image quality, imaging dose, and scanning time. We aimed to investigate the visibility of markers and the feasibility of marker-based 4D registration and manual respiration-induced marker motion quantification for different CBCT acquisition settings. A dynamic thorax phantom and a patient with implanted gold markers were included. For both the phantom and patient, the peak-to-peak amplitude of marker motion in the cranial-caudal direction ranged from 5.3 to 14.0 mm, which did not affect the marker visibility or the feasibility of marker-based registration. While using a medium field of view (FOV) and the same total imaging dose as is applied for 3D CBCT scanning in our clinic, improved marker visibility was attained by reducing the imaging dose per projection and increasing the number of projection images. For a small FOV with a shorter rotation arc but similar total imaging dose, streak artifacts were reduced owing to the smaller sampling angle. Additionally, the use of a small FOV allowed the total imaging dose and scanning time (~2.5 min) to be reduced without loss of marker visibility. In conclusion, by using 4D CBCT with identical or lower imaging dose and a reduced gantry speed, it is feasible to attain sufficient marker visibility for marker-based 4D setup verification. Moreover, regardless of the settings, manual marker motion quantification can achieve high accuracy, with errors <1.2 mm.

  17. Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method

    PubMed Central

    Yu, Lifeng; Fletcher, Joel G.; Shiung, Maria; Thomas, Kristen B.; Matsumoto, Jane M.; Zingula, Shannon N.; McCollough, Cynthia H.

    2016-01-01

    OBJECTIVE The objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method. MATERIALS AND METHODS Data were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded. RESULTS The mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration. CONCLUSION Although a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric
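The aNLM algorithm itself is not specified in this record; a generic (non-adaptive) non-local means filter conveys the underlying idea: each pixel becomes a weighted average of pixels whose surrounding patches look similar, which averages noise while preserving edges. A brute-force sketch with hypothetical parameters:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    # Brute-force non-local means: each pixel becomes a weighted
    # average of pixels whose surrounding patches are similar.
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]
            weights, vals = [], []
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    cand = padded[ii:ii + patch, jj:jj + patch]
                    dist2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-dist2 / h**2))
                    vals.append(img[ii, jj])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, vals) / weights.sum()
    return out

# Noisy step image: NLM averages within flat regions but keeps the edge,
# because cross-edge patches receive negligible weight.
rng = np.random.default_rng(1)
clean = np.zeros((12, 12))
clean[:, 6:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy)
```

The "adaptive" part of aNLM (e.g., spatially varying filter strength) is not described here and is omitted from the sketch.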

  18. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    SciTech Connect

    Kida, S; Bal, M; Kabus, S; Loo, B; Keall, P; Yamamoto, T

    2014-06-15

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly-functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4D-CT ventilation imaging have been focused on comparison with other imaging modalities, including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics were evaluated and tested with a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation image-based plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1±9.15 (Gy), which was comparable to 25.2±8.60 (Gy) of the SPECT plans (p = 0.89). For the other critical organs and the PTV, the differences were likewise nonsignificant. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation image
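The ventilation surrogate (regional volume change from deformable registration) is commonly computed from the Jacobian determinant of the displacement field, det(I + grad u) - 1; the study's exact formulation is not given here, so the following NumPy sketch uses that common convention:

```python
import numpy as np

def jacobian_ventilation(ux, uy, uz, spacing=(1.0, 1.0, 1.0)):
    # Local volume change of the displacement field u = (ux, uy, uz):
    # det(I + grad u) - 1, a common surrogate for regional ventilation.
    grads = [np.gradient(u, *spacing) for u in (ux, uy, uz)]
    J = np.zeros(ux.shape + (3, 3))
    for a in range(3):
        for b in range(3):
            J[..., a, b] = grads[a][b]
            if a == b:
                J[..., a, b] += 1.0
    return np.linalg.det(J) - 1.0

# Uniform 10% expansion in every direction -> volume change 1.1^3 - 1.
n = 8
x, y, z = np.meshgrid(*(np.arange(n, dtype=float),) * 3, indexing="ij")
vent = jacobian_ventilation(0.1 * x, 0.1 * y, 0.1 * z)
```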

  19. Enhanced Terahertz Imaging of Small Forced Delamination in Woven Glass Fibre-reinforced Composites with Wavelet De-noising

    NASA Astrophysics Data System (ADS)

    Dong, Junliang; Locquet, Alexandre; Citrin, D. S.

    2016-03-01

    Terahertz (THz) reflection imaging is applied to characterize a woven glass fibre-reinforced composite laminate with a small region of forced delamination. The forced delamination is created by inserting a disk of 25-μm-thick Upilex film, which is below the THz axial resolution, resulting in a single low-amplitude echo in the reflected THz pulses. Low-amplitude components of the temporal signal due to ambient water vapor produce features of amplitude comparable to those of the THz pulse reflected off the interfaces of the delamination, suppressing the contrast of THz C- and B-scans. Wavelet shrinkage de-noising is performed to remove the water-vapor features, leading to enhanced THz C- and B-scans that locate the delamination in three dimensions with high contrast.
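Wavelet shrinkage de-noising follows a transform / soft-threshold / inverse pattern; the Haar wavelet and fixed threshold below are illustrative stand-ins, since the paper's wavelet family and threshold rule are not given in this record:

```python
import numpy as np

def haar_step(x):
    # One analysis level of the orthonormal Haar transform.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inverse_haar_step(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wavelet_shrink(signal, threshold, levels=3):
    # Transform, soft-threshold the detail coefficients, invert.
    a, details = signal.astype(float), []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0))
    for d in reversed(details):
        a = inverse_haar_step(a, d)
    return a

# Small noise coefficients fall below the threshold; the step survives.
rng = np.random.default_rng(2)
clean = np.r_[np.zeros(32), np.ones(32)]
noisy = clean + 0.1 * rng.standard_normal(64)
denoised = wavelet_shrink(noisy, threshold=0.2)
```

Soft thresholding removes the low-amplitude, noise-like coefficients (the water-vapor features in the paper's setting) while preserving the larger coefficients that carry the echoes of interest.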

  20. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm.

    PubMed

    Molaei, Mehdi; Sheng, Jian

    2014-12-29

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Owing to their low scattering efficiency, bacteria are difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  1. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Owing to their low scattering efficiency, bacteria are difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  2. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    NASA Astrophysics Data System (ADS)

    Pierce, Greg; Wang, Kevin; Battista, Jerry; Lee, Ting-Yim

    2012-06-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans could be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
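The NCC criterion used above for phase matching can be sketched directly (illustrative, not the authors' implementation):

```python
import numpy as np

def ncc(a, b):
    # Normalized cross correlation of two equally sized images.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def best_phase_match(reference, candidates):
    # Choose the candidate image whose NCC with the reference is highest.
    scores = [ncc(reference, c) for c in candidates]
    return int(np.argmax(scores)), scores

# The unshifted candidate matches the (noisy) reference best.
rng = np.random.default_rng(3)
base = rng.standard_normal((32, 32))
candidates = [np.roll(base, s, axis=0) for s in (0, 4, 8)]
reference = base + 0.2 * rng.standard_normal((32, 32))
idx, scores = best_phase_match(reference, candidates)
```

Because each NCC match contributes an approximately independent error, chaining n matches propagates the per-match uncertainty in quadrature, i.e. it grows roughly as sqrt(n), consistent with the behavior reported in the abstract.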

  3. SU-D-17A-04: The Impact of Audiovisual Biofeedback On Image Quality During 4D Functional and Anatomic Imaging: Results of a Prospective Clinical Trial

    SciTech Connect

    Keall, P; Pollock, S; Yang, J; Diehn, M; Berger, J; Graves, E; Loo, B; Yamamoto, T

    2014-06-01

    Purpose: The ability of audiovisual (AV) biofeedback to improve breathing regularity has not previously been investigated for functional imaging studies. The purpose of this study was to investigate the impact of AV biofeedback on 4D-PET and 4D-CT image quality in a prospective clinical trial. We hypothesized that motion blurring in 4D-PET images and the number of artifacts in 4D-CT images are reduced using AV biofeedback. Methods: AV biofeedback is a real-time, interactive and personalized system designed to help a patient self-regulate his/her breathing using a patient-specific representative waveform and musical guides. In an IRB-approved prospective clinical trial, 4D-PET and 4D-CT images of 10 lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images in 6 respiratory bins were analyzed for motion blurring by: (1) decrease of GTVPET and (2) increase of SUVmax in 4D-PET compared to 3D-PET. The 4D-CT images were analyzed for artifacts by: (1) comparing normalized cross correlation-based scores (NCCS); and (2) quantifying a visual assessment score (VAS). A two-tailed paired t-test was used to test the hypotheses. Results: The impact of AV biofeedback on 4D-PET and 4D-CT images varied widely between patients, suggesting inconsistent patient comprehension and capability. Overall, the 4D-PET decrease of GTVPET was 2.0±3.0 cm³ with AV and 2.3±3.9 cm³ for FB (p=0.61). The 4D-PET increase of SUVmax was 1.6±1.0 with AV and 1.1±0.8 with FB (p=0.002). The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 for FB (p=0.32). The 4D-CT VAS was 0.0±2.7 (p=ns). Conclusion: A 10-patient study demonstrated a statistically significant reduction of motion blurring of AV over FB for 1 of 2 functional 4D-PET imaging metrics. No difference between AV and FB was found for the 2 anatomic 4D-CT imaging metrics. Future studies will focus on optimizing the human-computer interface and including patient training sessions for improved

  4. Effects of quantum noise in 4D-CT on deformable image registration and derived ventilation data

    NASA Astrophysics Data System (ADS)

    Latifi, Kujtim; Huang, Tzung-Chi; Feygelman, Vladimir; Budzevich, Mikalai M.; Moros, Eduardo G.; Dilling, Thomas J.; Stevens, Craig W.; van Elmpt, Wouter; Dekker, Andre; Zhang, Geoffrey G.

    2013-11-01

    Quantum noise is common in CT images and is a persistent problem in accurate ventilation imaging using 4D-CT and deformable image registration (DIR). This study focuses on the effects of noise in 4D-CT on DIR and thereby derived ventilation data. A total of six sets of 4D-CT data with landmarks delineated in different phases, called point-validated pixel-based breathing thorax models (POPI), were used in this study. The DIR algorithms, including diffeomorphic morphons (DM), diffeomorphic demons (DD), optical flow and B-spline, were used to register the inspiration phase to the expiration phase. The DIR deformation matrices (DIRDM) were used to map the landmarks. Target registration errors (TRE) were calculated as the distance errors between the delineated and the mapped landmarks. Noise of Gaussian distribution with different standard deviations (SD), from 0 to 200 Hounsfield Units (HU) in amplitude, was added to the POPI models to simulate different levels of quantum noise. Ventilation data were calculated using the ΔV algorithm, which calculates the volume change geometrically based on the DIRDM. The ventilation images with different added noise levels were compared using the Dice similarity coefficient (DSC). The root mean square (RMS) values of the landmark TRE over the six POPI models for the four DIR algorithms were stable when the noise level was low (SD <150 HU) and increased with added noise when the level was higher. The most accurate DIR was DD, with a mean RMS of 1.5 ± 0.5 mm with no added noise and 1.8 ± 0.5 mm with noise (SD = 200 HU). The DSC values between the ventilation images with and without added noise decreased with the noise level, even when the noise level was relatively low. The DIR algorithm most robust with respect to noise was DM, with mean DSC = 0.89 ± 0.01 and 0.66 ± 0.02 for the top 50% ventilation volumes, comparing 0 added noise with SD = 30 and 200 HU, respectively. Although the landmark TRE were stable with low noise, the
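The DSC comparison of ventilation maps can be sketched as follows: the top-50% functional volume is thresholded at the median ventilation value, and Dice measures the overlap between two such masks (a generic sketch, not the study's code):

```python
import numpy as np

def dice(mask_a, mask_b):
    # Dice similarity coefficient: 2|A & B| / (|A| + |B|).
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def top_fraction_mask(ventilation, fraction=0.5):
    # Binary mask of the top `fraction` of voxels by ventilation value.
    cutoff = np.quantile(ventilation, 1.0 - fraction)
    return ventilation >= cutoff

# Toy ventilation map: values 0..99 on a 10x10 grid.
vent = np.arange(100, dtype=float).reshape(10, 10)
top_half = top_fraction_mask(vent, 0.5)
```

Identical masks give DSC = 1 and disjoint masks give DSC = 0, which is why the reported drop from 0.89 to 0.66 indicates substantial noise-induced disagreement in the functional volumes.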

  5. On the automated definition of mobile target volumes from 4D-CT images for stereotactic body radiotherapy

    SciTech Connect

    Zhang Tiezhi; Orton, Nigel P.; Tome, Wolfgang A.

    2005-11-15

    Stereotactic body radiotherapy (SBRT) can be used to treat small lesions in the chest. A vacuum-based immobilization system is used in our clinic for SBRT, and a motion envelope is used in treatment planning. The purpose of this study is to automatically derive motion envelopes using deformable image registration of 4D-CT images, and to assess the effect of abdominal pressure on the motion envelopes. 4D-CT scans at ten phases were acquired prior to treatment for both free and restricted breathing using a vacuum-based immobilization system that includes an abdominal pressure pillow. To study the stability of the motion envelope over the course of treatment, a mid-treatment 4D-CT scan was obtained after delivery of the third fraction for two patients. The planning target volume excluding breathing motion (PTVex) was defined on the image set at full exhalation phase and transformed into all other phases using displacement maps from deformable image registration. The motion envelope was obtained as the union of PTVex masks of all phases. The ratios of the motion envelope to PTVex volume ranged from 1.3 to 2.5. When pressure was applied, the ratios were reduced by as much as 29% compared to free breathing for some patients, but increased by up to 9% for others. The abdominal pressure pillow has more motion restriction effects on the anterior/inferior region of the lung. For one of the two patients for whom the 4D-CT scan was repeated at mid-treatment, the motion envelope was reproducible. However, for the other patient the tumor location and lung motion pattern significantly changed due to changes in the anatomy surrounding the tumor during the course of treatment, indicating that an image-guided approach to SBRT may increase the efficacy of this treatment.
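The envelope construction reduces to a union of the transformed target masks over all phases; in this toy NumPy sketch, integer voxel shifts stand in for the deformable displacement maps used in the study:

```python
import numpy as np

def motion_envelope(reference_mask, shifts):
    # Union of the reference-phase target mask transformed into every
    # other respiratory phase; integer voxel shifts stand in for the
    # deformable displacement maps.
    envelope = reference_mask.copy()
    for shift in shifts:
        envelope |= np.roll(reference_mask, shift, axis=(0, 1, 2))
    return envelope

# A 2x2x2 voxel target moving 1-2 voxels along one axis doubles its
# envelope, i.e. an envelope-to-target volume ratio of 2.0.
target = np.zeros((8, 8, 8), dtype=bool)
target[2:4, 2:4, 2:4] = True
envelope = motion_envelope(target, [(1, 0, 0), (2, 0, 0)])
```

The envelope-to-target ratio here falls within the 1.3-2.5 range the study reports for real patients.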

  6. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation–maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation–maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
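The sPatlak step underlying both methods is a linear fit: after an equilibration time t*, the plot of C_T(t)/C_p(t) against (integral of C_p)/C_p(t) is linear with slope Ki and intercept V. A minimal post-reconstruction sketch on synthetic curves (all kinetic values hypothetical):

```python
import numpy as np

def spatlak_fit(t, tissue, plasma, t_star=10.0):
    # Standard Patlak graphical analysis: for t >= t*, the plot of
    # C_T/C_p against (integral of C_p)/C_p is linear with slope Ki
    # and intercept V.
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (plasma[1:] + plasma[:-1]) * np.diff(t))))
    x = integral / plasma
    y = tissue / plasma
    late = t >= t_star
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope, intercept

# Synthetic check with known Ki = 0.05 and V = 0.3 (values hypothetical).
t = np.linspace(0.0, 60.0, 601)
cp = np.exp(-0.05 * t) + 0.2
integral = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
tissue = 0.05 * integral + 0.3 * cp
ki, v = spatlak_fit(t, tissue, cp)
```

The paper's direct 4D approach estimates these parameters inside the reconstruction loop rather than from reconstructed frames; this sketch shows only the post-reconstruction graphical analysis.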

  7. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    PubMed

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were

  8. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  9. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    SciTech Connect

    Kim, D; Kang, S; Kim, T; Suh, T; Kim, S

    2014-06-01

    Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and total-variation minimization, in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired by Monte Carlo simulation and an in-house 4D digital phantom generation program, assuming a cone-beam computed tomography system mounted on a linear accelerator. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than the existing filtered-backprojection method. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (MSIP).
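The SART component can be sketched on a toy linear system; the TV minimization step and the Monte Carlo projection model are omitted, and the matrix and values below are illustrative:

```python
import numpy as np

def sart(A, b, iters=200, relax=1.0):
    # Simultaneous algebraic reconstruction technique: every sweep
    # back-projects the ray residuals, normalized by the row sums and
    # column sums of the (nonnegative) system matrix A.
    A = np.asarray(A, dtype=float)
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        residual = (b - A @ x) / row_sums
        x += relax * (A.T @ residual) / col_sums
    return x

# Toy consistent system: four "rays" through three "voxels".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
recon = sart(A, A @ x_true)
```

In the paper's pipeline, a TV-minimization step would be interleaved with these SART sweeps to suppress the streaking caused by undersampled projections.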

  10. SU-E-J-157: Improving the Quality of T2-Weighted 4D Magnetic Resonance Imaging for Clinical Evaluation

    SciTech Connect

    Du, D; Mutic, S; Hu, Y; Caruthers, S; Glide-Hurst, C; Low, D

    2014-06-01

    Purpose: To develop an imaging technique that enables us to acquire T2-weighted 4D Magnetic Resonance Imaging (4DMRI) with sufficient spatial coverage, temporal resolution and spatial resolution for clinical evaluation. Methods: T2-weighted 4DMRI images were acquired from a healthy volunteer using a respiratory amplitude triggered T2-weighted Turbo Spin Echo sequence. 10 respiratory states were used to equally sample the respiratory range based on amplitude (0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e and 20%e). To avoid frequent scanning halts, a methodology was devised that split the 10 respiratory states into two packages in an interleaved manner; the packages were acquired separately. Sixty 3mm sagittal slices at 1.5mm in-plane spatial resolution were acquired to offer good spatial coverage and reasonable spatial resolution. The in-plane field of view was 375mm × 260mm with a nominal scan time of 3 minutes 42 seconds. Acquired 2D images at the same respiratory state were combined to form the 3D image set corresponding to that respiratory state and reconstructed in the coronal view to evaluate whether all slices were at the same respiratory state. The 3D image sets of the 10 respiratory states represented a complete 4DMRI image set. Results: T2-weighted 4DMRI images were acquired in 10 minutes, which is within the clinically acceptable range. Qualitatively, the acquired MRI images had good image quality for delineation purposes. There were no abrupt position changes in the reconstructed coronal images, which confirmed that all sagittal slices were in the same respiratory state. Conclusion: We demonstrated that it is feasible to acquire a T2-weighted 4DMRI image set within a practical amount of time (10 minutes) with good temporal resolution (10 respiratory states), spatial resolution (1.5mm × 1.5mm × 3.0mm) and spatial coverage (60 slices) for future clinical evaluation.
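The amplitude-based state assignment can be sketched as a small helper mapping a normalized respiratory amplitude and breathing direction to one of the 10 states; this is an illustrative reading of the binning scheme, not the authors' code:

```python
def respiratory_state(amplitude, rising):
    # Map a normalized respiratory amplitude in [0, 1] and the breathing
    # direction (rising = inhaling) to one of the 10 amplitude states:
    # 0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e, 20%e.
    pct = int(round(amplitude * 5)) * 20  # nearest of 0, 20, ..., 100
    if pct == 0:
        return "0%"
    if pct == 100:
        return "100%"
    return f"{pct}%{'i' if rising else 'e'}"
```

The endpoints (0% and 100%) carry no inhale/exhale suffix because they mark the turning points of the cycle.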

  11. Noise-induced systematic errors in ratio imaging: serious artefacts and correction with multi-resolution denoising.

    PubMed

    Wang, Yu-Li

    2007-11-01

    Ratio imaging is playing an increasingly important role in modern cell biology. Combined with ratiometric dyes or fluorescence resonance energy transfer (FRET) biosensors, the approach allows the detection of conformational changes and molecular interactions in living cells. However, the approach is conducted increasingly under limited signal-to-noise ratio (SNR), where noise from multiple images can easily accumulate and lead to substantial uncertainty in ratio values. This study demonstrates that a far more serious concern is systematic errors that generate artificially high ratio values at low SNR. Thus, uneven SNR alone may lead to significant variations in ratios among different regions of a cell. Although correct average ratios may be obtained by applying conventional noise reduction filters, such as a Gaussian filter before calculating the ratio, these filters have a limited performance at low SNR and are prone to artefacts such as generating discrete domains not found in the correct ratio image. Much more reliable restoration may be achieved with multi-resolution denoising filters that take into account the actual noise characteristics of the detector. These filters are also capable of restoring structural details and photometric accuracy, and may serve as a general tool for retrieving reliable information from low-light live cell images. PMID:17970912
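The systematic bias described above is easy to reproduce: because 1/x is convex, zero-mean noise in the denominator inflates the expected ratio, and denoising before division removes most of the effect. A Monte Carlo sketch with hypothetical intensities:

```python
import numpy as np

# With zero-mean noise on both channels, E[N/D] exceeds n/d because
# 1/D is convex, so low-SNR regions get artificially high ratios.
rng = np.random.default_rng(0)
n_true, d_true, sigma = 100.0, 100.0, 40.0  # low SNR, hypothetical values
num = n_true + sigma * rng.standard_normal(200_000)
den = d_true + sigma * rng.standard_normal(200_000)
keep = den > 1.0  # crude guard against near-zero denominators
mean_ratio = np.mean(num[keep] / den[keep])  # noticeably above 1.0

# Denoising (here: plain averaging) before division removes most of the
# bias, which is the correction strategy the abstract advocates.
num_s = num.reshape(-1, 50).mean(axis=1)
den_s = den.reshape(-1, 50).mean(axis=1)
mean_ratio_smoothed = np.mean(num_s / den_s)  # close to 1.0
```

Plain averaging stands in here for the multi-resolution denoising filters the study recommends; the point is only that reducing noise before taking the ratio removes the systematic overestimate.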

  12. Integration of the denoising, inpainting and local harmonic B(z) algorithm for MREIT imaging of intact animals.

    PubMed

    Jeon, Kiwan; Kim, Hyung Joong; Lee, Chang-Ock; Seo, Jin Keun; Woo, Eung Je

    2010-12-21

    Conductivity imaging based on the current-injection MRI technique has been developed in magnetic resonance electrical impedance tomography. Current injected through a pair of surface electrodes induces a magnetic flux density distribution inside an imaging object, which results in additional magnetic field inhomogeneity. We can extract phase changes related to the current injection and obtain an image of the induced magnetic flux density. Without rotating the object inside the bore, we can measure only one component B(z) of the magnetic flux density B = (B(x), B(y), B(z)). Based on a relation between the internal conductivity distribution and B(z) data subject to multiple current injections, one may reconstruct cross-sectional conductivity images. As the image reconstruction algorithm, we have been using the harmonic B(z) algorithm in numerous experimental studies. Performing conductivity imaging of intact animal and human subjects, we found technical difficulties that originated from the MR signal void phenomena in the local regions of bones, lungs and gas-filled tubular organs. Measured B(z) data inside such a problematic region contain an excessive amount of noise that deteriorates the conductivity image quality. In order to alleviate this technical problem, we applied hybrid methods incorporating ramp-preserving denoising, harmonic inpainting with isotropic diffusion and ROI imaging using the local harmonic B(z) algorithm. These methods allow us to produce conductivity images of intact animals with the best achievable quality. We suggest guidelines for choosing a hybrid method depending on the overall noise level and the existence of distinct problematic regions of MR signal void. PMID:21098914

  13. Correlation between internal fiducial tumor motion and external marker motion for liver tumors imaged with 4D-CT

    SciTech Connect

    Beddar, A. Sam . E-mail: abeddar@mdanderson.org; Kainz, Kristofer; Briere, Tina Marie; Tsunashima, Yoshikazu; Pan Tinsu; Prado, Karl; Mohan, Radhe; Gillin, Michael; Krishnan, Sunil

    2007-02-01

    Purpose: We investigated the correlation between the motions of an external marker and internal fiducials implanted in the liver for 8 patients undergoing respiratory-based computed tomography (four-dimensional CT [4D-CT]) procedures. Methods and Materials: The internal fiducials were gold seeds, 3 mm in length and 1.2 mm in diameter. Four patients each had one implanted fiducial, and the other four had three implanted fiducials. The external marker was a plastic box, part of the Real-Time Position Management System (RPM) used to track the patient's respiration. Each patient received a standard helical CT scan followed by a time-correlated CT-image acquisition (4D-CT). The 4D-CT images were reconstructed in 10 separate phases covering the entire respiratory cycle. Results: Internal fiducial motion was predominantly in the superior-inferior direction, with a range of 7.5-17.5 mm. The correlation between external respiration and internal fiducial motion was best during expiration. For 2 patients whose three fiducials were separated by a maximum of 3.2 cm, the motions of the fiducials were well correlated, whereas for 2 patients with more widely spaced fiducials, there was less correlation. Conclusions: In general, there is a good correlation between internal fiducial motion imaged by 4D-CT and external marker motion. We have demonstrated that gating may be best performed at the end of the respiratory cycle. Special attention should be paid to gating for patients whose fiducials do not move in synchrony, because gating on the correct respiratory amplitude alone would not guarantee that the entire tumor volume is within the treatment field.
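
    The kind of external-internal correlation analysis reported above can be sketched as follows; the traces are synthetic sinusoids with an assumed 0.2 s lag between internal and external motion, not patient data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# One 4 s breathing cycle sampled at 10 Hz.
t = [k / 10.0 for k in range(40)]
internal = [12.0 * 0.5 * (1 - math.cos(2 * math.pi * tk / 4.0)) for tk in t]         # mm, SI fiducial motion
external = [1.0 * 0.5 * (1 - math.cos(2 * math.pi * (tk - 0.2) / 4.0)) for tk in t]  # marker with 0.2 s lag
r = pearson(internal, external)
print(r)
```

For pure sinusoids sampled over a whole period, r equals the cosine of the phase lag, so even a modest lag leaves r above 0.95; phase-dependent lags are what degrade the correlation clinically.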

  14. 4-D Photoacoustic Tomography

    PubMed Central

    Xiang, Liangzhong; Wang, Bo; Ji, Lijun; Jiang, Huabei

    2013-01-01

    Photoacoustic tomography (PAT) offers three-dimensional (3D) structural and functional imaging of living biological tissue with label-free, optical absorption contrast. These attributes lend PAT imaging to a wide variety of applications in clinical medicine and preclinical research. Despite advances in live animal imaging with PAT, there is still a need for 3D imaging at centimeter depths in real-time. We report the development of four-dimensional (4D) PAT, which integrates temporal resolution with 3D spatial resolution, obtained using spherical arrays of ultrasonic detectors. The 4D PAT technique generates motion pictures of imaged tissue, enabling real-time tracking of dynamic physiological and pathological processes at hundred-micrometer spatial and millisecond temporal resolution. The 4D PAT technique is used here to image needle-based drug delivery and pharmacokinetics. We also use this technique to monitor 1) fast hemodynamic changes during inter-ictal epileptic seizures and 2) temperature variations during tumor thermal therapy. PMID:23346370

  15. 4-D Photoacoustic Tomography

    NASA Astrophysics Data System (ADS)

    Xiang, Liangzhong; Wang, Bo; Ji, Lijun; Jiang, Huabei

    2013-01-01

    Photoacoustic tomography (PAT) offers three-dimensional (3D) structural and functional imaging of living biological tissue with label-free, optical absorption contrast. These attributes lend PAT imaging to a wide variety of applications in clinical medicine and preclinical research. Despite advances in live animal imaging with PAT, there is still a need for 3D imaging at centimeter depths in real-time. We report the development of four-dimensional (4D) PAT, which integrates temporal resolution with 3D spatial resolution, obtained using spherical arrays of ultrasonic detectors. The 4D PAT technique generates motion pictures of imaged tissue, enabling real-time tracking of dynamic physiological and pathological processes at hundred-micrometer spatial and millisecond temporal resolution. The 4D PAT technique is used here to image needle-based drug delivery and pharmacokinetics. We also use this technique to monitor 1) fast hemodynamic changes during inter-ictal epileptic seizures and 2) temperature variations during tumor thermal therapy.

  16. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves binning of the projection data into temporal phases, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given measurements of the breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no binning requirement. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. A convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy, owing to the absence of binning and to the reduced number of unknowns afforded by the 5D model.
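
    The core of the 5D model is the parametrization x(t) = x0 + alpha * v(t) + beta * f(t), with breathing amplitude v, flow f = dv/dt, and two static vector fields alpha and beta. A toy sketch with made-up field values shows how the same amplitude maps to different positions on inhale versus exhale (hysteresis):

```python
# x(t) = x0 + alpha * v(t) + beta * f(t): voxel position from its reference
# position x0, breathing amplitude v, flow f = dv/dt, and two static fields.
def displaced_position(x0, alpha, beta, v, f):
    return tuple(x + a * v + b * f for x, a, b in zip(x0, alpha, beta))

x0 = (10.0, 20.0, 30.0)   # voxel position in the reference image (mm)
alpha = (0.0, 0.1, 1.2)   # mm per unit amplitude (assumed values)
beta = (0.0, 0.02, 0.3)   # mm per unit flow: the hysteresis term (assumed)

# Same amplitude v, opposite flow sign on inhale vs exhale:
pos_inhale = displaced_position(x0, alpha, beta, v=8.0, f=5.0)
pos_exhale = displaced_position(x0, alpha, beta, v=8.0, f=-5.0)
print(pos_inhale, pos_exhale)
```

Because only the reference image and the two fields are unknown, every projection constrains the same small set of variables, which is what removes the need for phase binning.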

  17. Advanced image reconstruction strategies for 4D prostate DCE-MRI: steps toward clinical practicality

    NASA Astrophysics Data System (ADS)

    Stinson, Eric G.; Borisch, Eric A.; Froemming, Adam T.; Kawashima, Akira; Young, Phillip M.; Warndahl, Brent A.; Grimm, Roger C.; Manduca, Armando; Riederer, Stephen J.; Trzasko, Joshua D.

    2015-09-01

    Dynamic contrast-enhanced (DCE) MRI is an important tool for the detection and characterization of primary and recurring prostate cancer. Advanced reconstruction strategies (e.g., sparse or low-rank regression) provide improved depiction of contrast dynamics and pharmacokinetic parameters; however, the high computation cost of reconstructing 4D (3D+time, 50+ frames) datasets typically inhibits their routine clinical use. Here, a novel alternating direction method-of-multipliers (ADMM) optimization strategy is described that enables these methods to be executed in <5 minutes, and thus within the standard clinical workflow. After overviewing the mechanics of this approach, high-performance implementation strategies will be discussed and demonstrated through clinical cases.
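
    The flavor of an ADMM iteration can be shown on a scalar toy problem (a generic sketch, not the paper's reconstruction): minimize 0.5*(x - b)^2 + lam*|x| by splitting x = z, then alternate a quadratic x-update, a soft-thresholding z-update, and a dual update:

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def admm_l1(b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*(x - b)^2 + lam*|x| via the splitting x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic x-update
        z = soft(x + u, lam / rho)             # proximal z-update
        u += x - z                             # dual (scaled multiplier) update
    return x

x_star = admm_l1(3.0, 1.0)
print(x_star)  # the closed-form answer is soft(3, 1) = 2
```

In the imaging setting each sub-step becomes a cheap, parallelizable operation, which is what makes the sub-5-minute runtimes plausible.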

  18. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    SciTech Connect

    Nyflot, Matthew J.; Lee, Tzu-Cheng; Alessio, Adam M.; Kinahan, Paul E.; Wollenweber, Scott D.; Stearns, Charles W.; Bowen, Stephen R.

    2015-01-15

    Purpose: Respiratory-correlated positron emission tomography (4D PET/CT) is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in the ground truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUV_max, SUV_mean, SUV_peak, and segmented tumor volume was evaluated as RC_max, RC_mean, RC_peak, and RC_vol, representing percent difference relative to the static ground truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RC_max, RC_peak, RC_mean, and RC_vol. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RC_max, RC_peak, RC_mean, and RC_vol. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by

  19. WE-G-BRF-09: Force- and Image-Adaptive Strategies for Robotised Placement of 4D Ultrasound Probes

    SciTech Connect

    Kuhlemann, I; Bruder, R; Ernst, F; Schweikard, A

    2014-06-15

    Purpose: To allow continuous acquisition of high quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a 6-axis industrial robot (Adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact pressure regulation by a PD controller strategy. The basis for live target tracking is a continuous minimum contact pressure to ensure good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib and lung shadows are a common source of interference and can disrupt the tracking. To ensure continuous tracking, these artefacts had to be detected so the probe could be realigned automatically. The detection is realised by multiple algorithms based on entropy calculations as well as a determination of the image quality. Results: Through active contact pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The results regarding the image processing clearly demonstrate the feasibility of detecting image artefacts like rib shadows in real-time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact pressure control and automatically detected image artefacts. This makes it possible to compensate for such interference by realigning the probe and thus continuously optimising the ultrasound images. This is a major step towards fully automated transducer positioning and opens the possibility of stable target tracking in
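
    Contact-pressure regulation by a PD controller can be sketched on a toy 1D contact model (assumed stiffness and hand-tuned gains, not the authors' system): the tissue surface moves sinusoidally with breathing, and the controller servos the probe to hold a constant target force:

```python
import math

k = 2.0            # N/mm tissue stiffness (assumed)
target = 5.0       # N target contact force
kp, kd = 0.4, 0.1  # PD gains, hand-tuned for this toy model

probe = 0.0        # probe position, mm
prev_err = 0.0
errors = []
for step in range(2000):
    t = step * 0.01                                  # 100 Hz control loop
    surface = 3.0 * math.sin(2 * math.pi * t / 4.0)  # breathing motion, mm
    force = max(0.0, k * (probe - surface))          # contact force (0 = no contact)
    err = target - force
    probe += kp * err + kd * (err - prev_err)        # PD correction, mm
    prev_err = err
    if t > 4.0:                                      # record after settling
        errors.append(abs(err))
worst = max(errors)
print(worst)
```

Despite a 6 mm peak-to-peak surface excursion, the residual force error stays a small fraction of the 5 N setpoint, mirroring the large variance reduction reported above.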

  20. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images from limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the static part common across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and the moving volumes to be updated (during each iteration) with global projections and the well-solved static volume, respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the

  1. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  2. 4D optical coherence tomography of the embryonic heart using gated imaging

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rothenberg, Florence; Roy, Debashish; Nikolski, Vladimir P.; Wilson, David L.; Efimov, Igor R.; Rollins, Andrew M.

    2005-04-01

    Computed tomography (CT), ultrasound, and magnetic resonance imaging have been used to image and diagnose diseases of the human heart. By gating the acquisition of the images to the heart cycle (gated imaging), these modalities enable one to produce 3D images of the heart without significant motion artifact and to more accurately calculate various parameters such as ejection fractions [1-3]. Unfortunately, these imaging modalities give inadequate resolution when investigating embryonic development in animal models. Defects in developmental mechanisms during embryogenesis have long been thought to result in congenital cardiac anomalies. Our understanding of normal mechanisms of heart development and how abnormalities can lead to defects has been hampered by our inability to detect anatomic and physiologic changes in these small (<2 mm) organs. Optical coherence tomography (OCT) has made it possible to visualize internal structures of the living embryonic heart with high resolution in two and three dimensions. OCT offers higher resolution than ultrasound (30 um axial, 90 um lateral) and magnetic resonance microscopy (25 um axial, 31 um lateral) [4, 5], with greater depth penetration than confocal microscopy (200 um). OCT uses back-reflected light from a sample to create an image with axial resolutions ranging from 2-15 um, while penetrating 1-2 mm in depth [6]. In the past, OCT groups have estimated ejection fractions using 2D images in a Xenopus laevis [7], created 3D renderings of chick embryo hearts [8], and used a gated reconstruction technique to produce 2D Doppler OCT images of an in vivo Xenopus laevis heart [9]. In this paper we present a gated imaging system that allowed us to produce a 16-frame 3D movie of a beating chick embryo heart. The heart was excised from a day two (stage 13) chicken embryo and electrically paced at 1 Hz. We acquired 2D images (B-scans) in 62.5 ms, which provides enough temporal resolution to distinguish end

  3. 4D motion modeling of the coronary arteries from CT images for robotic assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Dong Ping; Edwards, Eddie; Mei, Lin; Rueckert, Daniel

    2009-02-01

    In this paper, we present a novel approach for coronary artery motion modeling from cardiac computed tomography (CT) images. The aim of this work is to develop a 4D motion model of the coronaries for image guidance in robotic-assisted totally endoscopic coronary artery bypass (TECAB) surgery. To utilize the pre-operative cardiac images to guide the minimally invasive surgery, it is essential to have a 4D cardiac motion model to be registered with the stereo endoscopic images acquired intraoperatively using the da Vinci robotic system. In this paper, we investigate the extraction of the coronary arteries and the modelling of their motion from a dynamic sequence of cardiac CT. We use a multi-scale vesselness filter to enhance vessels in the cardiac CT images. The centerlines of the arteries are extracted using a ridge traversal algorithm. Using this method the coronaries can be extracted in near real-time as only local information is used in vessel tracking. To compute the deformation of the coronaries due to cardiac motion, the motion is extracted from a dynamic sequence of cardiac CT. Each timeframe in this sequence is registered to the end-diastole timeframe of the sequence using a non-rigid registration algorithm based on free-form deformations. Once the images have been registered, a dynamic motion model of the coronaries can be obtained by applying the computed free-form deformations to the extracted coronary arteries. To validate the accuracy of the motion model we compare the actual position of the coronaries in each time frame with the predicted position of the coronaries as estimated from the non-rigid registration. We expect that this motion model of the coronaries can facilitate the planning of TECAB surgery, and through registration with real-time endoscopic video images it can reduce the conversion rate from TECAB to conventional procedures.

  4. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    PubMed

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics particularly in short dynamic frames and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy to implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [(11)C] Raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3% while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectivity. PMID:26080302
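
    A classical relative of such iterative deconvolution is the Van Cittert scheme f_{k+1} = f_k + alpha * (g - h*f_k). The sketch below (1D, known 3-tap PSF, no noise, and without the paper's weighted least squares or spatiotemporal regularization) recovers a sharp structure from its partial-volume-blurred measurement:

```python
def convolve(x, h):
    """'Same' convolution with a 3-tap kernel, edge-clamped."""
    n = len(x)
    out = []
    for i in range(n):
        s = 0.0
        for j, w in enumerate(h):
            k = min(max(i + j - 1, 0), n - 1)
            s += w * x[k]
        out.append(s)
    return out

def van_cittert(g, h, iters=200, alpha=1.0):
    """f_{k+1} = f_k + alpha * (g - h*f_k): simple iterative deconvolution."""
    f = g[:]
    for _ in range(iters):
        hf = convolve(f, h)
        f = [fi + alpha * (gi - hfi) for fi, gi, hfi in zip(f, g, hf)]
    return f

h = [0.25, 0.5, 0.25]           # known PSF (partial-volume blur)
truth = [0, 0, 1.0, 1.0, 0, 0]  # sharp structure
g = convolve(truth, h)          # blurred measurement: peak drops to 0.75
rec = van_cittert(g, h)
print(rec)
```

With noise present, unregularized iterations amplify it, which is why the paper's method adds spatial and temporal regularization to the deconvolution.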

  5. Iterative 4D cardiac micro-CT image reconstruction using an adaptive spatio-temporal sparsity prior

    NASA Astrophysics Data System (ADS)

    Ritschl, Ludwig; Sawall, Stefan; Knaup, Michael; Hess, Andreas; Kachelrieß, Marc

    2012-03-01

    Temporal-correlated image reconstruction, also known as 4D CT image reconstruction, is a big challenge in computed tomography. The reasons for incorporating the temporal domain into the reconstruction are motions of the scanned object, which would otherwise lead to motion artifacts. The standard method for 4D CT image reconstruction is extracting single motion phases and reconstructing them separately. These reconstructions can suffer from undersampling artifacts due to the low number of used projections in each phase. There are different iterative methods which try to incorporate some a priori knowledge to compensate for these artifacts. In this paper we want to follow this strategy. The cost function we use is a higher dimensional cost function which accounts for the sparseness of the measured signal in the spatial and temporal directions. This leads to the definition of a higher dimensional total variation. The method is validated using in vivo cardiac micro-CT mouse data. Additionally, we compare the results to phase-correlated reconstructions using the FDK algorithm and a total variation constrained reconstruction, where the total variation term is only defined in the spatial domain. The reconstructed datasets show strong improvements in terms of artifact reduction and low-contrast resolution compared to other methods. Thereby the temporal resolution of the reconstructed signal is not affected.

  6. A novel non-registration based segmentation approach of 4D dynamic upper airway MR images: minimally interactive fuzzy connectedness

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Sin, Sanghun; Wagshul, Mark E.; Arens, Raanan

    2014-03-01

    There are several disease conditions that lead to upper airway restrictive disorders. In the study of these conditions, it is important to take into account the dynamic nature of the upper airway. Currently, dynamic MRI is the modality of choice for studying these diseases. Unfortunately, the contrast resolution obtainable in the images poses many challenges for an effective segmentation of the upper airway structures. No viable methods have been developed to date to solve this problem. In this paper, we demonstrate the adaptation of the iterative relative fuzzy connectedness (IRFC) algorithm for this application as a potential practical tool. After preprocessing to correct for background image non-uniformities and the non-standardness of MRI intensities, seeds are specified for the airway and its crucial background tissue components in only the 3D image corresponding to the first time instance of the 4D volume. Subsequently, the process runs without human interaction and completes segmentation of the whole 4D volume in 10 s. Our evaluations indicate that the segmentations are of very good quality, achieving true positive and false positive volume fractions and a boundary distance, with respect to reference manual segmentations, of about 93%, 0.1%, and 0.5 mm, respectively.
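
    The true positive and false positive volume fractions quoted above can be computed as below; the masks are synthetic, and the FPVF here uses the background-normalized convention (false positive voxels divided by background volume), which is one common choice:

```python
def tpvf_fpvf(seg, ref):
    """True/false positive volume fractions of a binary segmentation
    against a reference mask, as percentages."""
    tp = sum(1 for s, r in zip(seg, ref) if s and r)
    fp = sum(1 for s, r in zip(seg, ref) if s and not r)
    ref_vol = sum(ref)
    bg_vol = len(ref) - ref_vol
    return 100.0 * tp / ref_vol, 100.0 * fp / bg_vol

# Synthetic flattened masks: a 90-voxel airway in a 1000-voxel scene.
ref = [1] * 90 + [0] * 910
seg = [1] * 84 + [0] * 6 + [1] + [0] * 909  # misses 6 voxels, adds 1 spurious
tpvf, fpvf = tpvf_fpvf(seg, ref)
print(tpvf, fpvf)  # ~93.3% TPVF, ~0.11% FPVF, close to the figures above
```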

  7. Accelerated 4D Quantitative Single Point EPR Imaging Using Model-based Reconstruction

    PubMed Central

    Jang, Hyungseok; Matsumoto, Shingo; Devasahayam, Nallathamby; Subramanian, Sankaran; Zhuo, Jiachen; Krishna, Murali C.; McMillan, Alan B

    2014-01-01

    Purpose: EPRI has surfaced as a promising non-invasive imaging modality that is capable of imaging tissue oxygenation. Due to the extremely short spin-spin relaxation time, EPRI benefits from single point imaging but inherently suffers from limited spatial and temporal resolution, preventing localization of small hypoxic tissues and differentiation of hypoxia dynamics, making accelerated imaging a crucial issue. Methods: In this study, methods for accelerated single point imaging were developed by combining a bilateral k-space extrapolation technique with model-based reconstruction that benefits from dense sampling in the parameter domain (measurement of the T2* decay of an FID). In bilateral k-space extrapolation, more k-space samples are obtained in a sparsely sampled region by bilaterally extrapolating data from temporally neighboring k-spaces. To improve the accuracy of T2* estimation, a principal component analysis (PCA)-based method was implemented. Results: In a computer simulation and a phantom experiment, the proposed methods showed their capability for reliable T2* estimation with high acceleration (8-fold, 15-fold, and 30-fold accelerations for 61×61×61, 95×95×95, and 127×127×127 matrices, respectively). Conclusion: By applying bilateral k-space extrapolation and model-based reconstruction, improved scan times with higher spatial resolution can be achieved in the current SP-EPRI modality. PMID:24803382
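
    In the mono-exponential case, T2* estimation from a densely sampled FID reduces to a log-linear fit: ln S(t) = ln S0 - t/T2*. A minimal sketch on noise-free synthetic samples (illustrative values; the paper uses a PCA-based method for robustness to noise):

```python
import math

def fit_t2star(times, signal):
    """Least-squares line through (t, ln S): slope = -1/T2*, intercept = ln S0."""
    n = len(times)
    ys = [math.log(s) for s in signal]
    mt = sum(times) / n
    my = sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(times, ys)) / \
            sum((t - mt) ** 2 for t in times)
    return -1.0 / slope, math.exp(my - slope * mt)  # (T2*, S0)

# Noise-free FID samples with S0 = 100 and T2* = 600 ns (illustrative values).
times = [i * 50.0 for i in range(1, 10)]  # ns
signal = [100.0 * math.exp(-t / 600.0) for t in times]
t2, s0 = fit_t2star(times, signal)
print(t2, s0)  # recovers 600 and 100 exactly on noise-free data
```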

  8. Dynamic Multiscale Boundary Conditions for 4D CT Images of Healthy and Emphysematous Rat

    SciTech Connect

    Jacob, Rick E.; Carson, James P.; Thomas, Mathew; Einstein, Daniel R.

    2013-06-14

    Changes in the shape of the lung during breathing determine the movement of airways and alveoli, and thus impact airflow dynamics. Modeling airflow dynamics in health and disease is a key goal for predictive multiscale models of respiration. Past efforts to model changes in lung shape during breathing have measured shape at multiple breath-holds. However, breath-holds do not capture hysteretic differences between inspiration and expiration resulting from the additional energy required for inspiration. Alternatively, imaging dynamically – without breath-holds – allows measurement of hysteretic differences. In this study, we acquire multiple micro-CT images per breath (4DCT) in live rats, and from these images we develop, for the first time, dynamic volume maps. These maps show changes in local volume across the entire lung throughout the breathing cycle and accurately predict the global pressure-volume (PV) hysteresis.

  9. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results.

    PubMed

    Manhart, Michael T; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K; Hornegger, Joachim; Doerfler, Arnd

    2014-08-21

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved. PMID:25069101
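
    A joint bilateral filter weights each neighbour by spatial distance and by intensity similarity in a guidance image, so edges present in the guide are preserved while noise in the target is averaged away. A 1D sketch with illustrative parameters (not the FDK-JBF implementation):

```python
import math

def joint_bilateral_1d(signal, guide, sigma_s=2.0, sigma_r=10.0, radius=3):
    """Weighted mean per sample: spatial weights from distance, range weights
    from intensity differences in the *guidance* signal."""
    out = []
    n = len(signal)
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
              * math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# Noisy step edge; the guidance holds a clean copy of the same edge.
guide = [0.0] * 8 + [100.0] * 8
noise = [3, -2, 1, -3, 2, -1, 3, -2] * 2
noisy = [g + d for g, d in zip(guide, noise)]
smooth = joint_bilateral_1d(noisy, guide)
print(smooth)  # noise is averaged down, but the 0 -> 100 edge is not blurred
```

A plain Gaussian of the same width would drag samples near the edge toward 50; the guide-driven range weight is what prevents that mixing.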

  10. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
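
    The Perona-Malik scheme referenced above diffuses with a conductivity such as g(s) = 1/(1 + (s/K)^2), so small (noise) gradients are smoothed while large (edge) gradients shut diffusion off. A 1D explicit-scheme sketch with illustrative parameters:

```python
def perona_malik_1d(u, kappa=10.0, lam=0.2, steps=20):
    """Explicit Perona-Malik diffusion: flux g(du)*du with g(s) = 1/(1+(s/kappa)^2).
    Gradients well below kappa diffuse; gradients well above kappa are preserved."""
    u = u[:]
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            de = u[i + 1] - u[i]                  # east difference
            dw = u[i - 1] - u[i]                  # west difference
            ge = 1.0 / (1.0 + (de / kappa) ** 2)  # edge-stopping weights
            gw = 1.0 / (1.0 + (dw / kappa) ** 2)
            new[i] = u[i] + lam * (ge * de + gw * dw)
        u = new
    return u

def total_variation(v):
    return sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))

noisy_edge = [0, 2, -1, 1, 0, 2, 100, 99, 101, 98, 100, 101]
out = perona_malik_1d(noisy_edge)
print(out)  # small oscillations smoothed; the ~100-unit edge survives
```

A linear (Gaussian) diffusion with the same step count would erode the edge; here the conductivity at the edge is about 0.01, effectively freezing it.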

  11. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results

    NASA Astrophysics Data System (ADS)

    Manhart, Michael T.; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K.; Hornegger, Joachim; Doerfler, Arnd

    2014-08-01

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.

  12. Dose-Response Relationship for Image-Guided Stereotactic Body Radiotherapy of Pulmonary Tumors: Relevance of 4D Dose Calculation

    SciTech Connect

    Guckenberger, Matthias Wulf, Joern; Mueller, Gerd; Krieger, Thomas; Baier, Kurt; Gabor, Manuela; Richter, Anne; Wilbert, Juergen; Flentje, Michael

    2009-05-01

    Purpose: To evaluate outcome after image-guided stereotactic body radiotherapy (SBRT) for early-stage non-small-cell lung cancer (NSCLC) and pulmonary metastases. Methods and Materials: A total of 124 patients with 159 pulmonary lesions (metastases n = 118; NSCLC, n = 41; Stage IA, n = 13; Stage IB, n = 19; T3N0, n = 9) were treated with SBRT. Patients were treated with hypofractionated schemata (one to eight fractions of 6-26 Gy); biologic effective doses (BED) to the clinical target volume (CTV) were calculated based on four-dimensional (4D) dose calculation. The position of the pulmonary target was verified using volume imaging before all treatments. Results: With mean/median follow-up of 18/14 months, actuarial local control was 83% at 36 months with no difference between NSCLC and metastases. The dose to the CTV based on 4D dose calculation was closely correlated with local control: local control rates were 89% and 62% at 36 months for >100 Gy and <100 Gy BED (p = 0.0001), respectively. Actuarial freedom from regional and systemic progression was 34% at 36 months for primary NSCLC group; crude rate of regional failure was 15%. Three-year overall survival was 37% for primary NSCLC and 16% for metastases; no dose-response relationship for survival was observed. Exacerbation of comorbidities was the most frequent cause of death for primary NSCLC. Conclusions: Doses of >100 Gy BED to the CTV based on 4D dose calculation resulted in excellent local control rates. This cutoff dose is not specific to the treatment technique and protocol of our study and may serve as a general recommendation.
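The BED values quoted above follow from the standard linear-quadratic model, BED = n·d·(1 + d/(α/β)). A small worked example, assuming the common tumor value α/β = 10 Gy (the abstract does not state the value used):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=10.0):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)).
    alpha/beta = 10 Gy is a common tumor assumption, not taken from
    this abstract."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# e.g. 3 fractions of 18 Gy: BED = 54 * (1 + 1.8) = 151.2 Gy,
# comfortably above the 100 Gy cutoff reported in the abstract
```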

  13. 4-D flow magnetic resonance imaging: blood flow quantification compared to 2-D phase-contrast magnetic resonance imaging and Doppler echocardiography

    PubMed Central

    Gabbour, Maya; Schnell, Susanne; Jarvis, Kelly; Robinson, Joshua D.; Markl, Michael

    2015-01-01

    Background Doppler echocardiography (echo) is the reference standard for blood flow velocity analysis, and two-dimensional (2-D) phase-contrast magnetic resonance imaging (MRI) is considered the reference standard for quantitative blood flow assessment. However, both clinical standard-of-care techniques are limited by 2-D acquisitions and single-direction velocity encoding, which may make them inadequate to assess the complex three-dimensional hemodynamics seen in congenital heart disease. Four-dimensional flow MRI (4-D flow) enables qualitative and quantitative analysis of complex blood flow in the heart and great arteries. Objectives The objectives of this study are to compare 4-D flow with 2-D phase-contrast MRI for quantification of aortic and pulmonary flow and to evaluate the advantage of 4-D flow-based volumetric flow analysis compared to 2-D phase-contrast MRI and echo for peak velocity assessment in children and young adults. Materials and methods Two-dimensional phase-contrast MRI of the aortic root, main pulmonary artery (MPA), and right and left pulmonary arteries (RPA, LPA) and 4-D flow with volumetric coverage of the aorta and pulmonary arteries were performed in 50 patients (mean age: 13.1±6.4 years). Four-dimensional flow analyses included calculation of net flow and regurgitant fraction with 4-D flow analysis planes similarly positioned to 2-D planes. In addition, 4-D flow volumetric assessment of aortic root/ascending aorta and MPA peak velocities was performed and compared to 2-D phase-contrast MRI and echo. Results Excellent correlation and agreement were found between 2-D phase-contrast MRI and 4-D flow for net flow (r=0.97, P<0.001) and excellent correlation with good agreement was found for regurgitant fraction (r=0.88, P<0.001) in all vessels. Two-dimensional phase-contrast MRI significantly underestimated aortic (P=0.032) and MPA (P<0.001) peak velocities compared to echo, while volumetric 4-D flow analysis resulted in higher (aortic: P=0

  14. Online 4d Reconstruction Using Multi-Images Available Under Open Access

    NASA Astrophysics Data System (ADS)

    Ioannides, M.; Hadjiprocopi, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E.; Makantasis, K.; Santos, P.; Fellner, D.; Stork, A.; Balet, O.; Julien, M.; Weinlinger, G.; Johnson, P. S.; Klein, M.; Fritsch, D.

    2013-07-01

    The advent of technology in digital cameras and their incorporation into virtually any smart mobile device has led to an explosion of the number of photographs taken every day. Today, the number of images stored online and available freely has reached unprecedented levels. It is estimated that in 2011, there were over 100 billion photographs stored in just one of the major social media sites. This number is growing exponentially. Moreover, advances in the fields of Photogrammetry and Computer Vision have led to significant breakthroughs such as the Structure from Motion algorithm, which creates 3D models of objects using their two-dimensional photographs. The existence of powerful and affordable computational machinery enables the reconstruction not only of complex structures but also of entire cities. This paper illustrates an overview of our methodology for producing 3D models of Cultural Heritage structures such as monuments and artefacts from 2D data (pictures, video), available on Internet repositories, social media, Google Maps, Bing, etc. We also present new approaches to semantic enrichment of the end results and their subsequent export to Europeana, the European digital library, for integrated, interactive 3D visualisation within regular web browsers using WebGL and X3D. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical structures from millions of images floating around the web and interact with them.
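At the core of Structure from Motion is the triangulation of 3D points from their 2D projections in multiple views. A minimal linear (DLT) two-view triangulation sketch, with illustrative camera matrices that are not taken from the paper's pipeline:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized image points (x, y).
    Solves A*X_h = 0 for the homogeneous 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second camera translated 1 unit in x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

A full SfM pipeline additionally estimates the camera matrices themselves (feature matching, essential-matrix estimation, bundle adjustment), which is omitted here.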

  15. Assessment of regional ventilation and deformation using 4D-CT imaging for healthy human lungs during tidal breathing.

    PubMed

    Jahani, Nariman; Choi, Sanghun; Choi, Jiwoong; Iyer, Krishna; Hoffman, Eric A; Lin, Ching-Long

    2015-11-15

    This study aims to assess regional ventilation, nonlinearity, and hysteresis of human lungs during dynamic breathing via image registration of four-dimensional computed tomography (4D-CT) scans. Six healthy adult humans were studied by spiral multidetector-row CT during controlled tidal breathing as well as during total lung capacity and functional residual capacity breath holds. Static images were utilized to contrast static vs. dynamic (deep vs. tidal) breathing. A rolling-seal piston system was employed to maintain consistent tidal breathing during 4D-CT spiral image acquisition, providing required between-breath consistency for physiologically meaningful reconstructed respiratory motion. Registration-derived variables including local air volume and anisotropic deformation index (ADI, an indicator of preferential deformation in response to local force) were employed to assess regional ventilation and lung deformation. Lobar distributions of air volume change during tidal breathing were correlated with those of deep breathing (R² ≈ 0.84). Small discrepancies between tidal and deep breathing were shown to be likely due to different distributions of air volume change in the left and the right lungs. We also demonstrated an asymmetric characteristic of flow rate between inhalation and exhalation. With ADI, we were able to quantify nonlinearity and hysteresis of lung deformation that can only be captured in dynamic images. Nonlinearity quantified by ADI is greater during inhalation, and it is stronger in the lower lobes (P < 0.05). Lung hysteresis estimated by the difference of ADI between inhalation and exhalation is more significant in the right lungs than in the left lungs. PMID:26316512
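A common way to obtain a registration-based local air volume from CT numbers is the two-compartment air/tissue mixture model, where the air fraction of a voxel is approximately -HU/1000. This sketch shows the idea; the study's exact formulation may differ:

```python
import numpy as np

def air_volume(hu, voxel_volume_mm3=1.0):
    """Per-voxel air volume from CT numbers, assuming a two-compartment
    air/tissue mixture: air fraction = -HU / 1000 for HU in [-1000, 0]."""
    frac = np.clip(np.asarray(hu, dtype=float) / -1000.0, 0.0, 1.0)
    return frac * voxel_volume_mm3

# Regional ventilation: air-volume change between two registered phases
hu_exhale = np.array([-700.0, -500.0])
hu_inhale = np.array([-850.0, -650.0])
dv = air_volume(hu_inhale) - air_volume(hu_exhale)  # mm^3 of air gained per voxel
```

Mapping exhale voxels to their inhale positions (the registration step) is what allows this difference to be computed voxel-by-voxel.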

  16. Comment on ‘A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry’

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Kottler, Christian

    2015-01-01

    In a recent paper (Scholkmann et al 2014 Phys. Med. Biol. 59 1425-40) we presented a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast, differential phase contrast and dark-field contrast images retrieved from x-ray Talbot-Lau grating interferometry. In this comment we give additional information and report on the application of our framework to breast cancer tissue, which we presented in our paper as an example. The applied procedure is suitable for a qualitative comparison of different algorithms. For a quantitative comparison, however, the original data would be needed as input.

  17. 4-D imaging and monitoring of the Solfatara crater (Italy) by ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Pilz, Marco; Parolai, Stefano; Woith, Heiko; Gresse, Marceau; Vandemeulebrouck, Jean

    2016-04-01

    Imaging shallow subsurface structures and monitoring related temporal variations are two of the main tasks for modern geosciences and seismology. Although many observations have reported temporal velocity changes, e.g., in volcanic areas and on landslides, new methods based on passive sources like ambient seismic noise can provide accurate spatially and temporally resolved information on the velocity structure and on velocity changes. The success of these passive applications is explained by the fact that these methods are based on surface waves which are always present in the ambient seismic noise wave field because they are excited preferentially by superficial sources. Such surface waves can easily be extracted because they dominate the Green's function between receivers located at the surface. For real-time monitoring of the shallow velocity structure of the Solfatara crater, one of the forty volcanoes in the Campi Flegrei area characterized by an intense hydrothermal activity due to the interaction of deep convection and meteoric water, we have installed a dense network of 50 seismological sensing units covering the whole surface area in the framework of the European project MED-SUV (The MED-SUV project has received funding from the European Union Seventh Framework Programme FP7 under Grant agreement no 308665). Continuous recordings of the ambient seismic noise over several days as well as signals of an active vibroseis source have been used. Based on a weighted inversion procedure for 3D-passive imaging using ambient noise cross-correlations of both Rayleigh and Love waves, we will present a high-resolution shear-wave velocity model of the structure beneath the Solfatara crater and its temporal changes. Results of seismic tomography are compared with a 3-D electrical resistivity model and CO2 flux map.
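The core of ambient noise methods — retrieving an inter-station travel time from the cross-correlation of noise records — can be illustrated with synthetic 1-D traces (illustrative numbers, not the Solfatara data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 4000, 37  # samples; inter-station travel time in samples

# A common "ambient noise" wavefield recorded at station A, and at
# station B after propagating for `delay` samples, plus local noise
source = rng.standard_normal(n)
rec_a = source + 0.2 * rng.standard_normal(n)
rec_b = np.concatenate([np.zeros(delay), source[:-delay]]) \
        + 0.2 * rng.standard_normal(n)

# Cross-correlating the two records yields an estimate of the
# inter-station Green's function; its peak lag gives the travel time
corr = np.correlate(rec_b, rec_a, mode='full')
lags = np.arange(-(n - 1), n)
travel_time = lags[np.argmax(corr)]
```

Repeating this for many station pairs and inverting the travel times for a velocity model is the tomography step described in the abstract.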

  18. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
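The Kalman predict/update cycle that governs the tracked-object dynamics can be sketched with a 1-D constant-velocity model. This is a generic textbook filter, not the authors' state model:

```python
import numpy as np

def kalman_track(measurements, r=1.0, q=1e-3):
    """1-D constant-velocity Kalman filter: state = [position, velocity].
    Illustrates the prediction/estimation role the paper assigns to its
    tracking filter; r and q are illustrative noise levels."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

In the paper's framework, the predicted state additionally informs the Bayesian denoising prior for the next frame, so tracking and enhancement reinforce each other.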

  19. Impact of scanning parameters and breathing patterns on image quality and accuracy of tumor motion reconstruction in 4D CBCT: a phantom study.

    PubMed

    Lee, Soyoung; Yan, Guanghua; Lu, Bo; Kahler, Darren; Li, Jonathan G; Sanjiv, Samat S

    2015-01-01

    Four-dimensional, cone-beam CT (4D CBCT) substantially reduces respiration-induced motion blurring artifacts in three-dimension (3D) CBCT. However, the image quality of 4D CBCT is significantly degraded which may affect its accuracy in localizing a mobile tumor for high-precision, image-guided radiation therapy (IGRT). The purpose of this study was to investigate the impact of scanning parameters (hereinafter collectively referred to as scanning sequence) and breathing patterns on the image quality and the accuracy of computed tumor trajectory for a commercial 4D CBCT system, in preparation for its clinical implementation. We simulated a series of periodic and aperiodic sinusoidal breathing patterns with a respiratory motion phantom. The aperiodic pattern was created by varying the period or amplitude of individual sinusoidal breathing cycles. 4D CBCT scans of the phantom were acquired with a manufacturer-supplied scanning sequence (4D-S-slow) and two in-house modified scanning sequences (4D-M-slow and 4D-M-fast). While 4D-S-slow used small field of view (FOV), partial rotation (200°), and no imaging filter, 4D-M-slow and 4D-M-fast used medium FOV, full rotation, and the F1 filter. The scanning speed was doubled in 4D-M-fast (100°/min gantry rotation). The image quality of the 4D CBCT scans was evaluated using contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and motion blurring ratio (MBR). The trajectory of the moving target was reconstructed by registering each phase of the 4D CBCT with a reference CT. The root-mean-squared-error (RMSE) analysis was used to quantify its accuracy. Significant decrease in CNR and SNR from 3D CBCT to 4D CBCT was observed. The 4D-S-slow and 4D-M-fast scans had comparable image quality, while the 4D-M-slow scans had better performance due to doubled projections. Both CNR and SNR decreased slightly as the breathing period increased, while no dependence on the amplitude was observed. The difference of both CNR and SNR
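CNR and SNR are typically computed from region-of-interest statistics. Minimal, commonly used definitions below; the study's exact ROI choices are not given in the abstract:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform region: mean / std."""
    return np.mean(roi) / np.std(roi)
```

For 4D CBCT, each respiratory phase is reconstructed from only a fraction of the projections, which directly lowers both metrics relative to the 3D scan, as the abstract reports.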

  20. 4D imaging of fracturing in organic-rich shales during heating

    SciTech Connect

    Maya Kobchenko; Hamed Panahi; François Renard; Dag K. Dysthe; Anders Malthe-Sørenssen; Adriano Mazzini; Julien Scheibert; Bjørn Jamtveit; Paul Meakin

    2011-12-01

    To better understand the mechanisms of fracture pattern development and fluid escape in low permeability rocks, we performed time-resolved in situ X-ray tomography imaging to investigate the processes that occur during the slow heating (from 60 to 400 °C) of organic-rich Green River shale. At about 350 °C cracks nucleated in the sample, and as the temperature continued to increase, these cracks propagated parallel to shale bedding and coalesced, thus cutting across the sample. Thermogravimetry and gas chromatography revealed that the fracturing occurring at ≈350 °C was associated with significant mass loss and release of light hydrocarbons generated by the decomposition of immature organic matter. Kerogen decomposition is thought to cause an internal pressure build up sufficient to form cracks in the shale, thus providing pathways for the outgoing hydrocarbons. We show that a 2D numerical model based on this idea qualitatively reproduces the experimentally observed dynamics of crack nucleation, growth and coalescence, as well as the irregular outlines of the cracks. Our results provide a new description of fracture pattern formation in low permeability shales.

  1. Computational biomechanics and experimental validation of vessel deformation based on 4D-CT imaging of the porcine aorta

    NASA Astrophysics Data System (ADS)

    Hazer, Dilana; Finol, Ender A.; Kostrzewa, Michael; Kopaigorenko, Maria; Richter, Götz-M.; Dillmann, Rüdiger

    2009-02-01

    Cardiovascular disease results from pathological biomechanical conditions and fatigue of the vessel wall. Image-based computational modeling provides a physical and realistic insight into the patient-specific biomechanics and enables accurate predictive simulations of development, growth and failure of cardiovascular disease. An experimental validation is necessary for the evaluation and the clinical implementation of such computational models. In the present study, we have implemented dynamic Computed-Tomography (4D-CT) imaging and catheter-based in vivo measured pressures to numerically simulate and experimentally evaluate the biomechanics of the porcine aorta. The computations are based on the Finite Element Method (FEM) and simulate the arterial wall response to the transient pressure-based boundary condition. They are evaluated by comparing the numerically predicted wall deformation and that calculated from the acquired 4D-CT data. The dynamic motion of the vessel is quantified by means of the hydraulic diameter, analyzing sequences at 5% increments over the cardiac cycle. Our results show that accurate biomechanical modeling is possible using FEM-based simulations. The RMS error of the computed hydraulic diameter at five cross-sections of the aorta was 0.188, 0.252, 0.280, 0.237 and 0.204 mm, which is equivalent to 1.7%, 2.3%, 2.7%, 2.3% and 2.0%, respectively, when expressed as a function of the time-averaged hydraulic diameter measured from the CT images. The present investigation is a first attempt to simulate and validate vessel deformation based on realistic morphological data and boundary conditions. An experimentally validated system would help in evaluating individual therapies and optimal treatment strategies in the field of minimally invasive endovascular surgery.
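The hydraulic diameter used here to quantify vessel motion is D_h = 4A/P, where A is the cross-sectional area and P its perimeter; for a circular lumen it reduces to the geometric diameter:

```python
import math

def hydraulic_diameter(area, perimeter):
    """Hydraulic diameter D_h = 4A / P of a vessel cross-section."""
    return 4.0 * area / perimeter

# Circular cross-section of radius 10 mm: D_h equals the diameter, 20 mm
r = 10.0
d_h = hydraulic_diameter(math.pi * r**2, 2 * math.pi * r)
```

Tracking D_h at fixed cross-sections over the cardiac cycle gives a scalar deformation signal that can be compared between the FEM prediction and the 4D-CT measurement, as done in the study.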

  2. SU-E-J-74: Impact of Respiration-Correlated Image Quality On Tumor Motion Reconstruction in 4D-CBCT: A Phantom Study

    SciTech Connect

    Lee, S; Lu, B; Samant, S

    2014-06-01

    Purpose: To investigate the effects of scanning parameters and respiratory patterns on the image quality for 4-dimensional cone-beam computed tomography (4D-CBCT) imaging, and assess the accuracy of computed tumor trajectory for lung imaging using registration of phased 4D-CBCT imaging with treatment planning-CT. Methods: We simulated periodic and non-sinusoidal respirations with various breathing periods and amplitudes using a respiratory phantom (Quasar, Modus Medical Devices Inc) to acquire respiration-correlated 4D-CBCT images. 4D-CBCT scans (Elekta Oncology Systems Ltd) were performed with different scanning parameters for collimation size (e.g., small and medium field-of-views) and scanning speed (e.g., slow 50°·min⁻¹, fast 100°·min⁻¹). Using a standard CBCT-QA phantom (Catphan500, The Phantom Laboratory), the image qualities of all phases in 4D-CBCT were evaluated with contrast-to-noise ratio (CNR) for lung tissue and uniformity in each module. Using a respiratory phantom, the target imaging in 4D-CBCT was compared to the 3D-CBCT target image. The target trajectory from 10 respiratory phases in 4D-CBCT was extracted using automatic image registration, and its accuracy was subsequently assessed by comparison with the actual motion of the target. Results: Image analysis indicated that a short respiration with a small amplitude resulted in superior CNR and uniformity. Smaller variation of CNR and uniformity was present amongst different respiratory phases. The small field-of-view with a partial scan at slow scanning speed can improve CNR, but degraded uniformity. Large amplitude of respiration can degrade image quality. RMS of voxel densities in the tumor area of 4D-CBCT images between sinusoidal and non-sinusoidal motion exhibited no significant difference. The maximum displacement errors of motion trajectories were less than 1.0 mm and 13.5 mm, for sinusoidal and non-sinusoidal breathings, respectively. The accuracy of motion reconstruction showed good overall

  3. Denoising and covariance estimation of single particle cryo-EM images.

    PubMed

    Bhamre, Tejal; Zhang, Teng; Singer, Amit

    2016-07-01

    The problem of image restoration in cryo-EM entails correcting for the effects of the Contrast Transfer Function (CTF) and noise. Popular methods for image restoration include 'phase flipping', which corrects only for the Fourier phases but not amplitudes, and Wiener filtering, which requires the spectral signal to noise ratio. We propose a new image restoration method which we call 'Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem. Our estimation procedure for the covariance matrix is new and successfully corrects for the CTF. We demonstrate the efficacy of CWF by applying it to restore both simulated and experimental cryo-EM images. Results with experimental datasets demonstrate that CWF provides a good way to evaluate the particle images and to see what the dataset contains even without 2D classification and averaging. PMID:27129418
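Classical frequency-domain Wiener filtering, on which CWF builds, weights each frequency by S/(S+N). A 1-D sketch with known spectra follows; CWF's contribution is *estimating* the required second-order statistics via the covariance matrix of the projection images while correcting for the CTF, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)
noisy = signal + 0.5 * rng.standard_normal(n)

# Wiener weights W = S / (S + N), here with the (known) signal power
# spectrum and the flat spectrum of white noise (variance * n per bin)
S = np.abs(np.fft.fft(signal)) ** 2
N = np.full(n, 0.5**2 * n)
W = S / (S + N)
denoised = np.real(np.fft.ifft(W * np.fft.fft(noisy)))
```

Because W is near 1 where the signal dominates and near 0 elsewhere, almost all broadband noise is suppressed while the signal's frequency content passes through.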

  4. First Steps Toward Ultrasound-Based Motion Compensation for Imaging and Therapy: Calibration with an Optical System and 4D PET Imaging

    PubMed Central

    Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter

    2015-01-01

    Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than respiration belt or optical tracking. The goal of this project is to develop an US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system – even if the US probe is moving due to respiration. It is shown that the US tracking signal is equally adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Eventually, we show in vitro that an US-based motion tracking in absolute room coordinates with a moving US transducer is feasible. PMID:26649277

  5. Image enhancement and denoising by wavelet transform for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Raghuveer, M. R.

    1997-02-01

    Wavelet transform based techniques were developed and investigated for isolation and enhancement of objects in images. The primary motivation is the development of image processing algorithms as part of an automatic system for the detection of concealed weapons under a person's clothing; a problem of considerable potential utility to the military in certain common types of deployment in the post-cold-war environment such as small unit operations. The issue has potential for other dual-use purposes such as law enforcement applications. Wavelet decompositions of the currently available images in the Rome Laboratory database, namely, noisy, low contrast, infrared images, were studied in space-scale-amplitude space. An isolation technique for separating potential suspicious regions/objects from surrounding clutter has been proposed. Based on the images available, the study indicates that the technique is promising in providing the image enhancement necessary for further pattern detection and classification.

  6. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms. PMID:24565791
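The multiscale representation underlying the framework is the Laplacian pyramid: each level stores the residual between the current scale and the upsampled coarser scale, so the original can be rebuilt exactly. A minimal 1-D build/reconstruct sketch; the paper works on 2-D images and adds graph-Laplacian regression at each scale, which is omitted here:

```python
import numpy as np

def blur(sig):
    """Small binomial blur [0.25, 0.5, 0.25] with edge replication (1-D)."""
    p = np.pad(sig, 1, mode='edge')
    return 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]

def build_pyramid(sig, levels=3):
    """Laplacian pyramid: band-pass residuals plus a final low-pass level."""
    pyr = []
    cur = sig.astype(float)
    for _ in range(levels):
        coarse = blur(cur)[::2]                 # blur and downsample
        up = np.repeat(coarse, 2)[:len(cur)]    # simple upsample
        pyr.append(cur - up)                    # band-pass residual
        cur = coarse
    pyr.append(cur)                             # low-pass residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = lap + np.repeat(cur, 2)[:len(lap)]
    return cur
```

Recovering the coarse level first and then adding back the (regularized) residuals level by level is exactly the coarse-to-fine progression the abstract describes.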

  7. The development of a population of 4D pediatric XCAT phantoms for CT imaging research and optimization

    NASA Astrophysics Data System (ADS)

    Norris, Hannah; Zhang, Yakun; Frush, Jack; Sturgeon, Gregory M.; Minhas, Anum; Tward, Daniel J.; Ratnanather, J. Tilak; Miller, M. I.; Frush, Donald; Samei, Ehsan; Segars, W. Paul

    2014-03-01

    With the increased use of CT examinations, the associated radiation dose has become a large concern, especially for pediatrics. Much research has focused on reducing radiation dose through new scanning and reconstruction methods. Computational phantoms provide an effective and efficient means for evaluating image quality, patient-specific dose, and organ-specific dose in CT. We previously developed a set of highly-detailed 4D reference pediatric XCAT phantoms at ages of newborn, 1, 5, 10, and 15 years with organ and tissues masses matched to ICRP Publication 89 values. We now extend this reference set to a series of 64 pediatric phantoms of a variety of ages and height and weight percentiles, representative of the public at large. High resolution PET-CT data was reviewed by a practicing experienced radiologist for anatomic regularity and was then segmented with manual and semi-automatic methods to form a target model. A Multi-Channel Large Deformation Diffeomorphic Metric Mapping (MC-LDDMM) algorithm was used to calculate the transform from the best age matching pediatric reference phantom to the patient target. The transform was used to complete the target, filling in the non-segmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. 3D CT data was simulated from the phantoms to demonstrate their ability to generate realistic, patient quality imaging data. The population of pediatric phantoms developed in this work provides a vital tool to investigate dose reduction techniques in 3D and 4D pediatric CT.

  8. De-Noising Ultrasound Images of Colon Tumors Using Daubechies Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Moldovanu, Simona; Nicolae, Mariana Carmen

    2011-10-01

    In this paper, we present a new approach to the analysis of colon cancer in ultrasonography. A speckle suppression method is presented. The Daubechies wavelet transform is used due to its approximate shift invariance property and the extra information in the imaginary plane of the complex wavelet domain when compared to the real wavelet domain. The methods that we propose have provided quite satisfactory results and show the usefulness of image processing techniques in diagnosis by means of medical imaging. The local echogenicity variance of the ROI is compared with the local echogenicity distribution within the entire acquired image. The image is also analyzed using the histogram, which describes the gray-level distribution of the image. Such information is valuable for the discrimination of tumors. The aim of this work is not the substitution of the specialist, but the generation of a series of parameters which reduce the need for carrying out a biopsy.
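The basic wavelet-shrinkage step behind such speckle suppression can be sketched with the simplest Daubechies wavelet (Haar, db1) and soft thresholding of the detail coefficients; the paper uses higher-order Daubechies filters and a complex wavelet domain, which this sketch does not reproduce:

```python
import numpy as np

def haar_denoise(x, sigma):
    """One-level Haar (db1) decomposition of an even-length signal with
    soft-thresholded detail coefficients (universal threshold)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail coefficients
    thr = sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft shrinkage
    # Inverse transform
    out = np.empty_like(x, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Noise spreads evenly over all coefficients while image structure concentrates in a few large ones, so shrinking the small detail coefficients suppresses speckle with limited blurring of edges.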

  9. SU-E-J-153: Reconstructing 4D Cone Beam CT Images for Clinical QA of Lung SABR Treatments

    SciTech Connect

    Beaudry, J; Bergman, A; Cropp, R

    2015-06-15

    Purpose: To verify that the Planning Target Volume (PTV) and Internal Gross Tumor Volume (IGTV) fully enclose a moving lung tumor volume as visualized on a pre-SABR treatment verification 4D Cone Beam CT. Methods: Daily 3DCBCT image sets were acquired immediately prior to treatment for 10 SABR lung patients using the on-board imaging system integrated into a Varian TrueBeam (v1.6: no 4DCBCT module available). Respiratory information was acquired during the scan using the Varian RPM system. The CBCT projections were sorted into 8 bins offline, both by breathing phase and amplitude, using in-house software. An iterative algorithm based on total variation minimization, implemented in the open source reconstruction toolkit (RTK), was used to reconstruct the binned projections into 4DCBCT images. The relative tumor motion was quantified by tracking the centroid of the tumor volume from each 4DCBCT image. Following CT-CBCT registration, the planning CT volumes were compared to the location of the CBCT tumor volume as it moves along its breathing trajectory. An overlap metric quantified the ability of the planned PTV and IGTV to contain the tumor volume at treatment. Results: The 4DCBCT reconstructed images visibly show the tumor motion. The mean overlap between the planned PTV (IGTV) and the 4DCBCT tumor volumes was 100% (94%), with an uncertainty of 5% from the 4DCBCT tumor volume contours. Examination of the tumor motion and overlap metric verify that the IGTV drawn at the planning stage is a good representation of the tumor location at treatment. Conclusion: It is difficult to compare GTV volumes from a 4DCBCT and a planning CT due to image quality differences. However, it was possible to conclude the GTV remained within the PTV 100% of the time thus giving the treatment staff confidence that SABR lung treatments are being delivered accurately.
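One plausible form of the overlap metric — the fraction of the CBCT tumor volume contained within the planned volume — is straightforward to compute from binary masks. A toy 1-D sketch, not the authors' exact definition:

```python
import numpy as np

def overlap_fraction(tumor_mask, planning_mask):
    """Fraction of the tumor volume contained in the planning volume."""
    return np.logical_and(tumor_mask, planning_mask).sum() / tumor_mask.sum()

# Toy example: a 10-voxel tumor, 9 voxels of which lie inside the PTV
tumor = np.zeros(20, dtype=bool)
tumor[5:15] = True
ptv = np.zeros(20, dtype=bool)
ptv[4:14] = True
```

Evaluating this at every respiratory phase of the 4DCBCT, as the tumor moves along its breathing trajectory, gives the per-phase containment the study reports.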

  10. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity of tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
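
The relative intensity scaling step can be sketched as matching the 95th-percentile values of the two acquisitions, the choice the abstract reports as performing best. A hedged NumPy illustration (the gamma-distributed intensities and the global 0.6 sensitivity factor are made-up stand-ins for two MR sessions):

```python
import numpy as np

def match_p95(moving, reference, p=95.0):
    """Linearly rescale `moving` so its p-th percentile matches `reference`'s."""
    ref_p = np.percentile(reference, p)
    mov_p = np.percentile(moving, p)
    return moving * (ref_p / mov_p)

rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 50.0, size=10000)   # stand-in for pre-methacholine intensities
post = 0.6 * pre                          # global sensitivity drop between sessions
matched = match_p95(post, pre)            # longitudinal data now on a common scale
```

Because the toy "drop" is a pure global scale, the rescaled session recovers the original intensities exactly; real data would also need the bias correction mentioned in the abstract.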

  11. Performances of a specific denoising wavelet process for high-resolution gamma imaging

    NASA Astrophysics Data System (ADS)

    Pousse, Annie; Dornier, Christophe; Parmentier, Michel; Kastler, Bruno; Chavanelle, Jerome

    2004-02-01

    Due to its functional capabilities, gamma imaging is an interesting tool for medical diagnosis. Recent developments have led to improved intrinsic resolution. However, this gain is impaired by the poor activity detected and the Poissonian nature of gamma-ray emission: high-resolution gamma images are grainy. This is a real nuisance for detecting cold nodules in an emitting organ. A specific translation wavelet filter, which takes into account the Poissonian nature of the noise, has been developed in order to improve the diagnostic capabilities of radioisotopic high-resolution images. Monte Carlo simulations were performed of a hot thyroid phantom in which cold spheres, 3-7 mm in diameter, could be included. The loss of activity induced by cold nodules was determined on filtered images using catchment-basin determination. On the original images, only the 5-7 mm cold spheres were clearly visible. On filtered images, the 3 and 4 mm spheres were brought into prominence. The limit of the developed filter is approximately the detection of a 3 mm spherical cold nodule under acquisition and activity conditions that mimic a thyroid examination. Furthermore, no disturbing artifacts are generated. It is therefore a powerful tool for detecting small cold nodules in a gamma-emitting medium.

  12. SU-D-207-03: Development of 4D-CBCT Imaging System with Dual Source KV X-Ray Tubes

    SciTech Connect

    Nakamura, M; Ishihara, Y; Matsuo, Y; Ueki, N; Iizuka, Y; Mizowaki, T; Hiraoka, M

    2015-06-15

    Purpose: The purposes of this work are to develop a 4D-CBCT imaging system with orthogonal dual-source kV X-ray tubes, and to determine the imaging doses from 4D-CBCT scans. Methods: Dual-source kV X-ray tubes were used for the 4D-CBCT imaging. The maximum CBCT field of view was 200 mm in diameter and 150 mm in length, and the imaging parameters were 110 kV, 160 mA and 5 ms. The rotational angle was 105°, the rotational speed of the gantry was 1.5°/s, the gantry rotation time was 70 s, and the image acquisition interval was 0.3°. The observed amplitude of infrared marker motion during respiration was used to sort each image into eight respiratory phase bins. The EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc packages were used to simulate kV X-ray dose distributions of 4D-CBCT imaging. The kV X-ray dose distributions were calculated for 9 lung cancer patients based on the planning CT images with a dose calculation grid size of 2.5 x 2.5 x 2.5 mm. The dose covering a 2-cc volume of skin (D2cc), defined as the inner 5 mm of the skin surface with the exception of bone structure, was assessed. Results: A moving object was well identified on 4D-CBCT images in a phantom study. Given a gantry rotational angle of 105° and the configuration of the kV X-ray imaging subsystems, both kV X-ray fields overlapped at a part of the skin surface. The D2cc for the 4D-CBCT scans was in the range 73.8–105.4 mGy. The linear correlation coefficient between the 1000 minus averaged SSD during CBCT scanning and D2cc was −0.65 (with a slope of −0.17) for the 4D-CBCT scans. Conclusion: We have developed a 4D-CBCT imaging system with dual-source kV X-ray tubes. The total imaging dose with 4D-CBCT scans was up to 105.4 mGy.

  13. TH-E-17A-02: High-Pitch and Sparse-View Helical 4D CT Via Iterative Image Reconstruction Method Based On Tensor Framelet

    SciTech Connect

    Guo, M; Nam, H; Li, R; Xing, L; Gao, H

    2014-06-15

    Purpose: 4D CT is routinely performed during radiation therapy treatment planning of thoracic and abdominal cancers. Compared with the cine mode, the helical mode is advantageous in temporal resolution. However, a low pitch (∼0.1) is often required for 4D CT imaging instead of the standard pitch (∼1) used for static imaging, since standard image reconstruction based on analytic methods requires low-pitch scanning in order to satisfy the data sufficiency condition when reconstructing each temporal frame individually. In comparison, the flexible iterative method enables the reconstruction of all temporal frames simultaneously, so that the image similarity among frames can be utilized to possibly perform high-pitch and sparse-view helical 4D CT imaging. The purpose of this work is to investigate such an exciting possibility for faster imaging with lower dose. Methods: A key for high-pitch and sparse-view helical 4D CT imaging is the simultaneous reconstruction of all temporal frames using the prior that temporal frames are continuous along the temporal direction. In this work, such a prior is regularized through a sparsity transform based on the spatiotemporal tensor framelet (TF), a multilevel and high-order extension of the total variation transform. Moreover, GPU-based fast parallel computing of the X-ray transform and its adjoint, together with the split Bregman method, is utilized for solving the 4D image reconstruction problem efficiently and accurately. Results: Simulation studies based on 4D NCAT phantoms were performed with various pitches (i.e., 0.1, 0.2, 0.5, and 1) and sparse views (i.e., 400 views per rotation instead of the standard >2000 views per rotation), using a 3D iterative individual reconstruction method based on 3D TF and a 4D iterative simultaneous reconstruction method based on 4D TF, respectively. Conclusion: The proposed TF-based simultaneous 4D image reconstruction method enables high-pitch and sparse-view helical 4D CT with lower dose and faster speed.

  14. Nonlocal means denoising of ECG signals.

    PubMed

    Tracey, Brian H; Miller, Eric L

    2012-09-01

    Patch-based methods have attracted significant attention in recent years within the field of image processing for a variety of problems including denoising, inpainting, and super-resolution interpolation. Despite their prevalence for processing 2-D signals, they have received little attention in the 1-D signal processing literature. In this letter, we explore application of one such method, the nonlocal means (NLM) approach, to the denoising of biomedical signals. Using ECG as an example, we demonstrate that a straightforward NLM-based denoising scheme provides signal-to-noise ratio improvements very similar to state-of-the-art wavelet-based methods, while giving a ~3× or greater reduction in metrics measuring distortion of the denoised waveform. PMID:22829361
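
The NLM scheme weights each sample by the similarity of its surrounding patch to the patch around the sample being denoised, then averages. A brute-force 1-D sketch (patch size, search radius, bandwidth h, and the sine-wave test signal are illustrative choices, not the letter's settings):

```python
import numpy as np

def nlm_1d(x, patch=5, search=20, h=0.5):
    """Nonlocal-means denoising of a 1-D signal (brute-force sketch)."""
    half = patch // 2
    xp = np.pad(x, half, mode="reflect")       # so every sample has a full patch
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        p_i = xp[i:i + patch]                  # patch centred on sample i
        lo, hi = max(0, i - search), min(len(x), i + search + 1)
        # weight = exp(-patch SSD / h^2); similar patches dominate the average
        w = np.array([np.exp(-np.sum((p_i - xp[j:j + patch]) ** 2) / h ** 2)
                      for j in range(lo, hi)])
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.2 * rng.standard_normal(200)
den = nlm_1d(noisy)
```

Real implementations add a search-window restriction in 2-D/3-D and subtract the self-similarity bias; the letter's wavelet baselines are not reproduced here.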

  15. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    NASA Astrophysics Data System (ADS)

    Díaz, Felipe

    2015-09-01

    Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio may also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing complexity and memory requirements, but also for reducing noise, even allowing spurious components to be eliminated. This article proposes a system based on low-order singular value decomposition for reconstruction and noise reduction in MR imaging. The proposed method is evaluated using in vivo MRI data. Images rebuilt with less than 20% of the original data, and of similar quality in terms of visual inspection, are presented, along with a quantitative evaluation of the method.
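
The core operation, keeping only the largest singular components, can be sketched as a truncated SVD; storing the rank-r factors needs r(m+n+1) numbers instead of mn, which is where the compression comes from, and discarding the small singular values removes much of the noise. A toy example (the rank-8 phantom and noise level are invented, and the abstract's Fourier-domain step is omitted):

```python
import numpy as np

def svd_denoise(img, rank):
    """Reconstruct an image from its `rank` largest singular components."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(2)
low_rank = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))  # rank-8 "image"
noisy = low_rank + 0.1 * rng.standard_normal((64, 64))
recon = svd_denoise(noisy, rank=8)
```

For a 64x64 image, rank 8 stores 8 * (64 + 64 + 1) = 1032 numbers versus 4096, about 25% of the original data, in the spirit of the "less than 20%" figure reported.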

  16. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    PubMed

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time to perform encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper, a rapid 4-D lossy compression method constructed from data rearrangement, wavelet-based contourlet transformation, and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients is changed in WBCT as it has more directions; the differences in parent–child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method has been tested on fMRI images; the results indicate that its processing time is less than that of existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method could also yield better compression performance than the wavelet-based SPECK coder. The objective results show that the proposed method gains a good compression ratio while maintaining a peak signal-to-noise ratio above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all

  17. A scale-based forward-and-backward diffusion process for adaptive image enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Niu, Ruiqing; Zhang, Liangpei; Wu, Ke; Sahli, Hichem

    2011-12-01

    This work presents a scale-based forward-and-backward diffusion (SFABD) scheme. The main idea of this scheme is to perform locally adaptive diffusion using local scale information. To this end, we propose a diffusivity function based on the Minimum Reliable Scale (MRS) of Elder and Zucker (IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 699-716, 1998) to detect the details of local structures. The magnitude of the diffusion coefficient at each pixel is determined by taking into account the local property of the image through the scales. A scale-based variable weight is incorporated into the diffusivity function to balance the forward and backward diffusion. Furthermore, as a numerical scheme, we propose a modification of the Perona-Malik scheme (IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629-639, 1990) that incorporates edge orientations. The article describes the main principles of our method and illustrates image enhancement results on a set of standard images as well as simulated medical images, together with qualitative and quantitative comparisons with a variety of anisotropic diffusion schemes.
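
For orientation, the classic Perona-Malik scheme that SFABD builds on can be written in a few lines; the SFABD additions (MRS-based diffusivity, backward diffusion, edge orientations) are not reproduced here. A sketch with illustrative parameters:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion (the baseline SFABD modifies)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")       # Neumann boundary via edge padding
        dn = p[:-2, 1:-1] - u               # north neighbour difference
        ds = p[2:, 1:-1] - u                # south
        de = p[1:-1, 2:] - u                # east
        dw = p[1:-1, :-2] - u               # west
        g = lambda d: np.exp(-(d / kappa) ** 2)  # diffusivity: ~0 across strong edges
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(6)
step = np.zeros((32, 32))
step[:, 16:] = 1.0                          # a sharp edge to preserve
noisy = step + 0.05 * rng.standard_normal((32, 32))
diffused = perona_malik(noisy)
```

The forward diffusion above only smooths; SFABD's backward term additionally sharpens where the local scale indicates fine structure.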

  18. A weighted dictionary learning model for denoising images corrupted by mixed noise.

    PubMed

    Liu, Jun; Tai, Xue-Cheng; Huang, Haiyang; Huan, Zhongdan

    2013-03-01

    This paper proposes a general weighted l(2)-l(0) norms energy minimization model to remove mixed noise such as a Gaussian-Gaussian mixture, impulse noise, and Gaussian-impulse noise from images. The approach is built upon a maximum likelihood estimation framework and sparse representations over a trained dictionary. Rather than optimizing the likelihood functional derived from a mixture distribution, we present a new weighted data fidelity function, which has the same minimizer as the original likelihood functional but is much easier to optimize. The weighting function in the model can be determined by the algorithm itself, and it plays the role of noise detection in terms of the different estimated noise parameters. By incorporating the sparse regularization of small image patches, the proposed method can efficiently remove a variety of mixed or single noise while preserving the image textures well. In addition, a modified K-SVD algorithm is designed to address the weighted rank-one approximation. The experimental results demonstrate its better performance compared with some existing methods. PMID:23193456

  19. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
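
The PRNU pipeline described, residual = image minus denoised image, fingerprints averaged over many frames, attribution by correlation, can be simulated end-to-end. In this sketch a 3x3 box blur stands in for the sophisticated filters under comparison, and all fingerprints, noise levels, and scenes are synthetic:

```python
import numpy as np

def box_denoise(img):
    """3x3 box blur, standing in for the denoising filter under test."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def noise_residual(img):
    """Noise estimate from one frame: image minus its denoised version."""
    return img - box_denoise(img)

def ncc(a, b):
    """Normalised cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
shape = (64, 64)
prnu_a = 0.02 * rng.standard_normal(shape)   # camera A's multiplicative fingerprint
prnu_b = 0.02 * rng.standard_normal(shape)   # camera B's

def fingerprint(prnu, n=20):
    """Average residuals over n flat-field frames (a common estimation recipe)."""
    acc = np.zeros(shape)
    for _ in range(n):
        frame = 0.5 * (1 + prnu) + 0.01 * rng.standard_normal(shape)
        acc += noise_residual(frame)
    return acc / n

fp_a, fp_b = fingerprint(prnu_a), fingerprint(prnu_b)
# Probe image from camera A of a different (smooth) scene:
scene = np.tile(np.linspace(0.2, 0.8, shape[1]), (shape[0], 1))
probe = scene * (1 + prnu_a) + 0.01 * rng.standard_normal(shape)
match_a = ncc(noise_residual(probe), fp_a)
match_b = ncc(noise_residual(probe), fp_b)
```

A better denoiser leaves less scene content in the residual, which is exactly why the paper compares filters: residual purity translates directly into identification accuracy.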

  20. Echocardiogram enhancement using supervised manifold denoising.

    PubMed

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. PMID:26072166

  1. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood gray-level difference matrix (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have previously been shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters derived from 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET or 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
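
Of the five NGLDM features, coarseness is the simplest to illustrate: it is the inverse of the occurrence-weighted sum of absolute differences between each voxel and the mean of its neighborhood (after Amadasun and King's definition; the study's exact implementation is not given). A 2-D toy version with an illustrative neighborhood distance and test images:

```python
import numpy as np

def ngldm_coarseness(img, d=1, eps=1e-12):
    """NGLDM coarseness for a small 2-D integer-valued image (illustrative)."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    levels = np.unique(img)
    s = {g: 0.0 for g in levels}   # accumulated |level - neighbourhood mean|
    n = {g: 0 for g in levels}     # occurrence counts per gray level
    for y in range(d, H - d):
        for x in range(d, W - d):
            g = img[y, x]
            nb = img[y - d:y + d + 1, x - d:x + d + 1]
            mean_nb = (nb.sum() - g) / (nb.size - 1)  # centre voxel excluded
            s[g] += abs(g - mean_nb)
            n[g] += 1
    total = sum(n.values())
    return 1.0 / (eps + sum((n[g] / total) * s[g] for g in levels))

smooth = np.ones((8, 8))                 # uniform region: very coarse texture
busy = np.indices((8, 8)).sum(0) % 2     # checkerboard: fine, busy texture
```

Uniform regions give near-zero neighborhood differences and hence very large coarseness, while rapidly varying textures give small coarseness, the intuition behind reporting a −30% to 13% spread across respiratory phases.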

  2. SU-E-J-28: Gantry Speed Significantly Affects Image Quality and Imaging Dose for 4D Cone-Beam Computed Tomography On the Varian Edge Platform

    SciTech Connect

    Santoso, A; Song, K; Gardner, S; Chetty, I; Wen, N

    2015-06-15

    Purpose: 4D-CBCT facilitates assessment of tumor motion at the treatment position. We investigated the effect of gantry speed on 4D-CBCT image quality and dose using the Varian Edge On-Board Imager (OBI). Methods: A thoracic protocol was designed using a 125 kVp spectrum. Image quality parameters were obtained via 4D acquisition using a Catphan phantom with a gating system. A sinusoidal waveform was executed with a five-second period and superior-inferior motion. 4D-CBCT scans were sorted into 4 and 10 phases. Image quality metrics included spatial resolution, contrast-to-noise ratio (CNR), uniformity index (UI), Hounsfield unit (HU) sensitivity, and RMS error (RMSE) of motion amplitude. Dosimetry was accomplished using Gafchromic XR-QA2 films within a CIRS Thorax phantom, which was placed on the gating phantom using the same motion waveform. Results: High-contrast resolution decreased linearly from 5.93 to 4.18 lp/cm, 6.54 to 4.18 lp/cm, and 5.19 to 3.91 lp/cm for the averaged, 4-phase, and 10-phase 4D-CBCT volumes, respectively, as gantry speed increased from 1.0 to 6.0 degs/sec. CNRs decreased linearly from 4.80 to 1.82 as the gantry speed increased from 1.0 to 6.0 degs/sec. No significant variations in UIs, HU sensitivities, or RMSEs were observed with variable gantry speed. Ion chamber measurements compared to film yielded small percent differences in plastic water regions (0.1–9.6%), larger percent differences in lung-equivalent regions (7.5–34.8%), and significantly larger percent differences in bone-equivalent regions (119.1–137.3%). Ion chamber measurements decreased from 17.29 to 2.89 cGy as gantry speed increased from 1.0 to 6.0 degs/sec. Conclusion: Maintaining technique factors while changing gantry speed changes the number of projections used for reconstruction. Increasing the number of projections by decreasing gantry speed decreases noise; however, dose is increased. The future of 4D-CBCT’s clinical utility relies on further

  3. Real-time image-content-based beamline control for smart 4D X-ray imaging.

    PubMed

    Vogelgesang, Matthias; Farago, Tomas; Morgeneyer, Thilo F; Helfen, Lukas; Dos Santos Rolo, Tomy; Myagotin, Anton; Baumbach, Tilo

    2016-09-01

    Real-time processing of X-ray image data acquired at synchrotron radiation facilities allows for smart high-speed experiments. This includes workflows covering parameterized and image-based feedback-driven control up to the final storage of raw and processed data. Nevertheless, there is presently no system that supports an efficient construction of such experiment workflows in a scalable way. Thus, here an architecture based on a high-level control system that manages low-level data acquisition, data processing and device changes is described. This system is suitable for routine as well as prototypical experiments, and provides specialized building blocks to conduct four-dimensional in situ, in vivo and operando tomography and laminography. PMID:27577784

  4. SU-E-J-154: Image Quality Assessment of Contrast-Enhanced 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    SciTech Connect

    Choi, W; Xue, M; Patel, K; Regine, W; Wang, J; D’Souza, W; Lu, W; Kang, M; Klahr, P

    2015-06-15

    Purpose: This study presents quantitative and qualitative assessment of the image quality of contrast-enhanced (CE) 3D-CT, 4D-CT and CE 4D-CT, to identify the feasibility of replacing the clinical standard simulation with a single CE 4D-CT for pancreatic adenocarcinoma (PDA) radiotherapy simulation. Methods: Ten PDA patients were enrolled and underwent three CT scans: a clinical standard pair of CE 3D-CT immediately followed by a 4D-CT, and a CE 4D-CT one week later. Physicians qualitatively evaluated the general image quality and regional vessel definitions and gave a score from 1 to 5. Next, physicians delineated the contours of the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (CE 3D-CT, the 50% phase of 4D-CT, and CE 4D-CT); high-density areas were then automatically removed by thresholding at 500 HU and morphological operations. The pancreatic tumor contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and conspicuity (C, the absolute difference of mean enhancement levels in P and T) were computed to quantitatively assess image quality. The Wilcoxon rank sum test was used to compare these quantities. Results: In the qualitative evaluations, CE 3D-CT and CE 4D-CT scored equivalently (4.4±0.4 and 4.3±0.4) and both were significantly better than 4D-CT (3.1±0.6). In the quantitative evaluations, the C values were higher in CE 4D-CT (28±19 HU, p=0.19 and 0.17) than in the clinical standard pair of CE 3D-CT and 4D-CT (17±12 and 16±17 HU, p=0.65). In CE 3D-CT and CE 4D-CT, the mean CNR (1.8±1.4 and 1.8±1.7, p=0.94) and mean SNR (5.8±2.6 and 5.5±3.2, p=0.71) were both higher than in 4D-CT (CNR: 1.1±1.3, p<0.3; SNR: 3.3±2.1, p<0.1). The absolute enhancement levels for T and P were higher in CE 4D-CT (87, 82 HU) than in CE 3D-CT (60, 56) and 4D-CT (53, 70). Conclusions: The individually optimized CE 4D-CT is feasible and achieved image quality comparable to the clinical standard simulation. This study was supported in part by Philips Healthcare.
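
Conspicuity is defined in the abstract as the absolute difference of mean enhancement between P and T, but the exact CNR/SNR formulas are not spelled out; the definitions below (noise taken as the parenchyma standard deviation) are therefore assumptions, and the HU samples are invented:

```python
import numpy as np

def roi_metrics(tumor_vals, parenchyma_vals):
    """CNR, SNR, and conspicuity from two ROI samples (assumed definitions)."""
    mt, mp = tumor_vals.mean(), parenchyma_vals.mean()
    noise = parenchyma_vals.std(ddof=1)   # background noise estimate (assumption)
    cnr = abs(mt - mp) / noise
    snr = abs(mt) / noise
    conspicuity = abs(mt - mp)            # absolute HU difference between T and P
    return cnr, snr, conspicuity

rng = np.random.default_rng(8)
tumor = rng.normal(60, 15, 500)        # hypothetical tumour-ROI HU samples
parenchyma = rng.normal(88, 15, 500)   # hypothetical parenchyma-ROI HU samples
cnr, snr, c = roi_metrics(tumor, parenchyma)
```

In the study these quantities would be computed per patient per scan type and then compared with the Wilcoxon rank sum test.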

  5. Extension of wavelet compression algorithms to 3D and 4D image data: exploitation of data coherence in higher dimensions allows very high compression ratios

    NASA Astrophysics Data System (ADS)

    Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick

    2001-12-01

    High-resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited to data of arbitrary dimensions, and assessed its ability to compress 4D medical images. Basically, separable wavelet transforms are performed in each dimension, followed by quantization and standard coding. Results were compared with conventional 2D wavelet compression. We found that in 4D heart images this algorithm allowed high compression ratios while preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible and, by exploiting data coherence in higher image dimensions, allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications, especially for image storage and transmission and, specifically, for the emerging field of telemedicine.
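
The "separable wavelet transform in each dimension" step generalizes directly to N-D arrays: a 1-D wavelet pass is applied along every axis in turn. A single-level Haar sketch on a toy 4-D volume (the actual codec adds quantization and entropy coding on top, and its wavelet choice is not stated):

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the 1-D Haar transform along `axis` (even length assumed)."""
    a = np.moveaxis(a, axis, 0)
    avg = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation coefficients
    dif = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
    return np.moveaxis(np.concatenate([avg, dif]), 0, axis)

def separable_haar(vol):
    """Apply the 1-D transform successively along every dimension."""
    for ax in range(vol.ndim):
        vol = haar_1d(vol, ax)
    return vol

vol = np.random.default_rng(5).standard_normal((4, 4, 4, 4))  # toy 4D dataset
coeffs = separable_haar(vol)
```

Because the transform is orthonormal, the coefficient energy equals the input energy; coherent data (e.g., slowly varying heart volumes) concentrates that energy in few approximation coefficients, which is what the quantizer exploits.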

  6. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  7. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  8. TU-F-17A-01: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - An Automatic Toolkit for Efficient and Robust Analysis of 4D Respiratory Motion

    SciTech Connect

    Wei, J; Yuan, A; Li, G

    2014-06-15

    Purpose: To provide an automatic image analysis toolkit to process thoracic 4-dimensional computed tomography (4DCT) and extract patient-specific motion information to facilitate investigational or clinical use of 4DCT. Methods: We developed an automatic toolkit in MATLAB to overcome the extra workload from the time dimension in 4DCT. This toolkit employs image/signal processing, computer vision, and machine learning methods to visualize, segment, register, and characterize lung 4DCT automatically or interactively. A fully automated 3D lung segmentation algorithm was designed, and 4D lung segmentation was achieved in batch mode. Voxel counting was used to calculate volume variations of the torso, lung and its air component, and local volume changes at the diaphragm and chest wall to characterize breathing pattern. Segmented lung volumes in 12 patients were compared with those from a treatment planning system (TPS). Voxel conversion from CT# to other physical parameters, such as gravity-induced pressure, was introduced to create a secondary 4D image. A demons algorithm was applied for deformable image registration, and motion trajectories were extracted automatically. Calculated motion parameters were plotted with various templates. Machine learning algorithms, such as naive Bayes and random forests, were implemented to study respiratory motion. This toolkit is complementary to, and will be integrated with, the Computational Environment for Radiotherapy Research (CERR). Results: The automatic 4D image/data processing toolkit provides a platform for analysis of 4D images and datasets. It processes 4D data automatically in batch mode and provides interactive visual verification for manual adjustments. The discrepancy in lung volume calculation between this toolkit and the TPS is <±2%, and the time saving is 1–2 orders of magnitude. Conclusion: A framework of 4D toolkit has been developed to analyze thoracic 4DCT automatically or interactively, facilitating both investigational

  9. SU-E-J-200: A Dosimetric Analysis of 3D Versus 4D Image-Based Dose Calculation for Stereotactic Body Radiation Therapy in Lung Tumors

    SciTech Connect

    Ma, M; Rouabhi, O; Flynn, R; Xia, J; Bayouth, J

    2014-06-01

    Purpose: To evaluate the dosimetric difference between 3D and 4D-weighted dose calculation using patient-specific respiratory traces and deformable image registration for stereotactic body radiation therapy in lung tumors. Methods: Two dose calculation techniques, 3D and 4D-weighted dose calculation, were used for dosimetric comparison in 9 lung cancer patients. The magnitude of the tumor motion varied from 3 mm to 23 mm. Breath-hold exhale CT was used for 3D dose calculation, with the ITV generated from the motion observed on 4D-CT. For the 4D-weighted calculation, the dose of each binned CT image from the ten breathing amplitudes was first recomputed using the same planning parameters as those used in the 3D calculation. The dose distribution of each binned CT was mapped to the breath-hold CT using deformable image registration. The 4D-weighted dose was computed by summing the deformed doses weighted by the temporal probabilities calculated from their corresponding respiratory traces. Dosimetric evaluation criteria included lung V20, mean lung dose (MLD), and mean tumor dose (MTD). Results: Compared with the 3D calculation, lung V20, mean lung dose, and mean tumor dose using the 4D-weighted dose calculation changed by −0.67% ± 2.13%, −4.11% ± 6.94% (−0.36 Gy ± 0.87 Gy), and −1.16% ± 1.36% (−0.73 Gy ± 0.85 Gy), respectively. Conclusion: This work demonstrates that the conventional 3D dose calculation method may overestimate the lung V20, MLD, and MTD. The absolute difference between 3D and 4D-weighted dose calculation in lung tumors may not be clinically significant. This research is supported by Siemens Medical Solutions USA, Inc. and the Iowa Center for Research by Undergraduates.
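    The 4D-weighted summation described above is a probability-weighted average of the deformed per-phase dose maps. A minimal sketch, assuming each phase dose has already been deformably mapped onto the reference breath-hold CT and flattened to a 1D array (names are illustrative):

```python
def weighted_4d_dose(phase_doses, phase_probs):
    """4D-weighted dose: sum per-phase dose maps (already deformed onto the
    reference CT) weighted by the temporal probability of each phase."""
    assert abs(sum(phase_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    total = [0.0] * len(phase_doses[0])
    for dose, p in zip(phase_doses, phase_probs):
        for i, d in enumerate(dose):
            total[i] += p * d
    return total
```

    The temporal probabilities are the fractions of the breathing cycle spent in each amplitude bin, taken from the patient's respiratory trace.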

  10. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET

    NASA Astrophysics Data System (ADS)

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution of cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM from 4D cardiac gated PET data alone, and to compensate for the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled by transforming the attenuation maps with the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data, generated using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels, were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images compared with no motion correction. Specifically, the MCDR method yields the best performance at all noise levels compared with the MCAR and MCBR methods. The MCBR method reduces computational time dramatically, but the resultant 4D cardiac gated PET images have overall inferior image quality compared to those from the MCAR and MCDR approaches in the 'almost' noise-free case. Also, the MCBR method has better noise-handling properties than MCAR and provides better quantitative results in high-noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational time.
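    All three compensation schemes rely on warping images with an estimated non-rigid displacement field. A toy 1D pull-back warp with linear interpolation illustrates the basic operation (a simplified stand-in for the authors' 4D non-rigid method, not their implementation):

```python
def warp_1d(image, displacement):
    """Warp a 1D image by a per-voxel displacement field (pull-back with
    linear interpolation): out[i] = image[i + displacement[i]]."""
    n = len(image)
    out = []
    for i, d in enumerate(displacement):
        x = min(max(i + d, 0.0), n - 1.0)  # clamp to the image support
        j = int(x)
        f = x - j
        right = image[min(j + 1, n - 1)]
        out.append(image[j] * (1.0 - f) + right * f)
    return out
```

    In the gated-PET setting, the same resampling is applied in 3D per gate, either to reconstructed images (MCAR), inside the system model (MCDR), or to the gated sinogram data (MCBR).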

  11. Task-based evaluation of a 4D MAP-RBI-EM image reconstruction method for gated myocardial perfusion SPECT using a human observer study

    NASA Astrophysics Data System (ADS)

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M.; Tsui, Benjamin M. W.

    2015-09-01

    We evaluated the performance of a new 4D image reconstruction method for improved 4D gated myocardial perfusion (MP) SPECT using a task-based human observer study. We used a realistic 4D NURBS-based Cardiac-Torso (NCAT) phantom that models cardiac beating motion. Half of the population was normal; the other half had a regional hypokinetic wall motion abnormality. Noise-free and noisy projection data with 16 gates/cardiac cycle were generated using an analytical projector that included the effects of attenuation, collimator-detector response, and scatter (ADS), and were reconstructed using 3D FBP without, and 3D OS-EM with, ADS corrections, followed by a 4D linear post-filter at different cut-off frequencies. A 4D iterative maximum a posteriori rescaled-block (MAP-RBI)-EM image reconstruction method with ADS corrections was also used to reconstruct the projection data with various values of the weighting factor for its prior. The trade-offs between bias and noise were represented by the normalized mean squared error (NMSE) and the averaged normalized standard deviation (NSDav), respectively, and were used to select reasonable ranges of the reconstructed images for use in a human observer study. The observers were trained with the simulated cine images and were instructed to rate their confidence in the absence or presence of a motion defect on a continuous scale. We then applied receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) index. The results showed significant differences in detection performance among the different NMSE-NSDav combinations, and the optimal trade-off from optimized reconstruction parameters corresponded to the maximum AUC value. The 4D MAP-RBI-EM with ADS correction, which had the best trade-off among the tested reconstruction methods, also had the highest AUC value, resulting in significantly better human observer detection performance in detecting regional myocardial wall motion abnormalities.
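    The AUC index used here can be computed directly from the observers' confidence ratings via the Mann-Whitney statistic, without fitting an ROC curve. A small illustrative sketch (function and argument names are assumptions, not from the study):

```python
def auc_from_ratings(ratings_abnormal, ratings_normal):
    """Nonparametric AUC: the probability that a randomly chosen abnormal
    case receives a higher confidence rating than a randomly chosen normal
    case, with ties counted as one half (Mann-Whitney U / (n_a * n_n))."""
    wins = 0.0
    for a in ratings_abnormal:
        for n in ratings_normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(ratings_abnormal) * len(ratings_normal))
```

    An AUC of 0.5 corresponds to guessing and 1.0 to perfect defect detection, which is how the reconstruction methods are ranked above.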

  14. Computer processing of image captured by the passive THz imaging device as an effective tool for its de-noising

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.; Zhang, Cun-lin; Deng, Chao; Zhao, Yuan-meng; Zhang, Xin

    2012-12-01

    As is well known, passive THz imaging devices have great potential for solving the security-screening problem. Nevertheless, one of the main obstacles to using these devices is the low image quality of the passive THz cameras developed so far. To change this situation, it is necessary either to improve the engineering characteristics (resolution, sensitivity, and so on) of the THz camera or to apply computer processing to the image. In our opinion, the latter is preferable because it is far less expensive. Below we illustrate the possibility of suppressing the noise in images captured by three passive THz cameras developed at CNU (Beijing, China). After computer processing, the image quality is enhanced many times, and in many cases becomes sufficient for detecting objects hidden under opaque clothing. We stress that the performance of the developed computer code is high enough not to restrict the performance of the passive THz imaging device. The obtained results demonstrate the high efficiency of our approach for the detection of hidden objects and offer a promising solution to the security problem. Nevertheless, developing new spatial filters for the treatment of THz images remains an open problem.

  15. Wavelet denoising in voxel-based parametric estimation of small animal PET images: a systematic evaluation of spatial constraints and noise reduction algorithms

    NASA Astrophysics Data System (ADS)

    Su, Yi; Shoghi, Kooresh I.

    2008-11-01

    Voxel-based estimation of PET images, generally referred to as parametric imaging, can provide invaluable information about the heterogeneity of an imaging agent in a given tissue. Due to the high level of noise in dynamic images, however, the estimated parametric image is often noisy and unreliable. Several approaches have been developed to address this challenge, including spatial noise reduction techniques, cluster analysis, and spatially constrained weighted nonlinear least-squares (SCWNLS) methods. In this study, we develop and test several noise reduction techniques combined with SCWNLS using simulated dynamic PET images. Both spatial smoothing filters and wavelet-based noise reduction techniques are investigated. In addition, 12 different parametric imaging methods are compared using simulated data. With the combination of noise reduction techniques and SCWNLS methods, more accurate parameter estimation can be achieved than with either of the two techniques alone. A relative root-mean-square error of less than 10% is achieved with the combined approach in the simulation study. The wavelet-denoising-based approach is less sensitive to noise and provides more accurate parameter estimation at higher noise levels. Further evaluation of the proposed methods is performed using actual small-animal PET datasets. We expect that the proposed methods will be useful for cardiac, neurological, and oncologic applications.
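    Wavelet-based noise reduction of the kind evaluated here amounts to shrinking small, noise-dominated detail coefficients toward zero before reconstruction. A minimal one-level Haar soft-thresholding sketch for a 1D signal of even length (illustrative only; the study's actual wavelet basis, decomposition depth, and threshold selection are not specified here):

```python
def haar_soft_denoise(signal, thresh):
    """One-level Haar wavelet shrinkage for a 1D signal of even length."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2.0 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2.0 for i in range(half)]
    # soft-threshold the detail coefficients: shrink magnitudes by thresh
    shrunk = [max(abs(d) - thresh, 0.0) * (1.0 if d >= 0 else -1.0)
              for d in detail]
    # inverse Haar transform
    out = []
    for a, d in zip(approx, shrunk):
        out.extend([a + d, a - d])
    return out
```

    Small oscillations below the threshold are flattened while large jumps survive, which is why wavelet shrinkage preserves kinetic structure better than uniform smoothing at high noise levels.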

  16. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
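    The volumetric-overlap accuracy figures quoted above are typically computed with an overlap coefficient such as Dice. A minimal sketch for flattened binary masks (illustrative; STAPLE itself, an EM-based consensus estimator, is not shown):

```python
def dice(mask_a, mask_b):
    """Dice overlap of two binary masks given as flattened 0/1 lists:
    2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))
```

    Comparing an auto-segmented GTV against the multi-expert ground-truth estimate with such a coefficient yields the 81-98% overlap range reported in the abstract.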

  17. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1997-09-01

    The interpretation of the 'inverted' retina of primates as an 'optoretina' (a diffractive cellular 3D phase grating that transforms light cones) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as the basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as for the adaptive levels of human vision. It is shown that the functional performances all become possible: trichromatism in photopic vision, monocular spatiotemporal 3D- and 4D-motion detection, and Fourier-optical image transformation with extraction of invariances. For the transformation of light cones into reciprocal gratings, the spectral phase conditions become relevant first in the eikonal of the geometrical-optical imaging in front of the retinal 3D grating, then in the von Laue (resp. reciprocal von Laue) equation for 3D grating optics inside the grating, and finally in the periodicity of the Talbot-2/Fresnel planes in the near field behind the grating. It is becoming possible to technically realize, at least in some specific aspects, such a cortical optoretina sensor element, with its typical hexagonal-concentric structure, which leads to these visual functions.

  18. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to propagate spatially or temporally. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions can propagate spatially. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames can propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image

  20. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds, and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise than LiDAR data and by the presence of large outliers. These problems limit the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSMs) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), and a newly developed algorithm embedded in a Markov Random Field (MRF) framework and optimized through graph cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do this, a synthetic DSM was generated and different typologies of noise were added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects.
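    Of the four algorithms compared, the median filter baseline is the simplest: each cell is replaced by the median of its neighbourhood, which rejects isolated spikes but, as the evaluation notes, cannot handle large outlier areas. A minimal sketch over a DSM stored as a row-major grid (illustrative; edges are handled by clipping the window, and even-sized windows take the upper median):

```python
def median_filter_dsm(dsm, k=1):
    """(2k+1) x (2k+1) median filter for a 2.5D DSM (list of rows of heights)."""
    rows, cols = len(dsm), len(dsm[0])
    out = [row[:] for row in dsm]
    for r in range(rows):
        for c in range(cols):
            window = [dsm[rr][cc]
                      for rr in range(max(0, r - k), min(rows, r + k + 1))
                      for cc in range(max(0, c - k), min(cols, c + k + 1))]
            window.sort()
            out[r][c] = window[len(window) // 2]
    return out
```

    A single gross-outlier height in an otherwise flat 3x3 patch is replaced by the surrounding terrain level, while a step edge wider than half the window survives.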

  1. Comparison of two respiration monitoring systems for 4D imaging with a Siemens CT using a new dynamic breathing phantom

    NASA Astrophysics Data System (ADS)

    Vásquez, A. C.; Runz, A.; Echner, G.; Sroka-Perez, G.; Karger, C. P.

    2012-05-01

    Four-dimensional computed tomography (4D-CT) requires breathing information from the patient, and several systems are available for this. Testing these systems under realistic conditions requires a phantom with a moving target and an expandable outer contour. An anthropomorphic phantom was developed to simulate patient breathing as well as lung tumor motion. Using the phantom, an optical camera system (GateCT) and a pressure sensor (AZ-733V) were operated simultaneously, and 4D-CTs were reconstructed with a Siemens CT using the provided local-amplitude-based sorting algorithm. The comparison of the tumor trajectories from the two systems revealed discrepancies of up to 9.7 mm. Differences in the breathing signals, such as baseline drift, temporal resolution, and noise level, were shown not to be the reason. Instead, the variability of the sampling interval and the accuracy of the sampling-rate value written in the header of the GateCT signal file were identified as the cause. Interpolation to regular sampling intervals and correction of the sampling rate to its actual value removed the observed discrepancies. Consistently, introducing sampling-interval variability and inaccurate sampling-rate values into the header of the AZ-733V file distorted the tumor trajectory for that system. These results underline the importance of testing new equipment thoroughly, especially if components from different manufacturers are combined.
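    The fix identified above, interpolation to regular sampling intervals, can be sketched as a simple linear resampler (names and endpoint handling are illustrative, not taken from either vendor's software):

```python
def resample_regular(times, values, dt):
    """Linearly resample an irregularly sampled signal onto a regular grid
    with spacing dt, from times[0] to times[-1] inclusive."""
    t = times[0]
    out_t, out_v = [], []
    j = 0
    while t <= times[-1] + 1e-12:
        # advance j so that times[j] <= t <= times[j + 1]
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        if j + 1 < len(times):
            t0, t1 = times[j], times[j + 1]
            w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            out_v.append(values[j] * (1.0 - w) + values[j + 1] * w)
        else:
            out_v.append(values[-1])
        out_t.append(t)
        t += dt
    return out_t, out_v
```

    After resampling, the sorting algorithm sees samples at the spacing the file header promises, which removed the trajectory discrepancies reported above.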

  2. IMRT treatment plans and functional planning with functional lung imaging from 4D-CT for thoracic cancer patients

    PubMed Central

    2013-01-01

    Background and purpose: Currently, the inhomogeneity of pulmonary function is not considered when treatment plans are generated in thoracic cancer radiotherapy. This study evaluates the dose of treatment plans to highly functional volumes and performs functional treatment planning by incorporating ventilation data from 4D-CT. Materials and methods: Eleven patients were included in this retrospective study. Ventilation was calculated using 4D-CT. Two treatment plans were generated for each case, the first without incorporation of the ventilation data and the second with it. The dose of the first plans was overlaid with the ventilation data and analyzed. Highly functional regions were avoided in the second treatment plans. Results: For small targets in the first plans (PTV < 400 cc, 6 cases), all V5, V20, and mean lung dose values for the highly functional regions were lower than those for the total lung. For large targets, two out of five cases had higher V5 and V20 values for the highly functional regions. All the second plans were within constraints. Conclusion: Radiation treatments affect functional lung more seriously in large tumor cases. With a compromise of dose to other critical organs, functional treatment planning to reduce dose in highly functional lung volumes can be achieved. PMID:23281734

  3. Minimum entropy approach to denoising time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Aviyente, Selin; Williams, William J.

    2001-11-01

    Signals used in time-frequency analysis are usually corrupted by noise. Therefore, denoising the time-frequency representation is a necessity for producing readable time-frequency images. Denoising is defined as the operation of smoothing a noisy signal or image to produce a noise-free representation. Linear smoothing of time-frequency distributions (TFDs) suppresses noise at the expense of considerable smearing of the signal components. For this reason, nonlinear denoising has been preferred; a common example is wavelet thresholding. In this paper, we introduce an entropy-based approach to denoising time-frequency distributions. This new approach uses the spectrogram decomposition of time-frequency kernels proposed by Cunningham and Williams. In order to denoise the time-frequency distribution, we combine the spectrograms with the smallest entropy values, ensuring that each spectrogram is well concentrated on the time-frequency plane and contains as little noise as possible. The Renyi entropy is used as the measure to quantify the complexity of each spectrogram. The threshold for the number of spectrograms to combine is chosen adaptively based on the trade-off between entropy and variance. Denoised time-frequency distributions for several signals are shown to demonstrate the effectiveness of the method, and the improvement in performance is quantitatively evaluated.
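    The Renyi entropy used to rank spectrogram complexity has a closed form for a normalized distribution: H_alpha = log2(sum p_i^alpha) / (1 - alpha). A minimal sketch (alpha = 3 is a common choice for time-frequency distributions; the paper's exact order is not stated here):

```python
import math

def renyi_entropy(weights, alpha=3.0):
    """Renyi entropy (base 2, order alpha != 1) of a nonnegative
    distribution; the input is normalized to sum to 1 first."""
    total = sum(weights)
    p = [w / total for w in weights]
    return math.log2(sum(x ** alpha for x in p)) / (1.0 - alpha)
```

    A spectrogram with energy concentrated in a few time-frequency cells has low entropy, so selecting the smallest-entropy spectrograms keeps the well-concentrated, least noisy components.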

  4. A novel CT-FFR method for the coronary artery based on 4D-CT image analysis and structural and fluid analysis

    NASA Astrophysics Data System (ADS)

    Hirohata, K.; Kano, A.; Goryu, A.; Ooga, J.; Hongo, T.; Higashi, S.; Fujisawa, Y.; Wakai, S.; Arakita, K.; Ikeda, Y.; Kaminaga, S.; Ko, B. S.; Seneviratne, S. K.

    2015-03-01

    Noninvasive fractional flow reserve derived from CT coronary angiography (CT-FFR) has to date typically been performed using the principles of fluid analysis, in which a lumped-parameter coronary vascular bed model is assigned to represent the impedance of the downstream coronary vascular network absent from the computational domain for each coronary outlet. This approach has a number of potential limitations: it may not account for the impact of myocardial contraction and relaxation during the cardiac cycle, patient-specific boundary conditions for the coronary artery outlets, or vessel stiffness. We have developed a novel approach based on 4D-CT image tracking (registration) and structural and fluid analysis to address these issues. In our approach, we analyzed the deformation variation and volume variation of the vessels, primarily from 70% to 100% of the cardiac phase, to better define the boundary conditions and the stiffness of the vessels. We used a statistical estimation method based on a hierarchical Bayes model to integrate the 4D-CT measurements with the structural and fluid analysis data. Under these analysis conditions, we performed structural and fluid analysis to determine pressure, flow rate, and CT-FFR. The consistency of this method has been verified by comparing 4D-CT-FFR analysis results derived from five clinical 4D-CT datasets with invasive measurements of FFR. Additionally, phantom experiments with flexible tubes with and without stenosis were performed using pulsating pumps, flow sensors, and pressure sensors. Our results show that the proposed 4D-CT-FFR analysis method has the potential to accurately estimate the effect of coronary artery stenosis on blood flow.
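    The quantity ultimately reported, FFR, is the ratio of mean distal coronary pressure to mean aortic pressure over the cycle; a value of about 0.80 or below is commonly taken as hemodynamically significant. A minimal sketch (the pressure traces would come from the structural and fluid analysis; the function name is illustrative):

```python
def ct_ffr(p_distal, p_aortic):
    """FFR estimate: mean distal coronary pressure / mean aortic pressure,
    each given as a sampled pressure trace over the cardiac cycle."""
    mean = lambda samples: sum(samples) / len(samples)
    return mean(p_distal) / mean(p_aortic)
```

    A healthy vessel yields an FFR near 1.0; a pressure drop across a stenosis pushes the ratio down, which is what the simulated and invasive measurements are compared on.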

  5. 4-D OCT in Developmental Cardiology

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rollins, Andrew M.

    Although strong evidence exists to suggest that altered cardiac function can lead to CHDs, few studies have investigated the influential role of cardiac function and biophysical forces on the development of the cardiovascular system due to a lack of proper in vivo imaging tools. 4-D imaging is needed to decipher the complex spatial and temporal patterns of biomechanical forces acting upon the heart. Numerous solutions over the past several years have demonstrated 4-D OCT imaging of the developing cardiovascular system. This chapter will focus on these solutions and explain their context in the evolution of 4-D OCT imaging. The first sections describe the relevant techniques (prospective gating, direct 4-D imaging, retrospective gating), while later sections focus on 4-D Doppler imaging and measurements of force implementing 4-D OCT Doppler. Finally, the techniques are summarized, and some possible future directions are discussed.

  6. A Desktop Computer Based Workstation for Display and Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Erickson, Bradley J.; Robb, Richard A.

    1987-01-01

    While great advances have been made in developing new and better ways to produce medical images, the technology to efficiently display and analyze them has lagged. This paper describes design considerations and development of a workstation based on an IBM PC/AT for the analysis of three- and four-dimensional medical image data.

  7. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach

    PubMed Central

    Jung, Caroline; Kaul, Michael Gerhard; Werner, Franziska; Them, Kolja; Reimer, Rudolph; Nielsen, Peter; vom Scheidt, Annika; Adam, Gerhard; Knopp, Tobias; Ittrich, Harald

    2016-01-01

    Purpose In-vitro evaluation of the feasibility of 4D real time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. Materials and Methods A guide wire and angioplasty catheter were labeled with a thin layer of magnetic lacquer. For real time MPI a custom-made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed by inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Results Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI-based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. The magnetic marks consist of micrometer-sized ferromagnetic plates mainly composed of iron and iron oxide. Conclusions 4D real time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI-based roadmapping might emerge as a promising tool for radiation-free 4D MPI-guided interventions. PMID:27249022

  8. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness of serial image computing and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging for neurological disorders with subtle morphological changes. PMID:26566399
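
    One simple way to fold a tissue probability map into fuzzy clustering is to weight the standard fuzzy c-means memberships by the per-voxel prior and renormalize. The sketch below illustrates that idea only; it is not the exact formulation of the proposed algorithm:

```python
import numpy as np

def fcm_memberships(x, centers, prior, m=2.0, eps=1e-9):
    """Fuzzy c-means memberships for 1-D intensities x (N,) and K cluster
    centers, softly constrained by a per-voxel tissue prior (N, K).
    The prior-weighting scheme here is an illustration, not the cited
    algorithm's exact formulation."""
    d2 = (x[:, None] - centers[None, :]) ** 2 + eps     # squared distances (N, K)
    u = 1.0 / (d2 ** (1.0 / (m - 1.0)))                 # standard FCM memberships
    u = u / u.sum(axis=1, keepdims=True)
    u = u * prior                                       # fold in the probability map
    return u / u.sum(axis=1, keepdims=True)

x = np.array([0.1, 0.5, 0.9])                           # hypothetical voxel intensities
centers = np.array([0.0, 1.0])                          # two tissue classes
prior = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])  # hypothetical atlas priors
u = fcm_memberships(x, centers, prior)
print(u.round(2))  # rows sum to 1; priors sharpen the intensity-based memberships
```

The ambiguous middle voxel (intensity 0.5) keeps a 50/50 membership because its prior is uninformative, while the priors reinforce the intensity evidence for the other two.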

  9. MO-C-17A-02: A Novel Method for Evaluating Hepatic Stiffness Based On 4D-MRI and Deformable Image Registration

    SciTech Connect

    Cui, T; Liang, X; Czito, B; Palta, M; Bashir, M; Yin, F; Cai, J

    2014-06-15

    Purpose: Quantitative imaging of hepatic stiffness has significant potential in radiation therapy, ranging from treatment planning to response assessment. This study aims to develop a novel, noninvasive method to quantify liver stiffness with 3D strain maps derived from 4D-MRI and deformable image registration (DIR). Methods: Five patients with liver cancer were imaged with an institutionally developed 4D-MRI technique under an IRB-approved protocol. Displacement vector fields (DVFs) across the liver were generated via DIR between different phases of 4D-MRI. The strain tensor at each voxel of interest (VOI) was computed from the relative displacements between the VOI and each of its six adjacent voxels. The three principal strains (E{sub 1}, E{sub 2} and E{sub 3}) of the VOI were derived as the eigenvalues of the strain tensor, which represent the magnitudes of the maximum and minimum stretches. Strain tensors for two regions of interest (ROIs) were calculated and compared for each patient, one within the tumor (ROI{sub 1}) and the other in normal liver distant from the heart (ROI{sub 2}). Results: 3D strain maps were successfully generated for each respiratory phase of 4D-MRI for all patients. Liver deformations induced by both respiration and cardiac motion were observed. Differences in strain values between regions adjacent to and distant from the heart indicate significant deformation caused by cardiac expansion during diastole. The large E{sub 1}/E{sub 2} (∼2) and E{sub 1}/E{sub 3} (∼10) ratios reflect the predominance of liver deformation in the superior-inferior direction. The mean E{sub 1} in ROI{sub 1} (0.12±0.10) was smaller than in ROI{sub 2} (0.15±0.12), reflecting a higher degree of stiffness of the cirrhotic tumor. Conclusion: We have successfully developed a novel method for quantitatively evaluating regional hepatic stiffness based on DIR of 4D-MRI. Our initial findings indicate that liver strain is heterogeneous, and liver tumors may have lower principal strain values.
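
    The strain computation described here, a per-voxel strain tensor built from neighbouring displacements and diagonalised to obtain principal strains, can be sketched with NumPy under the small-strain approximation (synthetic DVF, not patient data):

```python
import numpy as np

def principal_strains(dvf, spacing=(1.0, 1.0, 1.0)):
    """Per-voxel principal strains from a displacement vector field.
    dvf has shape (3, nx, ny, nz); the displacement gradient is taken by
    central differences (the six-neighbour scheme the abstract describes),
    and the small-strain tensor E = 0.5*(G + G^T) is diagonalised."""
    G = np.stack([np.stack(np.gradient(dvf[i], *spacing), axis=0)
                  for i in range(3)], axis=0)    # G[i, j] = du_i / dx_j
    E = 0.5 * (G + np.swapaxes(G, 0, 1))         # symmetric strain tensor
    E = np.moveaxis(E, (0, 1), (-2, -1))         # (..., 3, 3) for eigendecomposition
    return np.linalg.eigvalsh(E)                 # ascending eigenvalues per voxel

# Hypothetical DVF: uniform 2% stretch along x, none along y or z
nx = ny = nz = 8
x = np.arange(nx, dtype=float)
dvf = np.zeros((3, nx, ny, nz))
dvf[0] = 0.02 * x[:, None, None]
e = principal_strains(dvf)
print(e[4, 4, 4])  # principal strains ≈ (0, 0, 0.02)
```

For this uniform stretch the maximum principal strain equals the imposed 2% elongation, which is the kind of quantity compared between the tumor and normal-liver ROIs above.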

  10. Imaging 4-D hydrogeologic processes with geophysics: an example using crosswell electrical measurements to characterize a tracer plume

    NASA Astrophysics Data System (ADS)

    Singha, K.; Gorelick, S. M.

    2005-05-01

    Geophysical methods provide an inexpensive way to collect spatially exhaustive data about hydrogeologic, mechanical or geochemical parameters. In the presence of heterogeneity over multiple scales of these parameters at most field sites, geophysical data can contribute greatly to our understanding about the subsurface by providing important data we would otherwise lack without extensive, and often expensive, direct sampling. Recent work has highlighted the use of time-lapse geophysical data to help characterize hydrogeologic processes. We investigate the potential for making quantitative assessments of sodium-chloride tracer transport using 4-D crosswell electrical resistivity tomography (ERT) in a sand and gravel aquifer at the Massachusetts Military Reservation on Cape Cod. Given information about the relation between electrical conductivity and tracer concentration, we can estimate spatial moments from the 3-D ERT inversions, which give us information about tracer mass, center of mass, and dispersivity through time. The accuracy of these integrated measurements of tracer plume behavior is dependent on spatially variable resolution. The ERT inversions display greater apparent dispersion than tracer plumes estimated by 3D advective-dispersive simulation. This behavior is attributed to reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and differential smoothing from tomographic inversion. The latter is a problem common to overparameterized inverse problems, which often occur when real-world budget limitations preclude extensive well-drilling or additional data collection. These results prompt future work on intelligent methods for reparameterizing the inverse problem and coupling additional disparate data sets.
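
    The spatial moments used to summarize the tracer plume reduce to sums over the concentration field: total mass (zeroth moment), centre of mass (first), and spatial covariance (second central moment), whose growth over time reflects apparent dispersion. A sketch on a synthetic plume (hypothetical grid and concentrations):

```python
import numpy as np

def spatial_moments(conc, coords):
    """Zeroth, first and second central spatial moments of a 3-D
    concentration field: total mass (in voxel units), centre of mass,
    and spatial (co)variance, from which apparent dispersivity follows."""
    m0 = conc.sum()
    xg = np.stack(np.meshgrid(*coords, indexing="ij"), axis=-1)  # (nx, ny, nz, 3)
    com = (conc[..., None] * xg).sum(axis=(0, 1, 2)) / m0
    d = xg - com
    cov = np.einsum("xyzi,xyzj,xyz->ij", d, d, conc) / m0
    return m0, com, cov

# Hypothetical plume: Gaussian blob (sigma = 1 m) centred at (1, 0, 0) m
ax = np.linspace(-5.0, 5.0, 41)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
conc = np.exp(-((X - 1.0) ** 2 + Y ** 2 + Z ** 2) / 2.0)
m0, com, cov = spatial_moments(conc, (ax, ax, ax))
print(com.round(2))           # centre of mass ≈ (1, 0, 0)
print(np.diag(cov).round(2))  # spatial variance ≈ sigma^2 = 1 in each direction
```

Tracking these moments across successive ERT snapshots is exactly how mass, centre of mass and dispersivity are estimated through time; the smoothing inherent in the tomographic inversion then shows up as inflated second moments.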

  11. Digital in-line holography: 4-D imaging and tracking of micro-structures and organisms in microfluidics and biology

    NASA Astrophysics Data System (ADS)

    Garcia-Sucerquia, J.; Xu, W.; Jericho, S. K.; Jericho, M. H.; Tamblyn, I.; Kreuzer, H. J.

    2006-01-01

    In recent years, in-line holography as originally proposed by Gabor, supplemented with numerical reconstruction, has been perfected to the point at which wavelength resolution, both laterally and in depth, is routinely achieved with light using digital in-line holographic microscopy (DIHM). The advantages of DIHM are: (1) simplicity of the hardware (laser, pinhole, CCD camera); (2) magnification is obtained in the numerical reconstruction; (3) maximum information about the 3-D structure, with a depth of field of millimeters; (4) changes in the specimen, and the simultaneous motion of many species, can be followed in 4-D at the camera frame rate. We present results obtained with DIHM in biological and microfluidic applications. By taking advantage of the large depth of field and the plane-to-plane reconstruction capability of DIHM, we can produce 3D representations of the paths followed by micron-sized objects such as suspensions of microspheres and biological samples (cells, algae, protozoa, bacteria). Examples from biology include a study of the motion of bacteria in a diatom and the tracks of algae and paramecia. In microfluidic applications we observe micro-channel flow, the motion of bubbles in water, and evolution during electrolysis. The paper finishes with new results from an underwater version of DIHM.
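
    The plane-to-plane numerical reconstruction at the heart of DIHM can be sketched with the angular spectrum method: two FFTs propagate the recorded field to any chosen depth. This is schematic only; a full Gabor-geometry reconstruction also accounts for the spherical reference wave and twin-image suppression:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex optical field by distance z using the angular
    spectrum method -- the plane-to-plane reconstruction step used in
    digital in-line holographic microscopy (schematic sketch)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Sanity check: propagating forward then backward recovers the field
rng = np.random.default_rng(0)
f0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
f1 = angular_spectrum(angular_spectrum(f0, 50e-6, 0.5e-6, 1e-6), -50e-6, 0.5e-6, 1e-6)
print(np.allclose(f1, f0))  # → True
```

Evaluating the propagation at a stack of depths z gives the volumetric reconstruction from which the 3D particle tracks described above are extracted frame by frame.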

  12. A finite element updating approach for identification of the anisotropic hyperelastic properties of normal and diseased aortic walls from 4D ultrasound strain imaging.

    PubMed

    Wittek, Andreas; Derwich, Wojciech; Karatolios, Konstantinos; Fritzen, Claus Peter; Vogt, Sebastian; Schmitz-Rixen, Thomas; Blase, Christopher

    2016-05-01

    Computational analysis of the biomechanics of the vascular system aims at a better understanding of its physiology and pathophysiology and, eventually, at diagnostic clinical use. Because of great inter-individual variations, such computational models have to be patient-specific with regard to geometry, material properties, applied loads and boundary conditions. Full-field measurements of heterogeneous displacement or strain fields can be used to improve the reliability of parameter identification based on the reduced number of observed load cases that is usually available in an in vivo setting. Time-resolved 3D ultrasound combined with speckle tracking (4D US) is an imaging technique that provides full-field information on heterogeneous aortic wall strain distributions in vivo. In a numerical verification experiment, we have shown the feasibility of identifying nonlinear and orthotropic constitutive behaviour from the observation of just two load cases, even though the load-free geometry is unknown, provided heterogeneous strain fields are available. Only clinically available 4D US measurements of wall motion and diastolic and systolic blood pressure are required as input for the inverse FE updating approach. Application of the developed inverse approach to 4D US data sets of three aortic wall segments from volunteers of different age and pathology resulted in the reproducible identification of three distinct and (patho-)physiologically reasonable constitutive behaviours. The use of patient-individual material properties in biomechanical modelling of AAAs is a step towards more personalized rupture risk assessment. PMID:26455809

  13. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    NASA Astrophysics Data System (ADS)

    Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.

    2015-05-01

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT
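
    One of the plan metrics above, the (generalized) equivalent uniform dose, is a power-law average over a structure's voxel doses; a sketch with hypothetical target doses:

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose (gEUD) of a structure's voxel
    doses: a power-law average. a > 1 emphasises hot spots (serial organs);
    a < 1, and especially negative a, emphasises cold spots (targets);
    a = 1 gives the mean dose."""
    dose = np.asarray(dose, dtype=float)
    return float(np.mean(dose ** a) ** (1.0 / a))

# Hypothetical target voxel doses (Gy) with one under-dosed voxel
target = np.array([66.0, 66.0, 66.0, 50.0])
print(round(geud(target, a=1.0), 1))   # → 62.0 (mean dose)
print(round(geud(target, a=-10), 1))   # pulled toward the 50 Gy cold spot
```

With a strongly negative a, one cold voxel dominates the score, which is why EUD-type metrics are sensitive to the motion-induced cold spots the phantom study quantifies.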

  14. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    PubMed Central

    Bowen, S R; Nyflot, M J; Hermann, C; Groh, C; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-01-01

    Effective positron emission tomography/computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by 6 different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy (VMAT) were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses (EUD), and 2%-2mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, < 5% in treatment planning, and < 2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT

  15. Assessing Cardiac Injury in Mice With Dual Energy-MicroCT, 4D-MicroCT, and MicroSPECT Imaging After Partial Heart Irradiation

    SciTech Connect

    Lee, Chang-Lung; Min, Hooney; Befera, Nicholas; Clark, Darin; Qi, Yi; Das, Shiva; Johnson, G. Allan; Badea, Cristian T.; Kirsch, David G.

    2014-03-01

    Purpose: To develop a mouse model of cardiac injury after partial heart irradiation (PHI) and to test whether dual energy (DE)-microCT and 4-dimensional (4D)-microCT can be used to assess cardiac injury after PHI to complement myocardial perfusion imaging using micro-single photon emission computed tomography (SPECT). Methods and Materials: To study cardiac injury from tangent field irradiation in mice, we used a small-field biological irradiator to deliver a single dose of 12 Gy x-rays to approximately one-third of the left ventricle (LV) of Tie2Cre; p53{sup FL/+} and Tie2Cre; p53{sup FL/−} mice, where 1 or both alleles of p53 are deleted in endothelial cells. Four and 8 weeks after irradiation, mice were injected with gold and iodinated nanoparticle-based contrast agents, and imaged with DE-microCT and 4D-microCT to evaluate myocardial vascular permeability and cardiac function, respectively. Additionally, the same mice were imaged with microSPECT to assess myocardial perfusion. Results: After PHI with tangent fields, DE-microCT scans showed a time-dependent increase in accumulation of gold nanoparticles (AuNp) in the myocardium of Tie2Cre; p53{sup FL/−} mice. In Tie2Cre; p53{sup FL/−} mice, extravasation of AuNp was observed within the irradiated LV, whereas in the myocardium of Tie2Cre; p53{sup FL/+} mice, AuNp were restricted to blood vessels. In addition, data from DE-microCT and microSPECT showed a linear correlation (R{sup 2} = 0.97) between the fraction of the LV that accumulated AuNp and the fraction of LV with a perfusion defect. Furthermore, 4D-microCT scans demonstrated that PHI caused a markedly decreased ejection fraction, and higher end-diastolic and end-systolic volumes, to develop in Tie2Cre; p53{sup FL/−} mice, which were associated with compensatory cardiac hypertrophy of the heart that was not irradiated. Conclusions: Our results show that DE-microCT and 4D-microCT with nanoparticle-based contrast agents are novel imaging approaches

  16. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for arterial spin labeled MRI, T1 relaxometry, T2 relaxometry and diffusion-weighted imaging, providing the command-line documentation used to generate the figures in the manuscript. The software and data (in the NIfTI file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model-fitting framework applied to diffusion-weighted imaging and T2 relaxometry, in order both to improve parameter estimation in these models and to generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that users may adapt and develop their own functionality as they require. PMID:26972806
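
    As a flavour of the voxel-wise model fitting such a package performs, the mono-exponential T2 relaxometry model reduces to a straight-line fit in log space. A noise-free sketch with hypothetical echo times (NiftyFit's own fitting is far more general):

```python
import numpy as np

# Mono-exponential T2 relaxometry model: S(TE) = S0 * exp(-TE / T2).
# Taking logarithms gives log S = log S0 - TE / T2, a straight line,
# so a least-squares line fit recovers T2 and S0 (noise-free example).
te = np.array([10.0, 30.0, 50.0, 70.0, 90.0, 110.0])  # echo times (ms)
signal = 1000.0 * np.exp(-te / 80.0)                  # true T2 = 80 ms, S0 = 1000

slope, intercept = np.polyfit(te, np.log(signal), 1)
t2 = -1.0 / slope
s0 = np.exp(intercept)
print(round(t2), round(s0))  # → 80 1000
```

With noisy data the log-linear fit is biased at low SNR, which is one reason dedicated packages fit the nonlinear model directly.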

  17. SU-E-J-151: Dosimetric Evaluation of DIR Mapped Contours for Image Guided Adaptive Radiotherapy with 4D Cone-Beam CT

    SciTech Connect

    Balik, S; Weiss, E; Williamson, J; Hugo, G; Jan, N; Zhang, L; Roman, N; Christensen, G

    2014-06-01

    Purpose: To estimate dosimetric errors resulting from using contours deformably mapped from planning CT to 4D cone-beam CT (CBCT) images for image-guided adaptive radiotherapy of locally advanced non-small cell lung cancer (NSCLC). Methods: Ten locally advanced NSCLC patients underwent one planning 4D fan-beam CT (4DFBCT) and weekly 4DCBCT scans. Multiple physicians delineated the gross tumor volume (GTV) and normal structures in the planning CT images, and the GTV only in the CBCT images. Manual contours were mapped from planning CT to CBCTs using the small-deformation, inverse-consistent linear elastic (SICLE) algorithm for two scans in each patient. Two physicians reviewed and rated the DIR-mapped (auto) and manual GTV contours as clinically acceptable (CA), clinically acceptable after minor modification (CAMM), or clinically unacceptable (CU). Mapped normal structures were visually inspected, corrected if necessary, and used to override tissue density for dose calculation. A CTV (6 mm expansion of the GTV) and PTV (5 mm expansion of the CTV) were created. VMAT plans were generated using the DIR-mapped contours to deliver 66 Gy in 33 fractions with 95% and 100% coverage (V66) of the PTV and CTV, respectively. Plan evaluation for V66 was based on the manual PTV and CTV contours. Results: Mean PTV V66 was 84% (range 75%-95%) and mean CTV V66 was 97% (range 93%-100%) for CAMM-scored plans (12 plans); and 90% (range 80%-95%) and 99% (range 95%-100%) for CA-scored plans (7 plans). The difference in V66 between CAMM and CA was significant for the PTV (p = 0.03) and approached significance for the CTV (p = 0.07). Conclusion: The quality of DIR-mapped contours directly impacted plan quality for 4DCBCT-based adaptation. Larger safety margins may be needed when planning with auto contours for IGART with 4DCBCT images. Research was supported by NIH P01CA116602.
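
    The V66 metric used for plan evaluation is simply the fraction of a structure's voxels receiving at least 66 Gy. A sketch with a synthetic dose grid and CTV mask (hypothetical numbers):

```python
import numpy as np

def coverage(dose, mask, threshold=66.0):
    """Fraction of a structure (boolean mask) receiving at least
    `threshold` Gy -- the V66-style plan-evaluation metric."""
    return float((dose[mask] >= threshold).mean())

# Hypothetical 3-D dose grid and CTV mask
dose = np.full((20, 20, 20), 60.0)
dose[5:15, 5:15, 5:15] = 67.0        # prescription-dose region
ctv = np.zeros_like(dose, dtype=bool)
ctv[4:15, 5:15, 5:15] = True         # CTV extends one slice beyond the 67 Gy region
print(round(coverage(dose, ctv, 66.0), 2))  # → 0.91
```

Evaluating the DIR-mapped plan against the manual contours, as done above, amounts to recomputing this fraction with the manual mask in place of the auto one.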

  18. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models, and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in the x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. PMID:23218511
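
    The headline accuracy figure, the mean distance between manually digitised and automatically tracked landmarks, is a mean Euclidean norm over corresponding 3-D points. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def mean_landmark_distance(manual, tracked):
    """Mean Euclidean distance (same units as the coordinates) between
    manually digitised and automatically tracked 3-D landmark sets."""
    manual, tracked = np.asarray(manual), np.asarray(tracked)
    return float(np.linalg.norm(manual - tracked, axis=1).mean())

# Hypothetical coordinates (mm) for three corresponding landmarks
manual  = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0], [3.0, 3.0, 3.0]])
tracked = np.array([[0.3, 0.0, 0.0], [10.0, 5.4, 2.0], [3.0, 3.0, 3.5]])
print(round(mean_landmark_distance(manual, tracked), 2))  # → 0.4
```

Per-axis discrepancies (the 0.17 mm figure) are obtained the same way but from the absolute coordinate differences rather than the 3-D norms.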

  19. 4D megahertz optical coherence tomography (OCT): imaging and live display beyond 1 gigavoxel/sec (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huber, Robert A.; Draxinger, Wolfgang; Wieser, Wolfgang; Kolb, Jan Philip; Pfeiffer, Tom; Karpf, Sebastian N.; Eibl, Matthias; Klein, Thomas

    2016-03-01

    Over the last 20 years, optical coherence tomography (OCT) has become a valuable diagnostic tool in ophthalmology, with tens of thousands of devices sold to date. Other applications, like intravascular OCT in cardiology and gastro-intestinal imaging, will follow. OCT provides 3-dimensional image data of biological tissue in vivo with microscopic resolution. In most applications, off-line processing of the acquired OCT data is sufficient. However, for OCT applications like OCT-aided surgical microscopes, functional OCT imaging of tissue after a stimulus, or interactive endoscopy, an OCT engine capable of acquiring, processing and displaying large, high-quality 3D OCT data sets at video rate is highly desired. We developed such a prototype OCT engine and demonstrate live OCT at 25 volumes per second with a volume size of 320x320x320 voxels. The computational load of more than 1.5 TFLOPS was handled by a GTX 690 graphics processing unit with more than 3000 stream processors operating in parallel. In the talk, we will describe the optics and electronics hardware as well as the software of the system in detail and analyze current limitations. The talk also focuses on new OCT applications where such a system improves diagnosis and the monitoring of medical procedures. The additional acquisition of hyperspectral stimulated Raman signals with the system will be discussed.

  20. Usefulness of four dimensional (4D) PET/CT imaging in the evaluation of thoracic lesions and in radiotherapy planning: Review of the literature.

    PubMed

    Sindoni, Alessandro; Minutoli, Fabio; Pontoriero, Antonio; Iatì, Giuseppe; Baldari, Sergio; Pergolizzi, Stefano

    2016-06-01

    In the past decade, positron emission tomography (PET) has become a routinely used methodology for the assessment of solid tumors, able to detect functional abnormalities even before they become morphologically evident on conventional imaging. PET imaging has been reported to be useful in characterizing solitary pulmonary nodules, guiding biopsy, improving lung cancer staging, guiding therapy, monitoring treatment response and predicting outcome. This review focuses on the most relevant and recent literature findings, highlighting the current role of PET/CT and the evaluation of the 4D-PET/CT modality for radiation therapy planning applications. Current evidence suggests that gross tumor volume delineation based on 4D-PET/CT information may be the best approach currently available for thoracic cancers (lung and non-lung lesions). In our opinion, its use in this clinical setting is strongly encouraged, as it may improve patient treatment outcomes in radiation therapy for cancers of the thoracic region, involving not only the lung but also lymph nodes and esophageal tissue. These literature results warrant further investigation in future prospective studies, especially in the setting of dose escalation. PMID:27133755

  1. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
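
    The flavour of denoising-based reconstruction can be conveyed by a much simpler plug-in-denoiser iteration: a gradient step on the data fidelity followed by a denoising step. D-GAMP itself additionally tracks message variances and applies an Onsager correction term; neither is modelled in this sketch, and the moving-average "denoiser" is a stand-in for the far more capable image denoisers D-GAMP can plug in:

```python
import numpy as np

def denoise(x, strength=0.2):
    """Toy denoiser: light moving-average smoothing as an implicit
    smoothness prior (a stand-in for a real image denoiser)."""
    kernel = np.array([strength / 2, 1 - strength, strength / 2])
    return np.convolve(x, kernel, mode="same")

def reconstruct(A, y, iters=200):
    """Minimal plug-in-denoiser iteration: gradient step on ||Ax - y||^2,
    then a denoising step. Not D-GAMP itself, which also tracks message
    variances and applies an Onsager correction."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)     # data-fidelity gradient step
        x = denoise(x)                        # denoiser imposes the prior
    return x

# Under-determined toy problem: 25 random measurements of a 40-sample smooth signal
rng = np.random.default_rng(3)
x_true = np.sin(np.linspace(0.0, np.pi, 40))
A = rng.standard_normal((25, 40)) / 5.0
y = A @ x_true
x_hat = reconstruct(A, y)
```

Because the true signal is smooth, the denoising step steers the under-determined solve toward it, which is the same mechanism that lets D-GAMP exploit image priors in limited-view CT.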

  2. Multimodal 4D imaging of cell-pathogen interactions in the lungs provides new insights into pulmonary infections

    NASA Astrophysics Data System (ADS)

    Fiole, Daniel; Douady, Julien; Cleret, Aurélie; Garraud, Kévin; Mathieu, Jacques; Quesnel-Hellmann, Anne; Tournier, Jean-Nicolas

    2011-07-01

    Lung efficiency as a gas-exchange organ depends on the delicate balance its associated mucosal immune system maintains between inflammation and sterility. In this study, we developed a dynamic imaging protocol using confocal and two-photon excitation fluorescence (2PEF) microscopy on freshly harvested infected lungs. This modus operandi allowed the collection of important information about CX3CR1+ pulmonary cells. This major immune cell subset turned out to be distributed anisotropically in the lungs: subpleural, parenchymal and bronchial CX3CR1+ cells were described. The response of parenchymal CX3CR1+ cells to LPS activation was analyzed using Matlab software, demonstrating a dramatic increase in average cell speed. Interactions between Bacillus anthracis spores and CX3CR1+ dendritic cells were then investigated, providing not only evidence of the involvement of CX3CR1+ cells in pathogen uptake but also details about the capture mechanisms.

  3. Simultaneous de-noising in phase contrast tomography

    NASA Astrophysics Data System (ADS)

    Koehler, Thomas; Roessl, Ewald

    2012-07-01

    In this work, we investigate methods for de-noising of tomographic differential phase contrast and absorption contrast images. We exploit the fact that in grating-based differential phase contrast imaging (DPCI), first, several images are acquired simultaneously in exactly the same geometry, and second, these different images can show very different contrast-to-noise-ratios. These features of grating-based DPCI are used to generalize the conventional bilateral filter. Experiments using simulations show a superior de-noising performance of the generalized algorithm compared with the conventional one.
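
    The generalization described here builds on the bilateral filter; the closely related joint (cross) bilateral filter, in which range weights are taken from a second, higher-CNR image acquired in the same geometry, can be sketched as follows (1-D illustration, not the authors' exact filter):

```python
import numpy as np

def joint_bilateral_1d(noisy, guide, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Joint (cross) bilateral filter sketch: range weights come from a
    second, higher-CNR image acquired in the same geometry, as when an
    absorption image guides the noisier phase-contrast image."""
    out = np.empty_like(noisy)
    for i in range(len(noisy)):
        lo, hi = max(0, i - radius), min(len(noisy), i + radius + 1)
        j = np.arange(lo, hi)
        w = (np.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))                 # spatial weight
             * np.exp(-((guide[j] - guide[i]) ** 2) / (2 * sigma_r ** 2)))  # range weight from guide
        out[i] = np.sum(w * noisy[j]) / np.sum(w)
    return out

# Step edge: the clean guide stops the filter from averaging across it
guide = np.concatenate([np.zeros(32), np.ones(32)])
noisy = guide + np.random.default_rng(2).normal(0.0, 0.2, 64)
den = joint_bilateral_1d(noisy, guide)
# Noise is reduced on each side of the step while the edge stays sharp.
```

The appeal in DPCI is exactly the one the abstract identifies: the two contrasts share edges but differ in noise, so the cleaner image can define the range weights for the noisier one.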

  4. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease; such imaging cannot be done truly in real time and in 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing the acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution through the heart was subsequently imaged. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from the background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which had not previously been achieved with any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential of optoacoustic cardiac imaging and is expected to make a major contribution to future preclinical studies of animal models of cardiovascular health and disease.
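
    With volume estimates available at ~7-8 frames per cycle, the beat-by-beat functional parameters follow directly. A sketch with hypothetical left-ventricular volumes:

```python
import numpy as np

def beat_metrics(volumes):
    """Stroke volume and ejection fraction from one beat's sequence of
    left-ventricular volumes (end-diastolic max, end-systolic min)."""
    edv, esv = float(np.max(volumes)), float(np.min(volumes))
    sv = edv - esv
    return sv, sv / edv

# Hypothetical LV volumes (uL) at 8 frames over one mouse cardiac cycle
lv = np.array([50.0, 48.0, 40.0, 30.0, 25.0, 28.0, 38.0, 47.0])
sv, ef = beat_metrics(lv)
print(sv, round(ef, 2))  # → 25.0 0.5
```

Repeating this per cycle is what yields the beat-by-beat stroke volume and ejection fraction traces the study reports.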

  5. 4D Imaging of Salt Precipitation during Evaporation from Saline Porous Media Influenced by the Particle Size Distribution

    NASA Astrophysics Data System (ADS)

    Norouzi Rad, M.; Shokri, N.

    2014-12-01

    Understanding the physics of water evaporation from saline porous media is important in many contexts, such as vegetation and plant growth, biodiversity in soil, and the durability of building materials. To investigate the effect of particle size distribution on the dynamics of salt precipitation in saline porous media during evaporation, we applied an X-ray micro-tomography technique. Six samples of quartz sand with different grain size distributions were used in the present study, enabling us to constrain the effects of particle and pore sizes on salt precipitation patterns and dynamics. The pore size distributions were computed from the pore-scale X-ray images. The packed beds were saturated with a 3 molal NaCl solution, and X-ray imaging was continued for one day at a temporal resolution of 30 min, yielding pore-scale information about the evaporation and precipitation dynamics. Our results show more precipitation at the early stage of evaporation in the sand with the larger particle size, owing to the presence of fewer evaporation sites at the surface. The presence of more preferential evaporation sites at the surface of finer sands significantly modified the patterns and thickness of the salt crust deposited on the surface: a thinner salt crust covering a larger area formed on the sand with smaller particles, as opposed to the thicker, patchy crusts on samples with larger particles. Our results provide new insights into the physics of salt precipitation in porous media during evaporation.

  6. Fractional domain varying-order differential denoising method

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising aims to remove the noise from a corrupted image while retaining edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become an active research topic. The fractional-order anisotropic diffusion method, which has received much interest in the literature, produces a less blocky effect and preserves edges in image denoising. Building on this method, we propose a new image denoising method in which a fractional differential of varying order, rather than of constant order, is used. Theoretical analysis and experimental results show that, compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed varying-order fractional differential denoising model preserves structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
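
    The varying-order fractional differential underlying such a scheme can be sketched with Grünwald-Letnikov coefficients and a per-sample order α(x); this is a generic illustration of the operator, not the authors' diffusion model:

```python
def gl_coeffs(alpha, n):
    """Grünwald-Letnikov coefficients (-1)^k * C(alpha, k) for k = 0..n-1,
    computed by the recursion c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

def varying_order_fracdiff(signal, orders, memory=8):
    """Backward fractional difference of a 1D signal with a per-sample order."""
    out = []
    for i, alpha in enumerate(orders):
        c = gl_coeffs(alpha, min(memory, i + 1))
        out.append(sum(ck * signal[i - k] for k, ck in enumerate(c)))
    return out

# alpha = 1 everywhere reduces to the first-order backward difference
sig = [0.0, 1.0, 4.0, 9.0]
print(varying_order_fracdiff(sig, [1.0] * 4))  # → [0.0, 1.0, 3.0, 5.0]
```

A varying-order scheme would choose a larger α near edges and a smaller α in smooth regions; here the order array is simply supplied by the caller.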

  7. Nonlocal two dimensional denoising of frequency specific chirp evoked ABR single trials.

    PubMed

    Schubert, J Kristof; Teuber, Tanja; Steidl, Gabriele; Strauss, Daniel J; Corona-Strauss, Farah I

    2012-01-01

    Recently, we have shown that denoising evoked potential (EP) images is possible using two-dimensional diffusion filtering methods. This restoration allows regularities over multiple stimulations to be integrated into the denoising process. In the present work we propose the nonlocal means (NLM) method for EP image denoising. The EP images were constructed using auditory brainstem responses (ABR) collected in young healthy subjects using frequency-specific and broadband chirp stimulations. It is concluded that the NLM method is more efficient than conventional approaches in EP image denoising, especially in the case of ABRs, where the relevant information can easily be masked by ongoing EEG activity, i.e., the signals suffer from a rather low signal-to-noise ratio (SNR). The proposed approach is intended for a posteriori denoising of single trials after the experiment, not for real-time applications. PMID:23366439

  8. An improved non-local means filter for denoising in brain magnetic resonance imaging based on fuzzy cluster

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Sang, Xinzhu; Xing, Shujun; Wang, Bo

    2014-11-01

    A non-local means (NLM) filter is combined with an appropriate fuzzy cluster criterion and evaluated in both objective and subjective manners on synthetic brain Magnetic Resonance Imaging (MRI) data. Experimental results show that noise is effectively suppressed while image details are well preserved, compared with the traditional NLM method. Meanwhile, quantitative and qualitative results indicate that artifacts are greatly reduced and brain MR images are clearly enhanced by the proposed method.

  9. Validating and improving CT ventilation imaging by correlating with ventilation 4D-PET/CT using {sup 68}Ga-labeled nanoparticles

    SciTech Connect

    Kipritidis, John Keall, Paul J.; Siva, Shankar; Hofman, Michael S.; Callahan, Jason; Hicks, Rodney J.

    2014-01-15

    Purpose: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with {sup 68}Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. Methods: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory-correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (V{sub HU}) or the Jacobian determinant of deformation (V{sub Jac}). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρV{sub HU} and ρV{sub Jac}) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σ{sub m} = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d{sub 20} for the (0 − 20)th functional percentile volumes. Results: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρV{sub HU}) with σ{sub m} = 3 mm. This leads to correlation values in the ranges 0.22 ⩽ r ⩽ 0.76 and 0.38 ⩽ d{sub 20} ⩽ 0.68, with r{sup ¯}=0.42±0.16 and d{sup ¯}{sub 20}=0.52±0.09 averaged over the 12 patients. Compared to Jacobian-based metrics, HU-based metrics lead to statistically significant

  10. 4D seismic to image a thin carbonate reservoir during a miscible CO2 flood: Hall-Gurney Field, Kansas, USA

    USGS Publications Warehouse

    Raef, A.E.; Miller, R.D.; Franseen, E.K.; Byrnes, A.P.; Watney, W.L.; Harrison, W.E.

    2005-01-01

    The movement of miscible CO2 injected into a shallow (900 m), thin (3.6-6 m) carbonate reservoir was monitored using the high-resolution parallel progressive blanking (PPB) approach. The approach concentrated on repeatability during acquisition and processing, and on the use of amplitude envelope 4D horizon attributes. Comparison of production data and reservoir simulations with the seismic images provided a measure of the effectiveness of time-lapse (TL) seismic monitoring in detecting weak anomalies associated with changes in fluid concentration. Specifically, the method aided the analysis of high-resolution data to distinguish subtle seismic characteristics and associated trends related to depositional lithofacies, geometries, and structural elements of this carbonate reservoir that impact fluid character and EOR efforts.

  11. Data assimilation of non-conventional observations using GOES-R flash lightning: 1D+4D-VAR approach vs. assimilation of images (Invited)

    NASA Astrophysics Data System (ADS)

    Navon, M. I.; Stefanescu, R.

    2013-12-01

    Previous assimilation of lightning used nudging approaches. We develop three approaches, namely 3D-VAR WRFDA and 1D+nD-VAR (n = 3, 4) WRFDA. The present research uses Convective Available Potential Energy (CAPE) as a proxy between lightning data and model variables. To test the performance of the aforementioned schemes, we assess the quality of the resulting analyses and precipitation forecasts compared with those from a control experiment, and verify them against NCEP stage IV precipitation. Results demonstrate that assimilating lightning observations improves precipitation statistics during the assimilation window and for 3-7 h thereafter. The 1D+4D-VAR approach yielded the best performance, significantly reducing precipitation RMSE by 25% and 27.5% compared with the control during the assimilation window for two tornadic test cases. Finally, we propose a new approach to assimilate 2-D images of lightning flashes based on pixel intensity, mitigating the dimensionality problem with a reduced-order method.

  12. 4-D imaging of sub-second dynamics in pore-scale processes using real-time synchrotron X-ray tomography

    NASA Astrophysics Data System (ADS)

    Dobson, Katherine J.; Coban, Sophia B.; McDonald, Samuel A.; Walsh, Joanna N.; Atwood, Robert C.; Withers, Philip J.

    2016-07-01

    A variable volume flow cell has been integrated with state-of-the-art ultra-high-speed synchrotron X-ray tomography imaging. The combination allows the first real-time (sub-second) capture of dynamic pore (micron)-scale fluid transport processes in 4-D (3-D + time). With 3-D data volumes acquired at up to 20 Hz, we perform in situ experiments that capture high-frequency pore-scale dynamics in 5-25 mm diameter samples with voxel (3-D equivalent of a pixel) resolutions of 2.5 to 3.8 µm. The data are free from motion artefacts and can be spatially registered or collected in the same orientation, making them suitable for detailed quantitative analysis of the dynamic fluid distribution pathways and processes. The methods presented here are capable of capturing a wide range of high-frequency nonequilibrium pore-scale processes including wetting, dilution, mixing, and reaction phenomena, without sacrificing significant spatial resolution. As well as fast streaming (continuous acquisition) at 20 Hz, they also allow larger-scale and longer-term experimental runs to be sampled intermittently at lower frequency (time-lapse imaging), benefiting from fast image acquisition rates to prevent motion blur in highly dynamic systems. This marks a major technical breakthrough for quantification of high-frequency pore-scale processes: processes that are critical for developing and validating more accurate multiscale flow models through spatially and temporally heterogeneous pore networks.

  13. Integration of image/video understanding engine into 4D/RCS architecture for intelligent perception-based behavior of robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-10-01

    To be completely successful, robots need reliable perceptual systems similar to human vision. It is hard to use geometric operations for the processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different cues to establish the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows visual information to be disambiguated and supports effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture to better interpret images and video for situation awareness, target recognition, navigation, and action.

  14. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    SciTech Connect

    Bildhauer, Michael Fuchs, Martin

    2012-12-15

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
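
    As a concrete instance of such a TV-type variational model, one can minimize a smoothed 1D TV energy of linear growth by gradient descent; this generic sketch (energy form and parameters are illustrative, not taken from the paper) uses the regularizer φ(t) = sqrt(t² + ε²):

```python
import math

def tv_denoise_1d(f, lam=0.1, eps=0.1, step=0.2, iters=500):
    """Gradient descent on the smoothed-TV energy
    E(u) = sum_i (u_i - f_i)^2 / 2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps^2),
    a regularizer of linear growth with weak ellipticity from the smoothing eps."""
    u = list(f)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(len(u))]  # fidelity gradient
        for i in range(len(u) - 1):
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps * eps)  # phi'(d)
            g[i] -= lam * w
            g[i + 1] += lam * w
        u = [ui - step * gi for ui, gi in zip(u, g)]
    return u

# a noisy step edge: TV smooths the flat parts while keeping the jump
noisy = [0.1, -0.1, 0.05, 1.1, 0.9, 1.05]
print([round(x, 2) for x in tv_denoise_1d(noisy)])
```

The step size is chosen below the stability bound implied by lam/eps, so each iteration decreases the energy.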

  15. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
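
    The first step the abstract describes, stacking projections into a 3D image and extracting small overlapping 3D blocks, can be sketched as follows (block and stride sizes are illustrative, not the paper's values):

```python
def extract_blocks_3d(volume, block=4, stride=2):
    """Extract overlapping block x block x block patches from a 3D array
    (nested lists indexed [z][y][x]); returns (corner, patch) pairs."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    patches = []
    for z in range(0, nz - block + 1, stride):
        for y in range(0, ny - block + 1, stride):
            for x in range(0, nx - block + 1, stride):
                patch = [[row[x:x + block] for row in volume[z + dz][y:y + block]]
                         for dz in range(block)]
                patches.append(((z, y, x), patch))
    return patches

# a toy 6x6x6 "stack of projections": the voxel value encodes its position
vol = [[[z * 100 + y * 10 + x for x in range(6)] for y in range(6)] for z in range(6)]
blocks = extract_blocks_3d(vol, block=4, stride=2)
print(len(blocks))  # 2 corner positions per axis → 8 blocks
```

The extracted blocks would then be clustered and jointly sparse-coded in the learned dictionary; that stage is not sketched here.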

  16. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. PMID:27055224

  17. Direct 4D PET MLEM reconstruction of parametric images using the simplified reference tissue model with the basis function method for [¹¹C]raclopride.

    PubMed

    Gravel, Paul; Reader, Andrew J

    2015-06-01

    This work assesses the one-step late maximum likelihood expectation maximization (OSL-MLEM) 4D PET reconstruction algorithm for direct estimation of parametric images from raw PET data when using the simplified reference tissue model with the basis function method (SRTM-BFM) for the kinetic analysis. To date, the OSL-MLEM method has been evaluated using kinetic models based on two-tissue compartments with an irreversible component. We extend the evaluation of this method for two-tissue compartments with a reversible component, using SRTM-BFM on simulated 3D + time data sets (with use of [(11)C]raclopride time-activity curves from real data) and on real data sets acquired with the high resolution research tomograph. The performance of the proposed method is evaluated by comparing voxel-level binding potential (BPND) estimates with those obtained from conventional post-reconstruction kinetic parameter estimation. For the commonly chosen number of iterations used in practice, our results show that for the 3D + time simulation, the direct method delivers results with lower (%)RMSE at the normal count level (decreases of 9-10 percentage points, corresponding to a 38-44% reduction), and also at low count levels (decreases of 17-21 percentage points, corresponding to a 26-36% reduction). As for the real 3D data set, the results obtained follow a similar trend, with the direct reconstruction method offering a 21% decrease in (%)CV compared to the post reconstruction method at low count levels. Thus, based on the results presented herein, using the SRTM-BFM kinetic model in conjunction with the OSL-MLEM direct 4D PET MLEM reconstruction method offers an improvement in performance when compared to conventional post reconstruction methods. PMID:25992999

  18. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be disturbed by many external factors. The heart sound is a weak electrical signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain using MATLAB's signal processing functions and decomposed at multiple levels by the wavelet transform. Soft thresholding is then applied to the detail coefficients to eliminate noise, so that the denoising of the signal is significantly improved. The denoised signal is reconstructed by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency interference and 35 Hz electromechanical interference are eliminated using a notch filter.
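
    The wavelet soft-thresholding step can be illustrated with a single-level Haar transform standing in for MATLAB's multi-level decomposition; a minimal pure-Python sketch (threshold and signal values are illustrative):

```python
def haar_forward(x):
    """One-level Haar transform (even-length input): (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(x, t=0.5):
    """Threshold only the detail coefficients, then reconstruct."""
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))

print(denoise([1.0, 1.2, 3.0, 2.9], t=0.5))  # small details removed
```

A multi-level version would recurse on the approximation coefficients before thresholding each detail band, as in the paper's MATLAB workflow.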

  19. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
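
    The parameter-sweep facet can be sketched generically: run the denoiser under each parameter combination and keep the one minimizing MSE against the noiseless reference. The denoiser and data below are toy stand-ins, not GD3D's GPU kernels:

```python
def mse(img, ref):
    """Mean squared error between two equal-length signals."""
    return sum((a - b) ** 2 for a, b in zip(img, ref)) / len(ref)

def parameter_sweep(noisy, ref, denoise_fn, param_grid):
    """Return the parameter set whose denoised output best matches the
    noiseless reference (the selection strategy described above)."""
    return min(param_grid, key=lambda p: mse(denoise_fn(noisy, **p), ref))

# toy 1D 'image' and a hypothetical one-parameter box-smoothing denoiser
def box_smooth(x, radius):
    return [sum(x[max(0, i - radius):i + radius + 1]) /
            len(x[max(0, i - radius):i + radius + 1]) for i in range(len(x))]

ref = [0.0] * 8
noisy = [0.5, -0.4, 0.3, -0.2, 0.4, -0.5, 0.2, -0.3]
best = parameter_sweep(noisy, ref, box_smooth, [{"radius": r} for r in (0, 1, 2)])
print(best)
```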

  20. Establishing a framework to implement 4D XCAT Phantom for 4D radiotherapy research

    PubMed Central

    Panta, Raj K.; Segars, Paul; Yin, Fang-Fang; Cai, Jing

    2015-01-01

    Aims To establish a framework to implement the 4D integrated extended cardiac torso (XCAT) digital phantom for 4D radiotherapy (RT) research. Materials and Methods A computer program was developed to facilitate the characterization and implementation of the 4D XCAT phantom. The program can (1) generate 4D XCAT images with customized parameter files; (2) review 4D XCAT images; (3) generate composite images from 4D XCAT images; (4) track the motion of a selected region-of-interest (ROI); (5) convert XCAT raw binary images into DICOM format; and (6) analyse clinically acquired 4DCT images and real-time position management (RPM) respiratory signals. The motion-tracking algorithm was validated by comparison with a manual method. Major characteristics of the 4D XCAT phantom were studied. Results The comparison between motion-tracking and manual measurements of the lesion motion trajectory showed a small difference between them (mean difference in motion amplitude: 1.2 mm). The maximum lesion motion decreased nearly linearly (R2 = 0.97) as its distance to the diaphragm (DD) increased. At any given DD, lesion motion amplitude increased nearly linearly (R2 range: 0.89 to 0.95) as the inputted diaphragm motion increased. For a given diaphragm motion, the lesion motion is independent of the lesion size at any given DD. The 4D XCAT phantom can closely reproduce irregular breathing profiles. An end-to-end test showed that clinically comparable treatment plans can be generated successfully based on 4D XCAT images. Conclusions An integrated computer program has been developed to generate, review, analyse, process, and export 4D XCAT images. A framework has been established to implement the 4D XCAT phantom for 4D RT research. PMID:23361276

  1. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures was used to assess the quality of supervised clusters in the original and filtered spaces. The resulting rank orders were analyzed using the Borda criterion to find the denoising-similarity measure combination that yields the best cluster quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.
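
    The Borda-style aggregation of rank orders used in the analysis can be sketched as follows (the filter names and the three rankings are hypothetical, standing in for the per-index rank orders):

```python
def borda_aggregate(rankings):
    """Combine several rank orders by Borda count: each ranking awards
    n-1 points to its first item, n-2 to the second, and so on; items are
    then sorted by total score, highest first."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical rank orders of denoising filters under three cluster indices
rankings = [
    ["median", "nlm", "bilateral", "aniso"],
    ["median", "bilateral", "nlm", "aniso"],
    ["nlm", "median", "aniso", "bilateral"],
]
print(borda_aggregate(rankings))  # → ['median', 'nlm', 'bilateral', 'aniso']
```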

  2. Birdsong Denoising Using Wavelets.

    PubMed

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  3. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  4. A Novel Fast Helical 4D-CT Acquisition Technique to Generate Low-Noise Sorting Artifact–Free Images at User-Selected Breathing Phases

    SciTech Connect

    Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel

    2014-05-01

    Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.
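
    A voxel-specific breathing motion model of the kind described can be sketched as a least-squares fit of tissue displacement against the surrogate amplitude and its rate; this simplified two-parameter form is illustrative, not the authors' exact model:

```python
def fit_motion_model(v, f, d):
    """Least-squares fit of displacement d ≈ a*v + b*f via 2x2 normal
    equations, where v is the breathing-surrogate amplitude and f its rate
    (a simplified sketch; variable names are illustrative)."""
    svv = sum(x * x for x in v); sff = sum(x * x for x in f)
    svf = sum(x * y for x, y in zip(v, f))
    svd = sum(x * y for x, y in zip(v, d)); sfd = sum(x * y for x, y in zip(f, d))
    det = svv * sff - svf * svf
    a = (svd * sff - sfd * svf) / det
    b = (sfd * svv - svd * svf) / det
    return a, b

# synthetic surrogate samples; true model d = 2.0*v + 0.5*f
v = [0.0, 0.3, 0.8, 1.0, 0.7, 0.2]
f = [1.0, 0.9, 0.3, 0.0, -0.6, -0.9]
d = [2.0 * vi + 0.5 * fi for vi, fi in zip(v, f)]
a, b = fit_motion_model(v, f, d)
print(round(a, 6), round(b, 6))  # recovers ≈ 2.0 and 0.5
```

In the paper's workflow, a fit of this kind is performed per voxel from the 25 deformably registered images, and the model prediction error is the residual between predicted and registered tissue positions.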

  5. Enhanced Optoelectronic Performance of a Passivated Nanowire-Based Device: Key Information from Real-Space Imaging Using 4D Electron Microscopy.

    PubMed

    Khan, Jafar I; Adhikari, Aniruddha; Sun, Jingya; Priante, Davide; Bose, Riya; Shaheen, Basamat S; Ng, Tien Khee; Zhao, Chao; Bakr, Osman M; Ooi, Boon S; Mohammed, Omar F

    2016-05-01

    Managing trap states and understanding their role in ultrafast charge-carrier dynamics, particularly at surface and interfaces, remains a major bottleneck preventing further advancements and commercial exploitation of nanowire (NW)-based devices. A key challenge is to selectively map such ultrafast dynamical processes on the surfaces of NWs, a capability so far out of reach of time-resolved laser techniques. Selective mapping of surface dynamics in real space and time can only be achieved by applying four-dimensional scanning ultrafast electron microscopy (4D S-UEM). Charge carrier dynamics are spatially and temporally visualized on the surface of InGaN NW arrays before and after surface passivation with octadecylthiol (ODT). The time-resolved secondary electron images clearly demonstrate that carrier recombination on the NW surface is significantly slowed down after ODT treatment. This observation is fully supported by enhancement of the performance of the light emitting device. Direct observation of surface dynamics provides a profound understanding of the photophysical mechanisms on materials' surfaces and enables the formulation of effective surface trap state management strategies for the next generation of high-performance NW-based optoelectronic devices. PMID:26938476

  6. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λ(h1h2h3) = λ(111) with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ(111), the phase-velocity factor ν(λ) = λ(h1h2h3)/λ(111) becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In the reciprocal space, equal or unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  7. Helical 4D CT and Comparison with Cine 4D CT

    NASA Astrophysics Data System (ADS)

    Pan, Tinsu

    4D CT was one of the most important developments in radiation oncology in the last decade. Its early development in single-slice CT and commercialization in multi-slice CT has radically changed our practice in the radiation treatment of lung cancer, and has enabled stereotactic radiosurgery of early-stage lung cancer. In this chapter, we document the history of 4D CT development; detail the data-sufficiency condition governing 4D CT data collection; present the design of the commercial helical 4D CTs from Philips and Siemens; compare the differences between helical 4D CT and the GE cine 4D CT in data acquisition, slice thickness, acquisition time and workflow; review the respiratory monitoring devices; and examine the causes of image artifacts in 4D CT.

  8. 4-D imaging of seepage in earthen embankments with time-lapse inversion of self-potential data constrained by acoustic emissions localization

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.

    2015-02-01

    New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We present a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earthen dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-day full-scale levee (earthen embankment) failure test conducted in Booneschans, Netherlands, in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for localizing acoustic emissions in both time and space, and apply the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are used to build a set of time-invariant yet spatially varying model weights for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent, in terms of focused groundwater flow, with visual observations on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means of early detection and imaging of the development of embankment defects associated with concentrated seepage and internal erosion, and can be applied to detect various types of hydromechanical disturbances at larger scales.
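    The 4-D grid-search localization described above can be sketched in miniature: scan candidate source positions, estimate an origin time for each candidate, and keep the position with the smallest RMS arrival-time residual. The sensor layout, velocity, and event below are hypothetical illustrations, not values from the Booneschans test.

    ```python
    import math

    # Hypothetical sensor positions (x, y, z in metres) and an assumed
    # propagation velocity; neither is taken from the study.
    sensors = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 8.0, 0.0), (10.0, 8.0, 0.0)]
    velocity = 500.0  # m/s

    def locate(arrivals, grid_step=0.5):
        """Grid search over candidate source points: for each candidate, the
        origin time is the mean of (arrival - travel time); the winner
        minimises the RMS arrival-time residual."""
        best = (float("inf"), None)
        xs = [i * grid_step for i in range(21)]   # 0..10 m
        ys = [i * grid_step for i in range(17)]   # 0..8 m
        zs = [i * grid_step for i in range(9)]    # 0..4 m
        for x in xs:
            for y in ys:
                for z in zs:
                    tt = [math.dist((x, y, z), s) / velocity for s in sensors]
                    t0 = sum(a - t for a, t in zip(arrivals, tt)) / len(sensors)
                    rms = math.sqrt(sum((a - (t0 + t)) ** 2
                                        for a, t in zip(arrivals, tt)) / len(sensors))
                    if rms < best[0]:
                        best = (rms, (x, y, z))
        return best[1]

    # Synthetic event at (4, 3, 2) m with origin time 0.1 s:
    true_src, t0 = (4.0, 3.0, 2.0), 0.1
    obs = [t0 + math.dist(true_src, s) / velocity for s in sensors]
    print(locate(obs))  # recovers the grid point at the true source
    ```

    The same residual surface, evaluated per time window, is what would supply spatially varying weights to a subsequent self-potential inversion.
    
    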

  9. Non Local Spatial and Angular Matching: Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising.

    PubMed

    St-Jean, Samuel; Coupé, Pierrick; Descoteaux, Maxime

    2016-08-01

    Diffusion magnetic resonance imaging (MRI) datasets suffer from low signal-to-noise ratio (SNR), especially at high b-values. Yet data acquired at high b-values contain relevant information and are now of great interest for microstructure and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to a false and biased estimation of the diffusion parameters. Additionally, the use of in-plane acceleration techniques during the acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small 4D overlapping patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in vivo high-resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks and improves the reproducibility of tractography on the synthetic dataset. On the 1.2 mm high-resolution in vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition.
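    The bias-removal step mentioned above can be illustrated with the classic moment-based Rician correction: for Rician-distributed magnitude data, E[m²] = A² + 2σ², so an estimate of the underlying signal A is √max(m² − 2σ², 0). This is a simplified stand-in for the stabilisation framework NLSAM builds on, not the paper's exact method; all values are illustrative.

    ```python
    import math

    def rician_bias_correct(magnitudes, sigma):
        """Moment-based Rician bias removal: since E[m^2] = A^2 + 2*sigma^2,
        estimate the true signal as sqrt(max(m^2 - 2*sigma^2, 0)).
        A simplified stand-in for NLSAM's stabilisation step."""
        return [math.sqrt(max(m * m - 2.0 * sigma * sigma, 0.0)) for m in magnitudes]

    noisy = [12.0, 5.0, 2.0]   # hypothetical magnitude values
    print(rician_bias_correct(noisy, sigma=2.0))
    # High-SNR values are barely changed; values below the noise floor clamp to 0.
    ```
    
    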

  10. Novel use of 4D-CTA in imaging of intranidal aneurysms in an acutely ruptured arteriovenous malformation: is this the way forward?

    PubMed

    Chandran, Arun; Radon, Mark; Biswas, Shubhabrata; Das, Kumar; Puthuran, Mani; Nahser, Hans

    2016-09-01

    Ruptured arteriovenous malformation (AVM) is a frequent cause of intracranial hemorrhage. The presence of associated aneurysms, especially intranidal aneurysms, is considered to increase the risk of re-hemorrhage. We present two cases in which an intranidal aneurysm was demonstrated on four-dimensional (time-resolved) CT angiography (4D-CTA), with the findings confirmed by digital subtraction angiography (catheter arterial angiography). This is the first report of an intranidal aneurysm demonstrated by 4D-CTA. 4D-CTA can offer a comprehensive evaluation of the angioarchitecture and flow dynamics of an AVM for appropriate classification and management. PMID:26180096

  11. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

    INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions that is pivotal to Numerical Weather Prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Of the 18 sounder channels, radiances from 9 channels are selected for assimilation depending on the relevance of the information in each channel. The first three high-peaking channels, the CO2 absorption channels and the three water vapor channels (channels 10, 11, and 12) are assimilated both over land and ocean, whereas the window channels (channels 6, 7, and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality, on the assumption that the observed satellite radiances are free from calibration errors and the short-range forecast provided by the NWP model is free from systematic errors. Innovations (observation minus forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time, scan angle, and instrument degradation, an accurate bias correction algorithm is important for the assimilation of INSAT-3D sounder radiances. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels and the impact of the bias-corrected radiances in the data assimilation system, particularly over India and the surrounding oceanic regions.
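    The innovation-based monitoring described above can be sketched with a minimal static bias correction: compute per-channel innovations (observation minus forecast), estimate the bias as their mean, and subtract it. Operational variational bias correction (with air-mass and scan-angle predictors) is considerably more involved; channels and brightness temperatures below are synthetic, not INSAT-3D data.

    ```python
    # Minimal static bias-correction sketch: per-channel mean innovation
    # (observation minus forecast brightness temperature, K) is removed
    # before assimilation. All values are synthetic.
    obs = {"ch10": [252.1, 251.8, 252.4], "ch11": [244.9, 245.3, 245.0]}
    fcst = {"ch10": [251.0, 250.9, 251.2], "ch11": [245.1, 245.4, 245.3]}

    def bias_corrected_innovations(obs, fcst):
        out = {}
        for ch in obs:
            innov = [o - f for o, f in zip(obs[ch], fcst[ch])]
            bias = sum(innov) / len(innov)       # crude scan-averaged bias estimate
            out[ch] = [d - bias for d in innov]  # innovations after correction
        return out

    corrected = bias_corrected_innovations(obs, fcst)
    # After correction the mean innovation in each channel is ~0,
    # which is what the before/after monitoring plots check.
    ```
    
    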

  12. Improving 4D plan quality for PBS-based liver tumour treatments by combining online image guided beam gating with rescanning.

    PubMed

    Zhang, Ye; Knopf, Antje-Christin; Weber, Damien Charles; Lomax, Antony John

    2015-10-21

    Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method for mitigating tumour motion in conventional radiotherapy, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs with motion information extracted from 4DMRI. The value of 4DCT(MRI) is its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing patterns between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beam's eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm) with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own would result in much longer treatment times for such cases.

  13. Improving 4D plan quality for PBS-based liver tumour treatments by combining online image guided beam gating with rescanning

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Knopf, Antje-Christin; Weber, Damien Charles; Lomax, Antony John

    2015-10-01

    Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method for mitigating tumour motion in conventional radiotherapy, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs with motion information extracted from 4DMRI. The value of 4DCT(MRI) is its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing patterns between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beam's eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm) with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own would result in much longer treatment times for such cases.
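    The duty-cycle penalty of amplitude gating quantified above can be illustrated with a toy breathing model. A regular sinusoid is assumed here purely for illustration; the study's point is precisely that real, variable breathing degrades the duty cycle further. All numbers below (10 mm peak amplitude, 0.25 Hz breathing, 25 Hz sampling) are assumptions.

    ```python
    import math

    # Synthetic breathing trace (mm), 0 = end-exhale:
    # 10 mm peak amplitude, 0.25 Hz breathing, sampled at 25 Hz for 40 s.
    trace = [5.0 - 5.0 * math.cos(2 * math.pi * 0.25 * t / 25.0) for t in range(1000)]

    def duty_cycle(trace, window_mm):
        """Fraction of time the target amplitude lies inside the amplitude-based
        gating window; its inverse is the factor by which delivery time stretches."""
        return sum(1 for a in trace if a <= window_mm) / len(trace)

    for w in (10.0, 5.0, 3.0):
        dc = duty_cycle(trace, w)
        print(f"{w:4.0f} mm window: duty cycle {dc:.2f}, treatment time x{1 / dc:.2f}")
    ```

    Even for this idealised trace the 3 mm window already beams on for only a minority of the cycle; irregular breathing pushes the effective duty cycle toward the 10% the study reports.
    
    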

  14. Is a Clinical Target Volume (CTV) Necessary in the Treatment of Lung Cancer in the Modern Era Combining 4-D Imaging and Image-guided Radiotherapy (IGRT)?

    PubMed Central

    Kilburn, Jeremy M; Lucas, John T; Soike, Michael H; Ayala-Peacock, Diandra N; Blackstock, Arthur W; Hinson, William H; Munley, Michael T; Petty, William J

    2016-01-01

    Objective: We hypothesized that omission of clinical target volumes (CTV) in lung cancer radiotherapy would not compromise control, by determining retrospectively whether the addition of a CTV would have encompassed the site of failure. Methods: Stage II-III patients were treated from 2009 to 2012 with daily cone-beam imaging and a 5 mm planning target volume (PTV) without a CTV. PTVs were expanded 1 cm and termed CTVretro. Recurrences were scored as 1) within the PTV, 2) within CTVretro, or 3) outside the PTV. Locoregional control (LRC), distant control (DC), progression-free survival (PFS), and overall survival (OS) were estimated. Results: Among 110 patients, 57% were Stage IIIA, 32% IIIB, 4% IIA, and 7% IIB. Eighty-six percent of Stage III patients received chemotherapy. Median dose was 70 Gy (45-74 Gy) and fraction size ranged from 1.5-2.7 Gy. Median follow-up was 12 months, median OS was 22 months (95% CI 19-30 months), and LRC at two years was 69%. Fourteen local and eight regional events were scored, with two CTVretro failures, equating to a two-year CTV failure-free survival of 98%. Conclusion: Omission of a 1 cm CTV expansion appears feasible based on only two events among 110 patients and should be considered in radiation planning. PMID:26929893

  15. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
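    The patent's step of growing and interconnecting High Amplitude Event (HAE) regions is, at its core, thresholding followed by connected-component growth. The sketch below applies that idea to a hypothetical 2D amplitude slice (the patent works in 3-D and across time-lapse surveys); the array values and threshold are illustrative only.

    ```python
    # Toy HAE extraction: threshold a hypothetical 2D amplitude slice, then
    # grow 4-connected high-amplitude regions by flood fill.
    amp = [[0.1, 0.2, 0.9, 0.8],
           [0.1, 0.7, 0.9, 0.1],
           [0.0, 0.6, 0.2, 0.1],
           [0.0, 0.1, 0.1, 0.7]]

    def grow_regions(amp, thresh=0.5):
        rows, cols = len(amp), len(amp[0])
        seen, regions = set(), []
        for r in range(rows):
            for c in range(cols):
                if amp[r][c] >= thresh and (r, c) not in seen:
                    stack, region = [(r, c)], []
                    seen.add((r, c))
                    while stack:                      # flood fill (4-connected)
                        i, j = stack.pop()
                        region.append((i, j))
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < rows and 0 <= nj < cols
                                    and amp[ni][nj] >= thresh and (ni, nj) not in seen):
                                seen.add((ni, nj))
                                stack.append((ni, nj))
                    regions.append(sorted(region))
        return regions

    print(grow_regions(amp))  # one connected "plumbing" region plus one isolated cell
    ```

    Interconnecting such regions across a 3-D volume yields the plumbing networks the patent uses to trace migration pathways; differencing them between time-lapse (4-D) surveys highlights drainage.
    
    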

  16. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.

  17. Denoising Algorithm for the Pixel-Response Non-Uniformity Correction of a Scientific CMOS Under Low Light Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Changmiao; Bai, Yang; Tang, Ping

    2016-06-01

    We present a denoising algorithm for the pixel-response non-uniformity correction of a scientific complementary metal-oxide-semiconductor (CMOS) image sensor that captures images under extremely low-light conditions. By analyzing integrating-sphere experimental data, we develop a pixel-by-pixel flat-field denoising algorithm to remove this fixed-pattern noise, which occurs under low-light conditions and at high pixel-response readouts. After the denoising algorithm is applied, the response of the CMOS imaging system to a uniform radiance field shows a high level of spatial uniformity.
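    Pixel-by-pixel flat-field correction of the kind described can be sketched as follows: each pixel's response to the uniform (integrating-sphere) radiance field normalises subsequent frames. The 3x3 "sensor", gain map, and raw frame are hypothetical; the paper's actual calibration pipeline is more elaborate.

    ```python
    # Flat-field (PRNU) correction sketch for a hypothetical 3x3 sensor:
    # corrected = raw * mean(flat) / flat, pixel by pixel.
    flat = [[0.90, 1.00, 1.10],
            [0.95, 1.05, 1.00],
            [1.00, 0.98, 1.02]]   # per-pixel response to a uniform field
    raw = [[ 90.0, 100.0, 110.0],
           [ 95.0, 105.0, 100.0],
           [100.0,  98.0, 102.0]]  # a frame whose non-uniformity follows the flat

    mean_flat = sum(sum(row) for row in flat) / 9.0

    corrected = [[raw[i][j] * mean_flat / flat[i][j] for j in range(3)]
                 for i in range(3)]
    # A frame proportional to the flat corrects to a spatially uniform image,
    # which is exactly the uniformity check reported in the abstract.
    ```
    
    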

  18. Constrained reconstructions for 4D intervention guidance

    NASA Astrophysics Data System (ADS)

    Kuntz, J.; Flach, B.; Kueres, R.; Semmler, W.; Kachelrieß, M.; Bartling, S.

    2013-05-01

    Image-guided interventions are an increasingly important part of clinical minimally invasive procedures. However, up to now they cannot be performed under 4D (3D + time) guidance because of the exceedingly high x-ray dose. In this work we investigate the applicability of compressed sensing reconstructions of highly undersampled CT datasets, combined with the incorporation of prior images, to yield low-dose 4D intervention guidance. We present a new reconstruction scheme, prior image dynamic interventional CT (PrIDICT), that accounts for specific image features in intervention guidance, and compare it to PICCS and ASD-POCS. The optimal parameters for the dose per projection and the number of projections per reconstruction are determined in phantom simulations and measurements. In vivo experiments in six pigs are performed in a cone-beam CT; measured doses are compared to the current gold standard for intervention guidance, represented by a clinical fluoroscopy system. Phantom studies show maximum image quality for identical overall doses in the range of 14 to 21 projections per reconstruction. In vivo studies reveal that interventional materials can be followed in 4D visualization and that PrIDICT, compared to PICCS and ASD-POCS, shows superior reconstruction results and fewer artifacts in the periphery, with doses on the order of biplane fluoroscopy. These results suggest that 4D intervention guidance can be realized with today's flat detector and gantry systems using the reconstruction scheme presented herein.

  19. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with an additive white Gaussian noise (AWGN) model. Such test data does not correspond to today's real-world image data taken with a digital camera. Using unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics and show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
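    Signal-dependent wavelet hard thresholding of the kind described can be sketched with a single-level Haar transform: each detail coefficient is kept or zeroed against a threshold looked up from the local signal level. The noise curve `sigma_lut` below is an assumed placeholder for the camera-processing-aware noise model the paper derives; all data are synthetic.

    ```python
    import math

    def haar_1d(x):
        """One level of the Haar transform: (approximation, detail) pairs."""
        a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        return a, d

    def inv_haar_1d(a, d):
        x = []
        for ai, di in zip(a, d):
            x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
        return x

    def sigma_lut(level):
        # Hypothetical look-up table: noise std as a function of signal level,
        # standing in for the measured, processing-dependent noise curve.
        return 0.5 + 0.01 * level

    def denoise(x, k=3.0):
        a, d = haar_1d(x)
        # Hard-threshold each detail coefficient against k * sigma at the
        # corresponding approximation (signal) level.
        d = [di if abs(di) > k * sigma_lut(abs(ai)) else 0.0
             for ai, di in zip(a, d)]
        return inv_haar_1d(a, d)

    signal = [10, 10, 10, 10, 50, 50, 50, 50]
    noise = [0.3, -0.2, 0.1, -0.4, 0.5, -0.1, 0.2, -0.3]
    noisy = [v + n for v, n in zip(signal, noise)]
    print([round(v, 2) for v in denoise(noisy)])
    ```

    Because the threshold follows the signal level, bright regions (with higher camera noise) are smoothed more aggressively than dark ones, which is the key gain over a fixed AWGN threshold.
    
    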

  20. Shearlet-based total variation diffusion for denoising.

    PubMed

    Easley, Glenn R; Labate, Demetrio; Colonna, Flavia

    2009-02-01

    We propose a shearlet formulation of the total variation (TV) method for denoising images. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. Common approaches in combining wavelet-like representations such as curvelets with TV or diffusion methods aim at reducing Gibbs-type artifacts after obtaining a nearly optimal estimate. We show that it is possible to obtain much better estimates from a shearlet representation by constraining the residual coefficients using a projected adaptive total variation scheme in the shearlet domain. We also analyze the performance of a shearlet-based diffusion method. Numerical examples demonstrate that these schemes are highly effective at denoising complex images and outperform a related method based on the use of the curvelet transform. Furthermore, the shearlet-TV scheme requires far fewer iterations than similar competitors. PMID:19095539

  1. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomy, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to the insignificant photoelectric interaction component and the noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers, as well as local denoising methods, has not significantly improved soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block-matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients, and the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and a saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than those of a local denoising method using anisotropic diffusion.
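    The CNR figures quoted above follow from a simple definition: contrast (difference of region means) divided by background noise. The abstract does not spell out its exact formula, so the common |mean(ROI) − mean(background)| / std(background) form is assumed here, with synthetic pixel samples.

    ```python
    import statistics

    def cnr(roi, background):
        """Contrast-to-noise ratio: |mean difference| over background std
        (one common definition; the paper's exact formula is not stated)."""
        return (abs(statistics.mean(roi) - statistics.mean(background))
                / statistics.pstdev(background))

    # Hypothetical pixel samples before and after denoising: the contrast is
    # unchanged, but denoising shrinks the background spread, raising CNR.
    roi_before, bg_before = [105, 103, 104, 106], [100, 96, 104, 100]
    roi_after,  bg_after  = [105, 104, 105, 104], [100, 99, 101, 100]
    print(round(cnr(roi_before, bg_before), 2),
          round(cnr(roi_after, bg_after), 2))
    ```
    
    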

  2. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131

  3. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    PubMed

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilovoltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI), in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The hospital ethics committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-kVp, 140-kVp, and mixed images. An iterative three-dimensional edge-preserving smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed in terms of contrast-to-noise ratio (CNR), and their diagnostic performance relative to MRI and their infarct volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-kVp, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three image types (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with low-kVp images can improve CNR, sensitivity, and accuracy in LIE-CT. PMID:26202159

  4. Fractional Diffusion, Low Exponent Lévy Stable Laws, and ‘Slow Motion’ Denoising of Helium Ion Microscope Nanoscale Imagery

    PubMed Central

    Carasso, Alfred S.; Vladár, András E.

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising. PMID:26900518
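    The fractional-diffusion step described above amounts to attenuating each Fourier mode by exp(−t·|ω|^α), with α < 2 giving the low-exponent (Lévy-stable) kernel and α = 2 the ordinary Gaussian heat kernel. The 1D sketch below uses a naive DFT for self-containment (the paper works on 2D images with FFTs); the signal, t, and α are illustrative assumptions.

    ```python
    import cmath
    import math

    def dft(x):
        n = len(x)
        return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
                for k in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n)
                    for k in range(n)).real / n for j in range(n)]

    def fractional_diffusion(x, t, alpha):
        """'Slow motion' smoothing step: attenuate mode k by exp(-t * |w|^alpha).
        alpha < 2 yields the fractional (Levy-stable) kernel; alpha = 2 is Gaussian.
        1D stand-in for the paper's 2D FFT-based scheme."""
        n = len(x)
        X = dft(x)
        for k in range(n):
            w = min(k, n - k) * 2 * math.pi / n   # angular frequency (periodic)
            X[k] *= math.exp(-t * abs(w) ** alpha)
        return idft(X)

    noisy = [0, 1, 0, 5, 0, 1, 0, 1]
    smoothed = fractional_diffusion(noisy, t=0.2, alpha=0.5)
    # A small t gives one gentle "slow motion" step; iterating the step
    # (increasing t) trades noise suppression against fine-detail loss.
    ```

    The DC mode (k = 0) is untouched, so the mean intensity is preserved exactly; for real use one would swap the naive DFT for an FFT.
    
    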

  5. 4D (x-y-z-t) imaging of thick biological samples by means of Two-Photon inverted Selective Plane Illumination Microscopy (2PE-iSPIM)

    PubMed Central

    Lavagnino, Zeno; Sancataldo, Giuseppe; d’Amora, Marta; Follert, Philipp; De Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca

    2016-01-01

    In the last decade, light sheet fluorescence microscopy techniques such as selective plane illumination microscopy (SPIM) have become well-established methods for developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues, since the common sample mounting procedure, based on gel embedding, can interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in sample mounting, dispensing with the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The proposed system represents an optimal platform for tissue imaging and paves the way for the application of light sheet microscopy to a wider range of samples, including those that must be mounted on non-transparent surfaces. PMID:27033347

  6. 4D (x-y-z-t) imaging of thick biological samples by means of Two-Photon inverted Selective Plane Illumination Microscopy (2PE-iSPIM).

    PubMed

    Lavagnino, Zeno; Sancataldo, Giuseppe; d'Amora, Marta; Follert, Philipp; De Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca

    2016-01-01

In the last decade, light sheet fluorescence microscopy techniques such as selective plane illumination microscopy (SPIM) have become well-established methods in developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues, since the common sample mounting procedure, based on gel embedding, can interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in sample mounting, dispensing with the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The proposed system represents an optimal platform for tissue imaging and paves the way for the application of light sheet microscopy to a wider range of samples, including those that must be mounted on non-transparent surfaces. PMID:27033347

  7. 4D (x-y-z-t) imaging of thick biological samples by means of Two-Photon inverted Selective Plane Illumination Microscopy (2PE-iSPIM)

    NASA Astrophysics Data System (ADS)

    Lavagnino, Zeno; Sancataldo, Giuseppe; D’Amora, Marta; Follert, Philipp; de Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca

    2016-04-01

In the last decade, light sheet fluorescence microscopy techniques such as selective plane illumination microscopy (SPIM) have become well-established methods in developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues, since the common sample mounting procedure, based on gel embedding, can interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in sample mounting, dispensing with the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The proposed system represents an optimal platform for tissue imaging and paves the way for the application of light sheet microscopy to a wider range of samples, including those that must be mounted on non-transparent surfaces.

  8. Evaluation of the cone beam CT for internal target volume localization in lung stereotactic radiotherapy in comparison with 4D MIP images

    SciTech Connect

    Wang, Lu; Chen, Xiaoming; Lin, Mu-Han; Lin, Teh; Fan, Jiajin; Jin, Lihui; Ma, Charlie M.; Xue, Jun

    2013-11-15

Purpose: To investigate whether the three-dimensional cone-beam CT (CBCT) is clinically equivalent to the four-dimensional computed tomography (4DCT) maximum intensity projection (MIP) reconstructed images for internal target volume (ITV) localization in image-guided lung stereotactic radiotherapy. Methods: A ball-shaped polystyrene phantom with built-in cube, sphere, and cone of known volumes was attached to a motor-driven platform, which simulates a sinusoidal movement with changeable motion amplitude and frequency. Target motion was simulated in the patient in a superior-inferior (S-I) direction with three motion periods and 2 cm peak-to-peak amplitude. The Varian onboard Exact-Arms kV CBCT system and the GE LightSpeed four-slice CT integrated with the respiratory-position-management 4DCT scanner were used to scan the moving phantom. MIP images were generated from the 4DCT images. The clinical equivalence of the two sets of images was evaluated by comparing the extreme locations of the moving objects along the motion direction, the centroid position of the ITV, and the ITV volumes that were contoured automatically by Velocity or calculated with an imaging gradient method. The authors compared the ITV volumes determined by the above methods with those theoretically predicted by taking into account the physical object dimensions and the motion amplitudes. The extreme locations were determined by the gradient method along the S-I axis through the center of the object. The centroid positions were determined by autocenter functions. The effect of motion period on the volume sizes was also studied. Results: It was found that the extreme locations of the objects determined from the two image modalities agreed with each other satisfactorily. They were not affected by the motion period. The average difference between the two modalities in the extreme locations was 0.68% for the cube, 1.35% for the sphere, and 0.5% for the cone, respectively. 
The maximum difference in the

  9. Abdominal and pancreatic motion correlation using 4D CT, 4D transponders, and a gating belt.

    PubMed

    Betancourt, Ricardo; Zou, Wei; Plastaras, John P; Metz, James M; Teo, Boon-Keng; Kassaee, Alireza

    2013-01-01

The correlation between pancreatic and external abdominal motion due to respiration was investigated in two patients. These studies utilized four-dimensional computed tomography (4D CT), a four-dimensional (4D) electromagnetic transponder system, and a gating belt system. One 4D CT study was performed during simulation to quantify the pancreatic motion using computed tomography images at eight breathing phases. Motion under free breathing and breath-hold was analyzed for the 4D electromagnetic transponder system and the gating belt system during treatment. A linear curve was fitted for all data sets, and correlation factors were evaluated between the 4D electromagnetic transponder system and the gating belt system data. The 4D CT study demonstrated a strong correlation between the external marker and the pancreatic motion, with R-square values larger than 0.8 for the inferior-superior (inf-sup) direction. The relative pressure from the belt gating system also correlated well with the 4D electromagnetic transponder system's motion in the anterior-posterior (ant-post) and inf-sup directions, with correlation coefficients of -0.93 and 0.76, while the lateral direction had a correlation coefficient of only 0.03. Based on our limited study, external surrogates can be used as predictors of pancreatic motion in the inf-sup and ant-post directions. Although the correlation is low in the lateral direction, the motion there is significantly smaller. In conclusion, an appropriate treatment delivery can be achieved for pancreatic cancer when an internal tracking system, such as the 4D electromagnetic transponder system, is unavailable. PMID:23652242
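The linear fit and correlation analysis described above can be sketched in a few lines; the function name and traces are illustrative, not from the study:

```python
import numpy as np

def motion_correlation(external, internal):
    """Fit a linear curve relating an external surrogate motion trace to
    the internal (target) motion trace and report the slope, intercept,
    and Pearson correlation coefficient, as in the analysis above.
    Variable names are illustrative."""
    slope, intercept = np.polyfit(external, internal, 1)
    r = np.corrcoef(external, internal)[0, 1]
    return slope, intercept, r
```

A correlation coefficient near ±1 (as reported for the ant-post and inf-sup directions) justifies using the surrogate as a motion predictor; a value near 0 (the lateral direction) does not.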

  10. 4D imaging of velocity variation of the underground by single ultra-stable seismic source and multi-receivers (Invited)

    NASA Astrophysics Data System (ADS)

    Kasahara, J.; Hasada, Y.; Tsuruga, K.; Fujii, N.

    2010-12-01

We propose a seismological method to construct images of time-variable zone(s) in the underground, such as earthquake focal zones, volcanic magma-intrusion zones, oil and gas reservoirs, and CO2 sequestration zones. If fluid flow controls earthquake generation, a sudden change of physical state due to fluid migration may indicate a high probability of future earthquake events. Growth of a magma body in a volcano may likewise change seismic reflections from the volcanic zone, and injection of CO2 into the ground may cause a decrease of seismic velocity in the injected zone. We use an extremely stable seismic system (ACROSS: Accurately Controlled and Routinely Operated Signal System) to monitor these zones continuously. The seismic ACROSS source is a non-destructive seismic source that can be used to continuously monitor changes in a target zone. If we assume the seismic source signature does not change during a certain time frame, we can compare the waveforms between any observation periods. Using a single seismic source and multiple receivers, we back-propagated the differential waveforms recorded at the receivers before and after a Vp and Vs change. We carried out simulations for a subduction zone and for small-scale examples such as a CO2 sequestration zone. In this talk, we present the change of the image of a CO2 sequestration zone over time. Assuming we know the velocity structure of the target zone and that there is no or very small velocity change in the near-surface zone, we may image the location of the time-variable zone by appropriate placement of the seismic source(s). Multiple seismic sources can improve the image. The results may be applied to earthquake forecasting in the subducting plate, forecasting of volcanic eruptions, and oil and gas reservoir EOR.

  11. Spatio-Temporal Multiscale Denoising of Fluoroscopic Sequence.

    PubMed

    Amiot, Carole; Girard, Catherine; Chanussot, Jocelyn; Pescatore, Jeremie; Desvignes, Michel

    2016-06-01

In the past 20 years, a wide range of complex fluoroscopically guided procedures have shown considerable growth. Biological effects of the exposure (radiation-induced burns, cancer) motivate reducing the dose during the intervention, for the safety of patients and medical staff. However, when the dose is reduced, image quality decreases, with a high level of noise and a very low contrast. Efficient restoration and denoising algorithms should overcome this drawback. We propose a spatio-temporal filter operating in a multiscale space. This filter relies on first-order, motion-compensated, recursive temporal denoising. Temporal high-frequency content is first detected and then matched over time to allow for strong denoising along the temporal axis. We study this filter in the curvelet domain and in the dual-tree complex wavelet domain, and compare the results to state-of-the-art methods. Quantitative and qualitative analysis on both synthetic and real fluoroscopic sequences demonstrates that the proposed filter allows a substantial dose reduction. PMID:26812705
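The core of a first-order recursive temporal filter can be sketched as below. This is a simplified stand-in for the method above: it omits the motion compensation and the multiscale (curvelet/wavelet) decomposition, and simply suspends filtering where the frame difference suggests motion. Parameter names and values are illustrative:

```python
import numpy as np

def recursive_temporal_filter(frames, alpha=0.2, motion_thresh=30.0):
    """First-order recursive temporal denoising sketch: each output frame
    is a running blend y[t] = a*x[t] + (1-a)*y[t-1]. Where the
    frame-to-frame difference exceeds motion_thresh, the blend weight is
    set to 1 so moving content passes through unfiltered (a crude proxy
    for the motion handling in the actual method)."""
    out = [frames[0].astype(float)]
    for f in frames[1:]:
        f = f.astype(float)
        prev = out[-1]
        diff = np.abs(f - prev)
        a = np.where(diff > motion_thresh, 1.0, alpha)  # pass motion through
        out.append(a * f + (1.0 - a) * prev)
    return np.stack(out)
```

On static regions the recursion averages noise over many frames (steady-state noise variance roughly σ²·α/(2−α)), which is what permits the dose reduction the paper reports.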

  12. Accuracy and Utility of Deformable Image Registration in {sup 68}Ga 4D PET/CT Assessment of Pulmonary Perfusion Changes During and After Lung Radiation Therapy

    SciTech Connect

    Hardcastle, Nicholas; Hofman, Michael S.; Hicks, Rodney J.; Callahan, Jason; Kron, Tomas; MacManus, Michael P.; Ball, David L.; Jackson, Price; Siva, Shankar

    2015-09-01

Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: {sup 68}Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, median distance to agreement between lung contours reduced modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). Distance between anatomic features reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss in lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration
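The Dice score used above to quantify contour overlap is straightforward to compute; a minimal version for binary masks (the companion distance-to-agreement metric is omitted here):

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    Dice = 2|A intersect B| / (|A| + |B|), ranging from 0 (disjoint)
    to 1 (identical). Two empty masks are treated as perfect agreement."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DIR that improves median Dice by ~0.05, as reported, is moving the propagated lung contour measurably closer to the reference contour.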

  13. ASTER and USGS EROS emergency imaging for hurricane disasters: Chapter 4D in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Duda, Kenneth A.; Abrams, Michael

    2007-01-01

    Satellite images have been extremely useful in a variety of emergency response activities, including hurricane disasters. This article discusses the collaborative efforts of the U.S. Geological Survey (USGS), the Joint United States-Japan Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Science Team, and the National Aeronautics and Space Administration (NASA) in responding to crisis situations by tasking the ASTER instrument and rapidly providing information to initial responders. Insight is provided on the characteristics of the ASTER systems, and specific details are presented regarding Hurricane Katrina support.

  14. Nanowires: Enhanced Optoelectronic Performance of a Passivated Nanowire-Based Device: Key Information from Real-Space Imaging Using 4D Electron Microscopy (Small 17/2016).

    PubMed

    Khan, Jafar I; Adhikari, Aniruddha; Sun, Jingya; Priante, Davide; Bose, Riya; Shaheen, Basamat S; Ng, Tien Khee; Zhao, Chao; Bakr, Osman M; Ooi, Boon S; Mohammed, Omar F

    2016-05-01

    Selective mapping of surface charge carrier dynamics of InGaN nanowires before and after surface passivation with octadecylthiol (ODT) is reported by O. F. Mohammed and co-workers on page 2313, using scanning ultrafast electron microscopy. In a typical experiment, the 343 nm output of the laser beam is used to excite the microscope tip to generate pulsed electrons for probing, and the 515 nm output is used as a clocking excitation pulse to initiate dynamics. Time-resolved images demonstrate clearly that carrier recombination is significantly slowed after ODT treatment, which supports the efficient removal of surface trap states. PMID:27124006

  15. 4D analysis of the microstructural evolution of Si-based electrodes during lithiation: Time-lapse X-ray imaging and digital volume correlation

    NASA Astrophysics Data System (ADS)

    Paz-Garcia, J. M.; Taiwo, O. O.; Tudisco, E.; Finegan, D. P.; Shearing, P. R.; Brett, D. J. L.; Hall, S. A.

    2016-07-01

    Silicon is a promising candidate to substitute or complement graphite as anode material in Li-ion batteries due, mainly, to its high energy density. However, the lithiation/delithiation processes of silicon particles are inherently related to drastic volume changes which, within a battery's physically constrained case, can induce significant deformation of the fundamental components of the battery that can eventually cause it to fail. In this work, we use non-destructive time-lapse X-ray imaging techniques to study the coupled electrochemo-mechanical phenomena in Li-ion batteries. We present X-ray computed tomography data acquired at different times during the first lithiation of custom-built silicon-lithium battery cells. Microstructural volume changes have been quantified using full 3D strain field measurements from digital volume correlation analysis. Furthermore, the extent of lithiation of silicon particles has been quantified in 3D from the grey-scale of the tomography images. Correlation of the volume expansion and grey-scale changes over the silicon-based electrode volume indicates that the process of lithiation is kinetically affected by the reaction at the Si/LixSi interface.

  16. R4D on Ramp

    NASA Technical Reports Server (NTRS)

    1956-01-01

    This Photograph taken in 1956 shows the first of three R4D Skytrain aircraft on the ramp behind the NACA High-Speed Flight Station. Note the designation 'United States NACA' on the side of the aircraft. NACA stood for the National Advisory Committee for Aeronautics, which evolved into the National Aeronautics and Space Administration (NASA) in 1958. The R4D Skytrain was one of the early workhorses for NACA and NASA at Edwards Air Force Base, California, from 1952 to 1984. Designated the R4D by the U.S. Navy, the aircraft was called the C-47 by the U.S. Army and U.S. Air Force and the DC-3 by its builder, Douglas Aircraft. Nearly everyone called it the 'Gooney Bird.' In 1962, Congress consolidated the military-service designations and called all of them the C-47. After that date, the R4D at NASA's Flight Research Center (itself redesignated the Dryden Flight Research Center in 1976) was properly called a C-47. Over the 32 years it was used at Edwards, three different R4D/C-47s were used to shuttle personnel and equipment between NACA/NASA Centers and test locations throughout the country and for other purposes. One purpose was landing on 'dry' lakebeds used as alternate landing sites for the X-15, to determine whether their surfaces were hard (dry) enough for the X-15 to land on in case an emergency occurred after its launch and before it could reach Rogers Dry Lake at Edwards Air Force Base. The R4D/C-47 served a variety of needs, including serving as the first air-tow vehicle for the M2-F1 lifting body (which was built of mahogany plywood). The C-47 (as it was then called) was used for 77 tows before the M2-F1 was retired for more advanced lifting bodies that were dropped from the NASA B-52 'Mothership.' The R4D also served as a research aircraft. It was used to conduct early research on wing-tip-vortex flow visualization as well as checking out the NASA Uplink Control System. 
The first Gooney Bird was at the NACA High-Speed Flight Research Station (now the Dryden

  17. GLMdenoise: a fast, automated technique for denoising task-based fMRI data.

    PubMed

    Kay, Kendrick N; Rokem, Ariel; Winawer, Jonathan; Dougherty, Robert F; Wandell, Brian A

    2013-01-01

    In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested. PMID:24381539
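The noise-regressor derivation described above (initial GLM fit, identification of task-unrelated voxels, PCA on their time series) can be sketched as follows. This is a simplified illustration, not the GLMdenoise implementation: the cross-validated selection of the number of principal components is omitted, and the noise-pool threshold `r2_thresh` is an assumed parameter:

```python
import numpy as np

def derive_noise_regressors(Y, X, n_pcs=3, r2_thresh=0.05):
    """Sketch of the GLMdenoise noise-regressor step. Voxels whose initial
    GLM fit explains almost no task variance (R^2 <= r2_thresh) form a
    noise pool; the top principal components of their demeaned time
    series serve as nuisance regressors.
    Y: (time, voxels) data; X: (time, regressors) design matrix."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # initial model fit
    resid = Y - X @ beta
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2, axis=0)
    r2 = 1.0 - np.sum(resid ** 2, axis=0) / np.maximum(ss_tot, 1e-12)
    pool = Y[:, r2 <= r2_thresh]                   # task-unrelated voxels
    pool = pool - pool.mean(axis=0)
    U, s, _ = np.linalg.svd(pool, full_matrices=False)  # PCA via SVD
    return U[:, :n_pcs]                            # (time, n_pcs) regressors
```

The returned columns would then be appended to the design matrix for the final GLM fit; in the actual method, `n_pcs` is chosen by cross-validation rather than fixed.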

  18. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

The scattering noise artifact resulting from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases image quality and lessens the accuracy of diagnosis. To improve the image quality of low-dose CT imaging, statistical filtering is effective for noise reduction. However, performing image filtering and enhancement throughout the entire reconstruction process can be challenging because of the high-performance computing it requires. The general reconstruction algorithm for CBCT data is filtered back-projection, which for a volume of 512×512×512 takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a platform suitable for accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU, utilizing the compute unified device architecture (CUDA) platform, and implement CBCT reconstruction based on the CUDA technique. Finally, we evaluate our implementation on clinical tooth data sets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.
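The transform–threshold–invert idea behind the wavelet denoising step can be illustrated with a one-level 2D Haar transform. This CPU sketch only demonstrates the principle; the paper's GPU/CUDA implementation, wavelet choice, and threshold selection are not reproduced, and the threshold value here is illustrative:

```python
import numpy as np

def haar_soft_denoise(img, thresh=20.0):
    """One-level 2D Haar wavelet soft-threshold denoising sketch.
    Detail subbands (LH, HL, HH) are soft-thresholded; the coarse LL
    subband is kept. Image sides are assumed even."""
    a = img.astype(float)
    # Analysis: average/difference along rows, then along columns
    lo_r = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, ::2] - a[:, 1::2]) / 2.0
    ll = (lo_r[::2] + lo_r[1::2]) / 2.0
    lh = (lo_r[::2] - lo_r[1::2]) / 2.0
    hl = (hi_r[::2] + hi_r[1::2]) / 2.0
    hh = (hi_r[::2] - hi_r[1::2]) / 2.0
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)  # shrink detail coefficients
    # Synthesis: invert the column step, then the row step
    lo_r2 = np.empty_like(lo_r); hi_r2 = np.empty_like(hi_r)
    lo_r2[::2], lo_r2[1::2] = ll + lh, ll - lh
    hi_r2[::2], hi_r2[1::2] = hl + hh, hl - hh
    out = np.empty_like(a)
    out[:, ::2], out[:, 1::2] = lo_r2 + hi_r2, lo_r2 - hi_r2
    return out
```

Because every pixel's subband coefficients are computed independently, this transform maps naturally onto the per-thread parallelism a CUDA kernel provides.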

  19. Observer performance for adaptive, image-based denoising and filtered back projection compared to scanner-based iterative reconstruction for lower dose CT enterography

    PubMed Central

    Fletcher, Joel G.; Hara, Amy K.; Fidler, Jeff L.; Silva, Alvin C.; Barlow, John M.; Carter, Rickey E.; Bartley, Adam; Shiung, Maria; Holmes, David R.; Weber, Nicolas K.; Bruining, David H.; Yu, Lifeng; McCollough, Cynthia H.

    2015-01-01

    Purpose The purpose of this study was to compare observer performance for detection of intestinal inflammation for low-dose CT enterography (LD-CTE) using scanner-based iterative reconstruction (IR) vs. vendor-independent, adaptive image-based noise reduction (ANLM) or filtered back projection (FBP). Methods Sixty-two LD-CTE exams were performed. LD-CTE images were reconstructed using IR, ANLM, and FBP. Three readers, blinded to image type, marked intestinal inflammation directly on patient images using a specialized workstation over three sessions, interpreting one image type/patient/session. Reference standard was created by a gastroenterologist and radiologist, who reviewed all available data including dismissal Gastroenterology records, and who marked all inflamed bowel segments on the same workstation. Reader and reference localizations were then compared. Non-inferiority was tested using Jackknife free-response ROC (JAFROC) figures of merit (FOM) for ANLM and FBP compared to IR. Patient-level analyses for the presence or absence of inflammation were also conducted. Results There were 46 inflamed bowel segments in 24/62 patients (CTDIvol interquartile range 6.9–10.1 mGy). JAFROC FOM for ANLM and FBP were 0.84 (95% CI 0.75–0.92) and 0.84 (95% CI 0.75–0.92), and were statistically non-inferior to IR (FOM 0.84; 95% CI 0.76–0.93). Patient-level pooled confidence intervals for sensitivity widely overlapped, as did specificities. Image quality was rated as better with IR and AMLM compared to FBP (p < 0.0001), with no difference in reading times (p = 0.89). Conclusions Vendor-independent adaptive image-based noise reduction and FBP provided observer performance that was non-inferior to scanner-based IR methods. Adaptive image-based noise reduction maintained or improved upon image quality ratings compared to FBP when performing CTE at lower dose levels. PMID:25725794

  20. Observer Performance in the Detection and Classification of Malignant Hepatic Nodules and Masses with CT Image-Space Denoising and Iterative Reconstruction1

    PubMed Central

    Fletcher, Joel G.; Yu, Lifeng; Li, Zhoubo; Manduca, Armando; Blezek, Daniel J.; Hough, David M.; Venkatesh, Sudhakar K.; Brickner, Gregory C.; Cernigliaro, Joseph C.; Hara, Amy K.; Fidler, Jeff L.; Lake, David S.; Shiung, Maria; Lewis, David; Leng, Shuai; Augustine, Kurt E.; Carter, Rickey E.; Holmes, David R.; McCollough, Cynthia H.

    2015-01-01

    Purpose To determine if lower-dose computed tomographic (CT) scans obtained with adaptive image-based noise reduction (adaptive nonlocal means [ANLM]) or iterative reconstruction (sinogram-affirmed iterative reconstruction [SAFIRE]) result in reduced observer performance in the detection of malignant hepatic nodules and masses compared with routine-dose scans obtained with filtered back projection (FBP). Materials and Methods This study was approved by the institutional review board and was compliant with HIPAA. Informed consent was obtained from patients for the retrospective use of medical records for research purposes. CT projection data from 33 abdominal and 27 liver or pancreas CT examinations were collected (median volume CT dose index, 13.8 and 24.0 mGy, respectively). Hepatic malignancy was defined by progression or regression or with histopathologic findings. Lower-dose data were created by using a validated noise insertion method (10.4 mGy for abdominal CT and 14.6 mGy for liver or pancreas CT) and images reconstructed with FBP, ANLM, and SAFIRE. Four readers evaluated routine-dose FBP images and all lower-dose images, circumscribing liver lesions and selecting diagnosis. The jack-knife free-response receiver operating characteristic figure of merit (FOM) was calculated on a per-malignant nodule or per-mass basis. Noninferiority was defined by the lower limit of the 95% confidence interval (CI) of the difference between lower-dose and routine-dose FOMs being less than −0.10. Results Twenty-nine patients had 62 malignant hepatic nodules and masses. Estimated FOM differences between lower-dose FBP and lower-dose ANLM versus routine-dose FBP were noninferior (difference: −0.041 [95% CI: −0.090, 0.009] and −0.003 [95% CI: −0.052, 0.047], respectively). In patients with dedicated liver scans, lower-dose ANLM images were noninferior (difference: +0.015 [95% CI: −0.077, 0.106]), whereas lower-dose FBP images were not (difference −0.049 [95% CI:

  1. Texture preservation in de-noising UAV surveillance video through multi-frame sampling

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.

    2009-02-01

Image de-noising is a widely used technology in modern real-world surveillance systems. Methods can seldom achieve both de-noising and texture preservation very well without direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among different pixels. This technique results in good preservation of textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find corresponding regions between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.

  2. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    PubMed Central

    Zhang, Chu; Ye, Hui; Liu, Fei; He, Yong; Kong, Wenwen; Sheng, Kuichuan

    2016-01-01

Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noise in the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for the biomass energy production process by hyperspectral imaging. PMID:26901202

  3. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection.

    PubMed

    Zhang, Chu; Ye, Hui; Liu, Fei; He, Yong; Kong, Wenwen; Sheng, Kuichuan

    2016-01-01

Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874-1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noise in the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for the biomass energy production process by hyperspectral imaging. PMID:26901202
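The two figures of merit quoted above have standard definitions: rp is the Pearson correlation between reference and predicted values on the prediction set, and RMSEP is the root mean square error of prediction. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def prediction_metrics(y_ref, y_pred):
    """Correlation coefficient of prediction (rp) and root mean square
    error of prediction (RMSEP) between reference and predicted values."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rp = np.corrcoef(y_ref, y_pred)[0, 1]
    rmsep = np.sqrt(np.mean((y_pred - y_ref) ** 2))
    return rp, rmsep
```

Note the two metrics are complementary: a constant offset in the predictions leaves rp at 1 while RMSEP reports the bias.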

  4. The research and application of double mean weighting denoising algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Hao; Xiong, Feng

    2015-12-01

In the application of image processing and pattern recognition, the precision of image preprocessing has a great influence on subsequent image processing and analysis. This paper describes a novel local double mean weighted algorithm (hereinafter referred to as the D-M algorithm) for image denoising. First, the differences between the current pixel and the pixels in its neighborhood are computed and their absolute values taken; then the absolute values are sorted and the mean of the closer half of the neighborhood pixels is taken; finally, the mean is combined with the current pixel through a weighting coefficient. A large number of experiments show that the algorithm not only offers a degree of robustness but also yields a significant improvement in denoising performance.
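The abstract leaves the weighting rule underspecified, so the following is only one plausible reading of the described steps (sort absolute differences to the centre pixel, average the closer half of the neighbours, blend with the centre), not a faithful reimplementation of the D-M algorithm; the blend `weight` is an assumed parameter:

```python
import numpy as np

def double_mean_denoise(img, weight=0.5):
    """Loose sketch of the described 'double mean weighting' idea: for
    each pixel, sort the absolute differences to its 3x3 neighbours,
    average the four neighbours closest in value, and blend that mean
    with the original pixel. The exact rule in the paper may differ."""
    pad = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    h, w = np.asarray(img).shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3].ravel()
            centre = win[4]
            nbrs = np.delete(win, 4)
            order = np.argsort(np.abs(nbrs - centre))
            close_mean = nbrs[order[:4]].mean()  # mean of the closer half
            out[i, j] = weight * close_mean + (1 - weight) * centre
    return out
```

Averaging only the neighbours closest in value is what gives such schemes robustness to outliers (impulse noise) relative to a plain box filter.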

  5. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes denoising of HARDI data particularly important in practice. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant to both spatial rotations as well as to the particular sampling scheme in use. We also provide a detailed description of the proposed filtering procedure and its efficient implementation, together with experimental results on synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  6. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, with the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  7. Doppler ultrasound signal denoising based on wavelet frames.

    PubMed

    Zhang, Y; Wang, Y; Wang, W; Liu, B

    2001-05-01

    A novel approach was proposed to denoise the Doppler ultrasound signal. Using this method, wavelet coefficients of the Doppler signal at multiple scales were first obtained using the discrete wavelet frame analysis. Then, a soft thresholding-based denoising algorithm was employed to deal with these coefficients to get the denoised signal. In the simulation experiments, the SNR improvements and the maximum frequency estimation precision were studied for the denoised signal. From the simulation and clinical studies, it was concluded that the performance of this discrete wavelet frame (DWF) approach is higher than that of the standard (critically sampled) wavelet transform (DWT) for the Doppler ultrasound signal denoising. PMID:11381694
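
    The soft-thresholding step at the core of this method can be illustrated with a single-level orthogonal Haar transform. The paper uses overcomplete (shift-invariant) wavelet frames rather than the critically sampled transform below, so this is a simplified sketch; the signal length is assumed even.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising with soft thresholding of the
    detail coefficients (illustrative, critically sampled version)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass (scaling) coeffs
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass (wavelet) coeffs
    detail = soft_threshold(detail, threshold)  # shrink noisy details
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

    With the threshold at zero the transform reconstructs the input exactly; with a threshold near a few noise standard deviations, noise-dominated detail coefficients are suppressed and the error against the clean signal drops.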

  8. Local thresholding de-noise speech signal

    NASA Astrophysics Data System (ADS)

    Luo, Haitao

    2013-07-01

    This work de-noises noisy speech signals. A wavelet is constructed according to Daubechies' method, and a wavelet packet is derived from the constructed scaling and wavelet functions. The noisy speech signal is decomposed by the wavelet packet. Algorithms are developed to detect the beginning and ending points of speech. A polynomial function is constructed for local thresholding. Different strategies are applied to de-noise and compress the decomposed terminal-node coefficients. The wavelet packet tree is then reconstructed, the audio file is rebuilt from the reconstructed data, and the effectiveness of the different strategies is compared.

  9. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  10. Denoising PCR-amplified metagenome data

    PubMed Central

    2012-01-01

    Background PCR amplification and high-throughput sequencing theoretically enable the characterization of the finest-scale diversity in natural microbial and viral populations, but each of these methods introduces random errors that are difficult to distinguish from genuine biological diversity. Several approaches have been proposed to denoise these data but lack either speed or accuracy. Results We introduce a new denoising algorithm that we call DADA (Divisive Amplicon Denoising Algorithm). Without training data, DADA infers both the sample genotypes and error parameters that produced a metagenome data set. We demonstrate performance on control data sequenced on Roche’s 454 platform, and compare the results to the most accurate denoising software currently available, AmpliconNoise. Conclusions DADA is more accurate and over an order of magnitude faster than AmpliconNoise. It eliminates the need for training data to establish error parameters, fully utilizes sequence-abundance information, and enables inclusion of context-dependent PCR error rates. It should be readily extensible to other sequencing platforms such as Illumina. PMID:23113967

  11. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noises, which degrade many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, the noise is signal-dependent, and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm shows the following improvements compared with Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate one Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filtering is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms others in terms of both efficiency and robustness. PMID:27238055
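
    The Wiener-like attenuation underlying such filters can be sketched as an empirical coefficient-wise gain in the wavelet domain. The signal-power pilot estimate below is a common simplification, not Wirestam's actual estimator:

```python
import numpy as np

def wiener_shrink(coeffs, sigma):
    """Empirical Wiener-like attenuation of wavelet coefficients.

    Each coefficient is scaled by P / (P + sigma^2), where the clean
    signal power P is crudely estimated from the noisy coefficient
    itself as max(c^2 - sigma^2, 0) -- a sketch, not the paper's
    pilot-estimate scheme."""
    c = np.asarray(coeffs, dtype=float)
    p = np.maximum(c ** 2 - sigma ** 2, 0.0)   # estimated signal power
    return c * p / (p + sigma ** 2)
```

    A large coefficient (likely signal) passes almost unchanged, while coefficients at or below the noise floor are driven to zero, which is exactly the adaptive behavior that distinguishes Wiener-like shrinkage from a fixed hard threshold.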

  12. Fast 4D segmentation of large datasets using graph cuts

    NASA Astrophysics Data System (ADS)

    Lombaert, Herve; Sun, Yiyong; Cheriet, Farida

    2011-03-01

    In this paper, we propose to use 4D graph cuts for the segmentation of large spatio-temporal (4D) datasets. Indeed, as 4D datasets grow in popularity in many clinical areas, so will the demand for efficient general segmentation algorithms. The graph cuts method1 has become a leading method for complex 2D and 3D image segmentation in many applications. Despite a few attempts2-5 in 4D, the use of graph cuts on typical medical volumes quickly exceeds today's computer capacities. Among all existing graph cuts based methods6-10 the multilevel banded graph cuts9 is the fastest and uses the least amount of memory. Nevertheless, this method has its limitations. Memory becomes an issue when using large 4D volume sequences, and small structures become hardly recoverable when using narrow bands. We thus improve the boundary refinement efficiency by using a 4D competitive region growing. First, we construct a coarse graph at a low resolution with strong temporal links to prevent the shrink bias inherent to the graph cuts method. Second, we use a competitive region growing with a priority queue to capture all fine details. Leaks are prevented by constraining the competitive region growing within a banded region and by adding a viscosity term. This strategy yields results comparable to the multilevel banded graph cuts but is faster and allows its application to large 4D datasets. We applied our method on both cardiac 4D MRI and 4D CT datasets with promising results.
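
    The priority-queue competitive region growing used for boundary refinement can be sketched in 2-D. This is a bare-bones illustration: the paper works in 4-D and adds the banding and viscosity constraints described above, which are omitted here.

```python
import heapq
import numpy as np

def competitive_region_growing(img, seeds):
    """Competitive region growing with a priority queue (2-D sketch).

    Each seed label expands outward, always claiming the cheapest
    unclaimed pixel next, where a step's cost is the intensity jump it
    crosses. `seeds` maps (y, x) -> integer label (labels start at 1)."""
    labels = np.zeros(img.shape, dtype=int)
    heap = [(0.0, y, x, lab) for (y, x), lab in seeds.items()]
    heapq.heapify(heap)
    while heap:
        cost, y, x, lab = heapq.heappop(heap)
        if labels[y, x] != 0:
            continue                      # already claimed by a cheaper front
        labels[y, x] = lab
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and labels[ny, nx] == 0):
                step = abs(float(img[ny, nx]) - float(img[y, x]))
                heapq.heappush(heap, (cost + step, ny, nx, lab))
    return labels
```

    On a two-region image, each seed's front fills its own homogeneous region at zero cost before either front pays the price of crossing the intensity edge, so the labels settle on the true boundary.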

  13. Radiation dose reduction in computed tomography (CT) using a new implementation of wavelet denoising in low tube current acquisitions

    NASA Astrophysics Data System (ADS)

    Tao, Yinghua; Brunner, Stephen; Tang, Jie; Speidel, Michael; Rowley, Howard; VanLysel, Michael; Chen, Guang-Hong

    2011-03-01

    Radiation dose reduction remains at the forefront of research in computed tomography. X-ray tube parameters such as tube current can be lowered to reduce dose; however, images become prohibitively noisy when the tube current is too low. Wavelet denoising is one of many noise reduction techniques. However, traditional wavelet techniques have the tendency to create an artificial noise texture, due to the nonuniform denoising across the image, which is undesirable from a diagnostic perspective. This work presents a new implementation of wavelet denoising that is able to achieve noise reduction, while still preserving spatial resolution. Further, the proposed method has the potential to improve those unnatural noise textures. The technique was tested on both phantom and animal datasets (Catphan phantom and time-resolved swine heart scan) acquired on a GE Discovery VCT scanner. A number of tube currents were used to investigate the potential for dose reduction.

  14. 4D-DSA and 4D fluoroscopy: preliminary implementation

    NASA Astrophysics Data System (ADS)

    Mistretta, C. A.; Oberstar, E.; Davis, B.; Brodsky, E.; Strother, C. M.

    2010-04-01

    We have described methods that allow highly accelerated MRI using under-sampled acquisitions and constrained reconstruction. One is a hybrid acquisition involving the constrained reconstruction of time dependent information obtained from a separate scan of longer duration. We have developed reconstruction algorithms for DSA that allow use of a single injection to provide the temporal data required for flow visualization and the steady state data required for construction of a 3D-DSA vascular volume. The result is time resolved 3D volumes with typical resolution of 512³ at frame rates of 20-30 fps. Full manipulation of these images is possible during each stage of vascular filling thereby allowing for simplified interpretation of vascular dynamics. For intravenous angiography this time resolved 3D capability overcomes the vessel overlap problem that greatly limited the use of conventional intravenous 2D-DSA. Following further hardware development, it will also be possible to rotate fluoroscopic volumes for use as roadmaps that can be viewed at arbitrary angles without a need for gantry rotation. The most precise implementation of this capability requires availability of biplane fluoroscopy data. Since the reconstruction of 3D volumes presently suppresses the contrast in the soft tissue, the possibility of using these techniques to derive complete indications of perfusion deficits based on cerebral blood volume (CBV), mean transit time (MTT) and time to peak (TTP) parameters requires further investigation. Using MATLAB post-processing, successful studies in animals and humans done in conjunction with both intravenous and intra-arterial injections have been completed. Real time implementation is in progress.

  15. Locally optimized non-local means denoising for low-dose X-ray backscatter imagery.

    PubMed

    Tracey, Brian H; Miller, Eric L; Wu, Yue; Alvino, Christopher; Schiefele, Markus; Al-Kofahi, Omar

    2014-01-01

    While recent years have seen considerable progress in image denoising, the leading techniques have been developed for digital photographs or other images that can have very different characteristics than those encountered in X-ray applications. In particular here we examine X-ray backscatter (XBS) images collected by airport security systems, where images are piecewise smooth and edge information is typically more correlated with objects while texture is dominated by statistical noise in the detected signal. In this paper, we show how multiple estimates for a denoised XBS image can be combined using a variational approach, giving a solution that enhances edge contrast by trading off gradient penalties against data fidelity terms. We demonstrate the approach by combining several estimates made using the non-local means (NLM) algorithm, a widely used patch-based denoising method. The resulting improvements hold the potential for improving automated analysis of low-SNR X-ray imagery and can be applied in other applications where edge information is of interest. PMID:25265919

  16. Adaptive non-local means filtering based on local noise level for CT denoising

    NASA Astrophysics Data System (ADS)

    Li, Zhoubo; Yu, Lifeng; Trzasko, Joshua D.; Fletcher, Joel G.; McCollough, Cynthia H.; Manduca, Armando

    2012-03-01

    Radiation dose from CT scans is an increasing health concern in the practice of radiology. Higher dose scans can produce clearer images with high diagnostic quality, but may increase the potential risk of radiation-induced cancer or other side effects. Lowering radiation dose alone generally produces a noisier image and may degrade diagnostic performance. Recently, CT dose reduction based on non-local means (NLM) filtering for noise reduction has yielded promising results. However, traditional NLM denoising operates under the assumption that image noise is spatially uniform noise, while in CT images the noise level varies significantly within and across slices. Therefore, applying NLM filtering to CT data using a global filtering strength cannot achieve optimal denoising performance. In this work, we have developed a technique for efficiently estimating the local noise level for CT images, and have modified the NLM algorithm to adapt to local variations in noise level. The local noise level estimation technique matches the true noise distribution determined from multiple repetitive scans of a phantom object very well. The modified NLM algorithm provides more effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with the clinical workflow.
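
    The idea of letting the NLM filtering strength follow a local noise map can be sketched in 1-D with point-wise patches. This is a strong simplification of the authors' method (real NLM compares patches, not single samples), and the `search` and `strength` parameters are hypothetical:

```python
import numpy as np

def adaptive_nlm_1d(signal, sigma_map, search=5, strength=1.0):
    """Non-local means with a spatially varying filtering strength.

    Instead of one global h, each sample i uses h = strength * sigma_map[i],
    so smoothing is stronger where the estimated local noise level is
    higher (1-D, point-wise-patch sketch; illustrative only)."""
    s = np.asarray(signal, dtype=float)
    out = np.empty_like(s)
    n = len(s)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        h = strength * sigma_map[i] + 1e-12
        w = np.exp(-((s[lo:hi] - s[i]) ** 2) / (2.0 * h * h))  # similarity weights
        out[i] = np.sum(w * s[lo:hi]) / np.sum(w)
    return out
```

    On a flat noisy signal with a correct noise map, the weighted averaging reduces the error against the clean signal, and a noise map that varies across the volume would vary the effective smoothing in the same way.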

  17. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level, and related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two "state of the art" denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters where the denoising algorithms are capable of recovering the quality from the DBT images acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that BM3D algorithm achieved better noise adjustment (mean difference in peak signal to noise ratio < 0.1dB) and less blurring (mean difference in image sharpness ~ 6%) than the NLM for the projections acquired with lower radiation doses.

  18. The 4-D approach to visual control of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dickmanns, Ernst D.

    1994-01-01

    Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models as invariants for object recognition. Situation assessment and long term predictions were allowed through maintenance of a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.

  19. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    SciTech Connect

    Kostou, T; Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-06-15

    Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed on GATE to determine dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five commonly used radionuclides in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target-organs were calculated for each isotope. Source-organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0–30.0% range. Tables with all the calculations as reference dosimetric data were developed. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in the total mass), and thus accurate definition of the organ mass is a crucial parameter for self-absorbed S-value calculations. Our goal is to extend the study for accurate estimations in small animal imaging, whereas it is known

  20. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient making many, repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps the position and the water equivalent range of the dose grid points is adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on

  1. Quantification and analysis of respiratory motion from 4D MRI

    NASA Astrophysics Data System (ADS)

    Aizzuddin Abd Rahni, Ashrani; Lewis, Emma; Wells, Kevin

    2014-11-01

    It is well known that respiratory motion affects image acquisition and also external beam radiotherapy (EBRT) treatment planning and delivery. However, existing approaches for respiratory motion management are often based on a generic view of respiratory motion, such as the general movement of organs, tissue or fiducials. This paper thus aims to present a more in-depth analysis of respiratory motion based on 4D MRI for further integration into motion correction in image acquisition or image-based EBRT. Internal and external motion was first analysed separately, on a per-organ basis for internal motion. Principal component analysis (PCA) was then performed on the internal and external motion vectors separately, and the relationship between the two PCA spaces was analysed. The motion extracted from 4D MRI was in general found to be consistent with what has been reported in the literature.
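
    The PCA step applied to the motion vectors can be sketched with an SVD of the centred data matrix (rows: time points; columns: stacked displacement components). The single-mode synthetic "motion" in the usage example is illustrative only:

```python
import numpy as np

def motion_pca(vectors, n_components):
    """PCA of motion vectors via SVD (sketch).

    Returns the mean motion, the principal motion modes, the per-time-point
    scores, and the fraction of variance each retained mode explains."""
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]           # principal motion modes
    scores = centered @ components.T         # per-time-point weights
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return mean, components, scores, explained
```

    For breathing-like motion dominated by one mode, the first component captures nearly all the variance, and `mean + scores @ components` reconstructs the trajectory up to the noise floor.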

  2. Experimental wavelet based denoising for indoor infrared wireless communications.

    PubMed

    Rajbhandari, Sujan; Ghassemlooy, Zabih; Angelova, Maia

    2013-06-01

    This paper reports the experimental wavelet denoising techniques carried out for the first time for a number of modulation schemes for indoor optical wireless communications in the presence of fluorescent light interference. The experimental results are verified using computer simulations, clearly illustrating the advantage of the wavelet denoising technique in comparison to the high pass filtering for all baseband modulation schemes. PMID:23736631

  3. Los Alamos National Laboratory 4D Database

    SciTech Connect

    Atencio, Julian J.

    2014-05-02

    4D is an integrated development platform - a single product comprised of the components you need to create and distribute professional applications. You get a graphical design environment, SQL database, a programming language, integrated PHP execution, HTTP server, application server, executable generator, and much more. 4D offers multi-platform development and deployment, meaning whatever you create on a Mac can be used on Windows, and vice-versa. Beyond productive development, 4D is renowned for its great flexibility in maintenance and modification of existing applications, and its extreme ease of implementation in its numerous deployment options. Your professional application can be put into production more quickly, at a lower cost, and will always be instantly scalable. 4D makes it easy, whether you're looking to create a classic desktop application, a client-server system, a distributed solution for Web or mobile clients - or all of the above!

  4. Computing Myocardial Motion in 4D Echocardiography

    PubMed Central

    Mukherjee, Ryan; Sprouse, Chad; Pinheiro, Aurélio; Abraham, Theodore; Burlina, Philippe

    2012-01-01

    4D (3D spatial+time) echocardiography is gaining widespread acceptance at clinical institutions for its high temporal resolution and relatively low cost. We describe a novel method for computing dense 3D myocardial motion with high accuracy. The method is based on a classical variational optical flow technique, but exploits modern developments in optical flow research to utilize the full capabilities of 4D echocardiography. Using a variety of metrics, we present an in-depth performance evaluation of the method on synthetic, phantom, and intraoperative 4D Transesophageal Echocardiographic (TEE) data. When compared with state-of-the-art optical flow and speckle tracking techniques currently found in 4D echocardiography, the method we present shows notable improvements in error. We believe the performance improvements shown can have a positive impact when the method is used as input for various applications, such as strain computation, biomechanical modeling, or automated diagnostics. PMID:22677256

  5. A connection between score matching and denoising autoencoders.

    PubMed

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from it or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models. PMID:21492012
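
    The training criterion discussed here (reconstruct the clean input from a Gaussian-corrupted copy, with tied encoder/decoder weights) can be sketched with a minimal numpy autoencoder. The architecture, learning rate and noise level below are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TiedDAE:
    """Minimal denoising autoencoder: sigmoid encoder, linear decoder,
    tied weights (W and W.T), trained by gradient descent on the squared
    reconstruction error of the clean input from a corrupted copy."""

    def __init__(self, n_vis, n_hid, lr=0.2):
        self.W = 0.1 * rng.standard_normal((n_vis, n_hid))
        self.b_h = np.zeros(n_hid)
        self.b_v = np.zeros(n_vis)
        self.lr = lr

    def reconstruct(self, x):
        h = sigmoid(x @ self.W + self.b_h)     # encoder
        return h @ self.W.T + self.b_v, h      # tied-weight linear decoder

    def train_step(self, x_clean, noise_std=0.3):
        x_noisy = x_clean + noise_std * rng.standard_normal(x_clean.shape)
        recon, h = self.reconstruct(x_noisy)
        err = recon - x_clean                  # d(loss)/d(recon), up to a constant
        grad_h = (err @ self.W) * h * (1.0 - h)
        n = len(x_clean)
        # W appears in both encoder and decoder, so both terms contribute
        self.W -= self.lr * (x_noisy.T @ grad_h + err.T @ h) / n
        self.b_h -= self.lr * grad_h.mean(axis=0)
        self.b_v -= self.lr * err.mean(axis=0)
        return float((err ** 2).mean())
```

    Training on a fixed batch steadily lowers the denoising reconstruction error; in the paper's analysis, the learned reconstruction direction relates to the score of a density model of the data.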

  6. On "new massive" 4D gravity

    NASA Astrophysics Data System (ADS)

    Bergshoeff, Eric A.; Fernández-Melgarejo, J. J.; Rosseel, Jan; Townsend, Paul K.

    2012-04-01

    We construct a four-dimensional (4D) gauge theory that propagates, unitarily, the five polarization modes of a massive spin-2 particle. These modes are described by a "dual" graviton gauge potential and the Lagrangian is 4th-order in derivatives. As the construction mimics that of 3D "new massive gravity", we call this 4D model (linearized) "new massive dual gravity". We analyse its massless limit, and discuss similarities to the Eddington-Schrödinger model.

  7. Speech signal denoising with wavelet-transforms and the mean opinion score characterizing the filtering quality

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-03-01

    Speech signal processing is widely used to reduce noise impact in acquired data. During the last decades, wavelet-based filtering techniques have often been applied in communication systems due to their advantages in signal denoising as compared with Fourier-based methods. In this study we consider applications of a 1-D double density complex wavelet transform (1D-DDCWT) and compare the results with the standard 1-D discrete wavelet transform (1D-DWT). The performances of the considered techniques are compared using the mean opinion score (MOS), the primary metric for the quality of the processed signals. A two-dimensional extension of this approach can be used for effective image denoising.

  8. 4D MRI for the Localization of Parathyroid Adenoma: A Novel Method in Evolution.

    PubMed

    Merchavy, Shlomo; Luckman, Judith; Guindy, Michal; Segev, Yoram; Khafif, Avi

    2016-03-01

    The sestamibi scan (MIBI) and ultrasound (US) are used for preoperative localization of parathyroid adenoma (PTA), with sensitivity as high as 90%. We developed 4-dimensional magnetic resonance imaging (4D MRI) as a novel tool for identifying PTAs. Eleven patients with PTA were enrolled. 4D MRI from the mandible to the aortic arch was used. Optimization of the timing of image acquisition was obtained by changing dynamic and static sequences. PTAs were identified in all except 1 patient. In 9 patients, there was a complete match between the 4D MRI and the US and MIBI, as well as with the operative finding. In 1 patient, the adenoma was correctly localized by 4D MRI, in contrast to the US and MIBI scan. The sensitivity of the 4D MRI was 90% and after optimization, 100%. Specificity was 100%. We concluded that 4D MRI is a reliable technique for identification of PTAs, although more studies are needed. PMID:26598499

  9. Impact of incorporating visual biofeedback in 4D MRI.

    PubMed

    To, David T; Kim, Joshua P; Price, Ryan G; Chetty, Indrin J; Glide-Hurst, Carri K

    2016-01-01

    Precise radiation therapy (RT) for abdominal lesions is complicated by respiratory motion and suboptimal soft tissue contrast in 4D CT. 4D MRI offers improved contrast although long scan times and irregular breathing patterns can be limiting. To address this, visual biofeedback (VBF) was introduced into 4D MRI. Ten volunteers were consented to an IRB-approved protocol. Prospective respiratory-triggered, T2-weighted, coronal 4D MRIs were acquired on an open 1.0T MR-SIM. VBF was integrated using an MR-compatible interactive breath-hold control system. Subjects visually monitored their breathing patterns to stay within predetermined tolerances. 4D MRIs were acquired with and without VBF for 2- and 8-phase acquisitions. Normalized respiratory waveforms were evaluated for scan time, duty cycle (programmed/acquisition time), breathing period, and breathing regularity (end-inhale coefficient of variation, EI-COV). Three reviewers performed image quality assessment to compare artifacts with and without VBF. Respiration-induced liver motion was calculated via centroid difference analysis of end-exhale (EE) and EI liver contours. Incorporating VBF reduced 2-phase acquisition time (4.7 ± 1.0 and 5.4 ± 1.5 min with and without VBF, respectively) while reducing EI-COV by 43.8% ± 16.6%. For 8-phase acquisitions, VBF reduced acquisition time by 1.9 ± 1.6 min and EI-COVs by 38.8% ± 25.7% despite breathing rate remaining similar (11.1 ± 3.8 breaths/min with vs. 10.5 ± 2.9 without). Using VBF yielded higher duty cycles than unguided free breathing (34.4% ± 5.8% vs. 28.1% ± 6.6%, respectively). Image grading showed that out of 40 paired evaluations, 20 cases had equivalent and 17 had improved image quality scores with VBF, particularly for mid-exhale and EI. Increased liver excursion was observed with VBF, where superior-inferior, anterior-posterior, and left-right EE-EI displacements were 14.1± 5.8, 4.9 ± 2.1, and 1.5 ± 1.0 mm, respectively, with VBF compared to 11.9

  10. Bayesian Inference for Neighborhood Filters With Application in Denoising.

    PubMed

    Huang, Chao-Tsung

    2015-11-01

    Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they were originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason about and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and the range variance can be estimated successfully and efficiently. PMID:26259244
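As an illustration of the range-weighted neighborhood filters this framework covers, here is a minimal numpy sketch of the Yaroslavsky filter (spatially uniform window, Gaussian range kernel); the parameter `h` plays the role of the range standard deviation. This is not the paper's code:

```python
import numpy as np

def yaroslavsky_filter(img, radius=3, h=20.0):
    """Range-weighted neighborhood filter: each pixel is replaced by a
    weighted mean of its spatial neighbors, with weights decaying in the
    *intensity* difference (edge-preserving)."""
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode='reflect')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    H, W = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
            w = np.exp(-((shifted - img) ** 2) / (2.0 * h ** 2))  # range kernel
            num += w * shifted
            den += w
    return num / den
```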

  11. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
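The decompose-threshold-reconstruct procedure described above can be sketched as follows. This is an illustrative implementation using the Haar wavelet and a universal (VisuShrink-style) soft threshold, not the study's code; the study compares several wavelet families and thresholding techniques:

```python
import numpy as np

def haar_denoise(x, levels=5):
    """Multi-level Haar wavelet denoising with soft universal thresholding.
    Input length must be divisible by 2**levels."""
    s2 = np.sqrt(2.0)
    a, details = x.astype(float), []
    for _ in range(levels):
        d = (a[::2] - a[1::2]) / s2       # detail coefficients
        a = (a[::2] + a[1::2]) / s2       # approximation coefficients
        details.append(d)
    # Noise level from the finest detail coefficients (robust MAD estimate),
    # then the universal threshold sigma * sqrt(2 ln n).
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    details = [soft(d) for d in details]
    for d in reversed(details):           # inverse transform
        out = np.empty(2 * len(a))
        out[::2] = (a + d) / s2
        out[1::2] = (a - d) / s2
        a = out
    return a
```

Small wavelet coefficients, which are dominated by noise, are shrunk to zero, while the large coefficients carrying the heart-sound structure survive reconstruction.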

  12. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or observational data collection. Collected data are usually a mixture of the true signal and some error or noise, which may originate from the measurement apparatus or from human error in handling the data. Normally, before the data are used for further processing, this unwanted noise must be filtered out, and the wavelet transform is one of the most efficient methods for doing so. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e. noise, which must be removed before the data are used to develop a mathematical model. To apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed, using the coiflet2 wavelet with vanishing moments 4. Numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.

  13. 4D Bioprinting for Biomedical Applications.

    PubMed

    Gao, Bin; Yang, Qingzhen; Zhao, Xin; Jin, Guorui; Ma, Yufei; Xu, Feng

    2016-09-01

    3D bioprinting has been developed to effectively and rapidly pattern living cells and biomaterials, aiming to create complex bioconstructs. However, placing biocompatible materials or cells into direct contact via bioprinting is necessary but insufficient for creating these constructs. Therefore, '4D bioprinting' has emerged recently, where 'time' is integrated with 3D bioprinting as the fourth dimension, and the printed objects can change their shapes or functionalities when an external stimulus is imposed or when cell fusion or postprinting self-assembly occurs. In this review, we highlight recent developments in 4D bioprinting technology. Additionally, we review the uses of 4D bioprinting in tissue engineering and drug delivery. Finally, we discuss the major roadblocks to this approach, together with possible solutions, to provide future perspectives on this technology. PMID:27056447

  14. A sinogram warping strategy for pre-reconstruction 4D PET optimization.

    PubMed

    Gianoli, Chiara; Riboldi, Marco; Fontana, Giulia; Kurz, Christopher; Parodi, Katia; Baroni, Guido

    2016-03-01

    A novel strategy for 4D PET optimization in the sinogram domain is proposed, aiming at motion model application before image reconstruction ("sinogram warping" strategy). Compared to state-of-the-art 4D-MLEM reconstruction, the proposed strategy is able to optimize the image SNR, avoiding iterative direct and inverse warping procedures, which are typical of the 4D-MLEM algorithm. A full-count statistics sinogram of the motion-compensated 4D PET reference phase is generated by warping the sinograms corresponding to the different PET phases. This is achieved relying on a motion model expressed in the sinogram domain. The strategy was tested on the anthropomorphic 4D PET-CT NCAT phantom in comparison with the 4D-MLEM algorithm, with particular reference to robustness to PET-CT co-registration artefacts. The MLEM reconstruction of the warped sinogram according to the proposed strategy exhibited better accuracy (up to +40.90 % with respect to the ideal value), whereas images reconstructed with the 4D-MLEM algorithm were less noisy (down to -26.90 % with respect to the ideal value) but more blurred. The sinogram warping strategy demonstrates advantages with respect to the 4D-MLEM algorithm. These advantages come at the cost of an approximation of the deformation field, and further efforts are required to mitigate the impact of such an approximation in clinical 4D PET reconstruction. PMID:26126871

  15. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2015-01-01

    Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p value ≪0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration based 4D method, our method obtained better or similar performance, but was 58.6% faster. Also, the method can be easily expanded to process 4D CT data sets consisting of several volumes. PMID:26557844
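The Dice coefficient used for evaluation is computed from the overlap of two binary segmentations; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```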

  16. Denoising ECG signal based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) has been used extensively for the detection of heart disease. Frequently the signal is corrupted by various kinds of noise such as muscle noise, electromyogram (EMG) interference, instrument noise etc. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs). The statistically significant information content is built using an empirical energy model of the IMFs. A noisy ECG signal collected from a clinical recording is processed using the method. The results show that, in contrast with traditional methods, the novel denoising method can achieve optimal denoising of the ECG signal.
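Assuming the IMFs have already been obtained from any EEMD implementation (e.g. the `PyEMD` package), partial reconstruction from the "statistically significant" IMFs can be sketched as below. The selection rule here (comparing each IMF's energy against a white-noise energy decay extrapolated from the first, noise-dominated IMF) is an illustrative simplification, not the paper's exact empirical energy model:

```python
import numpy as np

def select_and_reconstruct(imfs, k=2.0):
    """Partial EEMD reconstruction: treat the first (highest-frequency) IMF as
    noise-dominated, extrapolate a per-IMF noise-energy decay from it, and keep
    only IMFs whose energy exceeds that estimate by a factor of k.
    `imfs` has shape (n_imfs, n_samples)."""
    energies = np.mean(imfs ** 2, axis=1)
    # White-noise IMF energies roughly halve per decomposition level.
    noise_energy = energies[0] * 0.5 ** np.arange(len(energies))
    keep = energies > k * noise_energy
    keep[0] = False              # always discard the noise-dominated first IMF
    return imfs[keep].sum(axis=0), keep
```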

  17. A new denoising method in high-dimensional PCA-space

    NASA Astrophysics Data System (ADS)

    Do, Quoc Bao; Beghdadi, Azeddine; Luong, Marie

    2012-03-01

    Kernel-design-based methods such as the bilateral filter (BIL) and the non-local means (NLM) filter are among the most attractive approaches for denoising. We propose in this paper a new noise filtering method inspired by the BIL and NLM filters and by principal component analysis (PCA). The main idea here is to perform the BIL in a multidimensional PCA-space using an anisotropic kernel. The filtered multidimensional signal is then transformed back onto the image spatial domain to yield the desired enhanced image. In this work, it is demonstrated that the proposed method is a generalization of kernel-design-based methods. The obtained results are highly promising.
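A hedged sketch of the core idea (patch distances measured in a low-dimensional PCA space driving the range weights) is given below; it uses global weights rather than a search window and a scalar range kernel rather than the anisotropic kernel, so it is only an illustration, not the authors' implementation:

```python
import numpy as np

def pca_patch_denoise(img, patch=5, d=6, h=10.0):
    """NLM/bilateral-style filtering with patch distances measured in a
    d-dimensional PCA space (global weights; O(N^2), for small images only)."""
    r = patch // 2
    pad = np.pad(img.astype(np.float64), r, mode='reflect')
    H, W = img.shape
    # Collect all patches as row vectors.
    P = np.stack([pad[i:i + patch, j:j + patch].ravel()
                  for i in range(H) for j in range(W)])
    Pc = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
    Z = Pc @ Vt[:d].T                    # project patches onto d principal axes
    centers = P[:, (patch * patch) // 2]
    out = np.empty(H * W)
    for i in range(H * W):
        w = np.exp(-np.sum((Z - Z[i]) ** 2, axis=1) / (h ** 2))
        out[i] = np.sum(w * centers) / np.sum(w)
    return out.reshape(H, W)
```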

  18. 4D-Var Development at GMAO

    NASA Technical Reports Server (NTRS)

    Pelc, Joanna S.; Todling, Ricardo; Akkraoui, Amal El

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is currently using an IAU-based 3D-Var data assimilation system. GMAO has been experimenting with a 3D-Var-hybrid version of its data assimilation system (DAS) for over a year now, which will soon become operational and will rapidly progress toward 4D-EnVar. Concurrently, the machinery to exercise traditional 4D-Var is in place, and it is desirable to compare the traditional 4D approach with the other available options and evaluate their performance in the Goddard Earth Observing System (GEOS) DAS. This work will also explore the possibility of constructing a reduced order model (ROM) to make traditional 4D-Var computationally attractive for increasing model resolutions. Part of the research on ROM will be to search for a suitable subspace in which to carry out the corresponding reduction. This poster illustrates how the IAU-based 4D-Var assimilation compares with our currently used IAU-based 3D-Var.

  19. Multicolor 4D Fluorescence Microscopy using Ultrathin Bessel Light Sheets

    PubMed Central

    Zhao, Teng; Lau, Sze Cheung; Wang, Ying; Su, Yumian; Wang, Hao; Cheng, Aifang; Herrup, Karl; Ip, Nancy Y.; Du, Shengwang; Loy, M. M. T.

    2016-01-01

    We demonstrate a simple and efficient method for producing ultrathin Bessel (‘non-diffracting’) light sheets of any color using a line-shaped beam and an annulus filter. With this robust and cost-effective technology, we obtained two-color, 3D images of biological samples with lateral/axial resolution of 250 nm/400 nm, and high-speed, 4D volume imaging of 20 μm sized live sample at 1 Hz temporal resolution. PMID:27189786

  20. 4D micro-CT using fast prospective gating

    NASA Astrophysics Data System (ADS)

    Guo, Xiaolian; Johnston, Samuel M.; Qi, Yi; Johnson, G. Allan; Badea, Cristian T.

    2012-01-01

    Micro-CT is currently used in preclinical studies to provide anatomical information. But, there is also significant interest in using this technology to obtain functional information. We report here a new sampling strategy for 4D micro-CT for functional cardiac and pulmonary imaging. Rapid scanning of free-breathing mice is achieved with fast prospective gating (FPG) implemented on a field programmable gate array. The method entails on-the-fly computation of delays from the R peaks of the ECG signals or the peaks of the respiratory signals for the triggering pulses. Projection images are acquired for all cardiac or respiratory phases at each angle before rotating to the next angle. FPG can deliver the faster scan time of retrospective gating (RG) with the regular angular distribution of conventional prospective gating for cardiac or respiratory gating. Simultaneous cardio-respiratory gating is also possible with FPG in a hybrid retrospective/prospective approach. We have performed phantom experiments to validate the new sampling protocol and compared the results from FPG and RG in cardiac imaging of a mouse. Additionally, we have evaluated the utility of incorporating respiratory information in 4D cardiac micro-CT studies with FPG. A dual-source micro-CT system was used for image acquisition with pulsed x-ray exposures (80 kVp, 100 mA, 10 ms). The cardiac micro-CT protocol involves the use of a liposomal blood pool contrast agent containing 123 mg I ml-1 delivered via a tail vein catheter in a dose of 0.01 ml g-1 body weight. The phantom experiment demonstrates that FPG can distinguish the successive phases of phantom motion with minimal motion blur, and the animal study demonstrates that respiratory FPG can distinguish inspiration and expiration. 4D cardiac micro-CT imaging with FPG provides image quality superior to RG at an isotropic voxel size of 88 µm and 10 ms temporal resolution. The acquisition time for either sampling approach is less than 5 min. The
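The on-the-fly computation of trigger delays from the R peaks can be sketched as below (illustrative only; the actual FPGA implementation and its parameters are not described here, and all names are our own):

```python
import numpy as np

def trigger_delays(r_peaks, n_phases=10, n_avg=5):
    """Per-phase trigger delays (s) after the latest R peak, using the mean
    of the last n_avg R-R intervals as the expected cardiac period."""
    rr = np.diff(np.asarray(r_peaks, dtype=float))
    period = rr[-n_avg:].mean()
    return np.arange(n_phases) * period / n_phases
```

With projections triggered at these delays for every gantry angle, each cardiac phase receives the same number of evenly spaced projections, which is the stated advantage of fast prospective gating over retrospective binning.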

  2. 4D-Flow validation, numerical and experimental framework

    NASA Astrophysics Data System (ADS)

    Sansom, Kurt; Liu, Haining; Canton, Gador; Aliseda, Alberto; Yuan, Chun

    2015-11-01

    This work presents a group of assessment metrics for new 4D MRI flow sequences, an imaging modality that allows for visualization of three-dimensional pulsatile flow in the cardiovascular anatomy through time-resolved three-dimensional blood velocity measurements from cardiac-cycle-synchronized MRI acquisition. This is a promising tool for clinical assessment but lacks a robust validation framework. First, 4D-MRI flow in a subject's stenotic carotid bifurcation is compared with a patient-specific CFD model using two different boundary condition methods. Second, Particle Image Velocimetry in a patient-specific phantom is used as a benchmark to compare the 4D-MRI in vivo measurements and CFD simulations under the same conditions. Comparison of estimated and measurable flow parameters such as wall shear stress, fluctuating velocity rms, and Lagrangian particle residence time will be discussed, with justification for their biomechanical relevance and the insights they can provide into the pathophysiology of arterial disease: atherosclerosis and intimal hyperplasia. Lastly, the framework is applied to a new sequence to provide a quantitative assessment. A parametric analysis of the carotid bifurcation pulsatile flow conditions will be presented and an accuracy assessment provided.

  3. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  4. R4D Parked on Ramp

    NASA Technical Reports Server (NTRS)

    1956-01-01

    This photograph, taken in 1956, shows the first of three R4D Skytrain aircraft on the ramp behind the NACA High-Speed Flight Station. NACA stood for the National Advisory Committee for Aeronautics, which evolved into the National Aeronautics and Space Administration (NASA) in 1958. The R4D Skytrain was one of the early workhorses for NACA and NASA at Edwards Air Force Base, California, from 1952 to 1984. Designated the R4D by the U.S. Navy, the aircraft was called the C-47 by the U.S. Army and U.S. Air Force and the DC-3 by its builder, Douglas Aircraft. Nearly everyone called it the 'Gooney Bird.' In 1962, Congress consolidated the military-service designations and called all of them the C-47. After that date, the R4D at NASA's Flight Research Center (itself redesignated the Dryden Flight Research Center in 1976) was properly called a C-47. Over the 32 years it was used at Edwards, three different R4D/C-47s were used to shuttle personnel and equipment between NACA/NASA Centers and test locations throughout the country and for other purposes. One purpose was landing on 'dry' lakebeds used as alternate landing sites for the X-15, to determine whether their surfaces were hard (dry) enough for the X-15 to land on in case an emergency occurred after its launch and before it could reach Rogers Dry Lake at Edwards Air Force Base. The R4D/C-47 served a variety of needs, including serving as the first air-tow vehicle for the M2-F1 lifting body (which was built of mahogany plywood). The C-47 (as it was then called) was used for 77 tows before the M2-F1 was retired for more advanced lifting bodies that were dropped from the NASA B-52 'Mothership.' The R4D also served as a research aircraft. It was used to conduct early research on wing-tip-vortex flow visualization as well as checking out the NASA Uplink Control System.
The first Gooney Bird was at the NACA High-Speed Flight Research Station (now the Dryden Flight Research Center) from 1952 to 1956 and flew at least one cross

  5. Brain tissue segmentation in 4D CT using voxel classification

    NASA Astrophysics Data System (ADS)

    van den Boom, R.; Oei, M. T. H.; Lafebre, S.; Oostveen, L. J.; Meijer, F. J. A.; Steens, S. C. A.; Prokop, M.; van Ginneken, B.; Manniesing, R.

    2012-02-01

    A method is proposed to segment anatomical regions of the brain from 4D computed tomography (CT) patient data. The method consists of a three-step voxel classification scheme, each step focusing on structures that are increasingly difficult to segment. The first step classifies air and bone, the second step classifies vessels and the third step classifies white matter, gray matter and cerebrospinal fluid. As features, the time-averaged intensity value and the temporal intensity change value were used. In each step, a k-Nearest-Neighbor classifier was used to classify the voxels. Training data was obtained by placing regions of interest in reconstructed 3D image data. The method has been applied to ten 4D CT cerebral patient datasets. A leave-one-out experiment showed consistent and accurate segmentation results.
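The scheme relies on a k-Nearest-Neighbor classifier over two features per voxel (time-averaged intensity and temporal intensity change). A minimal self-contained k-NN sketch (not the authors' code; feature extraction from the 4D volume is assumed to have been done already):

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=5):
    """Minimal k-NN classifier: for each test voxel's feature vector, take a
    majority vote over the labels of its k nearest training voxels."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    # Majority vote per test voxel.
    return np.array([np.bincount(v).argmax() for v in votes])
```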

  6. SU-E-J-241: Creation of Ventilation CT From Daily 4D CTs Or 4D Conebeam CTs Acquired During IGRT for Thoracic Cancers

    SciTech Connect

    Tai, A; Ahunbay, E; Li, X

    2014-06-01

    Purpose: To develop a method to create ventilation CTs from daily 4D CTs or 4D KV conebeam CTs (4DCBCT) acquired during image-guided radiation therapy (IGRT) for thoracic tumors, and to explore the potential for using the ventilation CTs as a means for early detection of lung injury during radiation treatment. Methods: 4DCT acquired using an in-room CT (CTVision, Siemens) and 4DCBCT acquired using the X-ray Volume Imaging (XVI) system (Infinity, Elekta) for representative lung cancer patients were analyzed. These 4D data sets were sorted into 10 phase images. A newly-available deformable image registration tool (ADMIRE, Elekta) was used to deform the phase images at the end of exhale (EE) to the phase images at the end of inhale (EI). The lung volumes at EI and EE were carefully contoured using an intensity-based auto-contour tool and then manually edited. The ventilation images were calculated from the variations of CT numbers of those voxels masked by the lung contour at EI between the registered phase images. Deformable image registration was also performed between the daily 4D images and the planning 4DCT, and the resulting deformation vector field (DVF) was used to deform the planning doses to the daily images by an in-house Matlab program. Results: The ventilation images were successfully created. The tidal volumes calculated using the ventilation images agree with those measured through the volume difference of the contours at EE and EI, indicating the accuracy of the ventilation images. An association between the delivered doses and the change of lung ventilation from the daily ventilation CTs was identified. Conclusions: A method to create ventilation CTs using daily 4DCTs or 4D KV conebeam CTs was developed and demonstrated.
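The ventilation image computation described (CT-number change between the registered EE and EI phases, masked by the EI lung contour) can be sketched as follows; this is only the intensity-change idea, and the abstract's exact formula may differ:

```python
import numpy as np

def ventilation_map(hu_ei, hu_ee_registered, lung_mask):
    """Ventilation surrogate from the change in CT numbers between the EI
    phase and the EE phase deformed onto it, within the lung contour at EI.
    (Illustrative sketch, not the authors' exact calculation.)"""
    vmap = np.zeros_like(hu_ei, dtype=float)
    diff = hu_ee_registered - hu_ei      # air influx lowers HU at inhale
    vmap[lung_mask] = diff[lung_mask]
    return vmap
```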

  7. Actively triggered 4d cone-beam CT acquisition

    SciTech Connect

    Fast, Martin F.; Wisotzky, Eric; Oelfke, Uwe; Nill, Simeon

    2013-09-15

    Purpose: 4d cone-beam computed tomography (CBCT) scans are usually reconstructed by extracting the motion information from the 2d projections or an external surrogate signal, and binning the individual projections into multiple respiratory phases. In this “after-the-fact” binning approach, however, projections are unevenly distributed over respiratory phases resulting in inefficient utilization of imaging dose. To avoid excess dose in certain respiratory phases, and poor image quality due to a lack of projections in others, the authors have developed a novel 4d CBCT acquisition framework which actively triggers 2d projections based on the forward-predicted position of the tumor.Methods: The forward-prediction of the tumor position was independently established using either (i) an electromagnetic (EM) tracking system based on implanted EM-transponders which act as a surrogate for the tumor position, or (ii) an external motion sensor measuring the chest-wall displacement and correlating this external motion to the phase-shifted diaphragm motion derived from the acquired images. In order to avoid EM-induced artifacts in the imaging detector, the authors devised a simple but effective “Faraday” shielding cage. The authors demonstrated the feasibility of their acquisition strategy by scanning an anthropomorphic lung phantom moving on 1d or 2d sinusoidal trajectories.Results: With both tumor position devices, the authors were able to acquire 4d CBCTs free of motion blurring. For scans based on the EM tracking system, reconstruction artifacts stemming from the presence of the EM-array and the EM-transponders were greatly reduced using newly developed correction algorithms. By tuning the imaging frequency independently for each respiratory phase prior to acquisition, it was possible to harmonize the number of projections over respiratory phases. Depending on the breathing period (3.5 or 5 s) and the gantry rotation time (4 or 5 min), between ∼90 and 145

  8. Interactive animation of 4D performance capture.

    PubMed

    Casas, Dan; Tejera, Margara; Guillemaut, Jean-Yves; Hilton, Adrian

    2013-05-01

    A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced, which combines the realistic deformation of previous nonlinear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity. Four-dimensional parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance. PMID:23492379

  9. Nondipole Effects in Xe 4d Photoemission

    SciTech Connect

    Hemmers, O; Guillemin, R; Wolska, A; Lindle, D W; Rolles, D; Cheng, K T; Johnson, W R; Zhou, H L; Manson, S T

    2004-07-14

    We measured the nondipole parameters for the spin-orbit doublets Xe 4d{sub 5/2} and Xe 4d{sub 3/2} over a photon-energy range from 100 eV to 250 eV at beamline 8.0.1.3 of the Advanced Light Source at the Lawrence Berkeley National Laboratory. Significant nondipole effects are found at relatively low energies as a result of Cooper minima in dipole channels and interchannel coupling in quadrupole channels. Most importantly, sharp disagreement between experiment and theory, when otherwise excellent agreement was expected, has provided the first evidence of satellite two-electron quadrupole photoionization transitions, along with their crucial importance for a quantitatively accurate theory.

  10. Abdominal organ motion measured using 4D CT

    SciTech Connect

    Brandner, Edward D.; Wu, Andrew . E-mail: andrew.wu@jefferson.edu; Chen, Hungcheng; Heron, Dwight; Kalnicki, Shalom; Komanduri, Krishna; Gerszten, Kristina; Burton, Steve; Ahmed, Irfan; Shou, Zhenyu

    2006-06-01

    Purpose: To measure respiration-induced abdominal organ motion using four-dimensional computed tomography (4D CT) scanning and to examine the organ paths. Methods and Materials: During 4D CT scanning, consecutive CT images are acquired of the patient at each couch position. Simultaneously, the patient's respiratory pattern is recorded using an external marker block taped to the patient's abdomen. This pattern is used to retrospectively organize the CT images into multiple three-dimensional images, each representing one breathing phase. These images are analyzed to measure organ motion between each phase. The displacement from end expiration is compared to a displacement limit that represents acceptable dosimetric results (5 mm). Results: The organs measured in 13 patients were the liver, spleen, and left and right kidneys. Their average superior to inferior absolute displacements were 1.3 cm for the liver, 1.3 cm for the spleen, 1.1 cm for the left kidney, and 1.3 cm for the right kidney. Although the organ paths varied among patients, 5 mm of superior to inferior displacement from end expiration resulted in less than 5 mm of displacement in the other directions for 41 of 43 organs measured. Conclusions: Four-dimensional CT scanning can accurately measure abdominal organ motion throughout respiration. This information may result in greater organ sparing and planning target volume coverage.
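The centroid-based displacement measurement between respiratory phases can be sketched as follows (voxel spacing converts index units to mm; function names are our own):

```python
import numpy as np

def centroid_displacement(mask_ee, mask_phase, spacing=(1.0, 1.0, 1.0)):
    """Organ displacement (mm) between end-expiration and another phase,
    from the centroids of binary organ masks; `spacing` is voxel size in mm
    along each axis."""
    def centroid(m):
        return np.array(np.nonzero(m)).mean(axis=1)
    return (centroid(mask_phase) - centroid(mask_ee)) * np.asarray(spacing)
```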

  11. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  12. A 4D Hyperspherical Interpretation of q-Space

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Bendlin, Barbara B.; Alexander, Andrew L.

    2015-01-01

    3D q-space can be viewed as the surface of a 4D hypersphere. In this paper, we seek to develop a 4D hyperspherical interpretation of q-space by projecting it onto a hypersphere and subsequently modeling the q-space signal via 4D hyperspherical harmonics (HSH). Using this orthonormal basis, we derive several well-established q-space indices and numerically estimate the diffusion orientation distribution function (dODF). We also derive the integral transform describing the relationship between the diffusion signal and propagator on a hypersphere. Most importantly, we will demonstrate that for hybrid diffusion imaging (HYDI) acquisitions low order linear expansion of the HSH basis is sufficient to characterize diffusion in neural tissue. In fact, the HSH basis achieves comparable signal and better dODF reconstructions than other well-established methods, such as Bessel Fourier orientation reconstruction (BFOR), using fewer fitting parameters. All in all, this work provides a new way of looking at q-space. PMID:25624043

  13. Evaluation of a 4D cone-beam CT reconstruction approach using a simulation framework.

    PubMed

    Hartl, Alexander; Yaniv, Ziv

    2009-01-01

    Current image-guided navigation systems for thoracic and abdominal interventions utilize three-dimensional (3D) images acquired at breath-hold. As a result they can only provide guidance at a specific point in the respiratory cycle. The intervention is thus performed in a gated manner, with the physician advancing only when the patient is at the same respiratory phase in which the 3D image was acquired. To enable a more continuous workflow we propose to use 4D image data. We describe an approach to constructing a set of 4D images from a diagnostic CT acquired at breath-hold and a set of intraoperative cone-beam CT (CBCT) projection images acquired while the patient is freely breathing. Our approach is based on an initial reconstruction of a gated 4D CBCT data set. The 3D CBCT images for each respiratory phase are then non-rigidly registered to the diagnostic CT data. Finally the diagnostic CT is deformed based on the registration results, providing a 4D data set with sufficient quality for navigation purposes. In this work we evaluate the proposed reconstruction approach using a simulation framework. A 3D CBCT dataset of an anthropomorphic phantom is deformed using internal motion data acquired from an animal model to create a ground-truth 4D CBCT image. Simulated projection images are then created from the 4D image and the known CBCT scan parameters. Finally, the original 3D CBCT and the simulated X-ray images are used as input to our reconstruction method. The resulting 4D data set is then compared to the known ground truth by normalized cross-correlation (NCC). We show that the deformed diagnostic CTs are of better quality than the gated reconstructions, with a mean NCC value of 0.94 versus a mean of 0.81 for the reconstructions. PMID:19964143
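    The NCC figure of merit used above is straightforward to compute; a minimal sketch (function name illustrative, assuming equally shaped volumes):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped volumes.
    Returns 1.0 for identical (up to affine intensity scaling) volumes,
    -1.0 for perfectly anticorrelated ones."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()          # remove mean so intensity offsets cancel
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom)
```

    Because the mean is removed and the result is normalized, NCC is insensitive to global intensity offset and scale, which makes it a reasonable similarity measure across differently windowed CT volumes.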

  14. Geometric validation of self-gating k-space-sorted 4D-MRI vs 4D-CT using a respiratory motion phantom

    SciTech Connect

    Yue, Yong Yang, Wensha; McKenzie, Elizabeth; Tuli, Richard; Wallace, Robert; Fraass, Benedick; Fan, Zhaoyang; Pang, Jianing; Deng, Zixin; Li, Debiao

    2015-10-15

    Purpose: MRI is increasingly being used for radiotherapy planning, simulation, and in-treatment-room motion monitoring. To provide more detailed temporal and spatial MR data for these tasks, we have recently developed a novel self-gated (SG) MRI technique with the advantages of k-space phase sorting, high isotropic spatial resolution, and high temporal resolution. The current work describes the validation of this 4D-MRI technique using an MRI- and CT-compatible respiratory motion phantom and comparison to 4D-CT. Methods: The 4D-MRI sequence is based on a spoiled gradient echo-based 3D projection reconstruction sequence with self-gating for 4D-MRI at 3 T. Respiratory phase is resolved by using SG k-space lines as the motion surrogate. 4D-MRI images are reconstructed into ten temporal bins with spatial resolution 1.56 × 1.56 × 1.56 mm³. An MRI- and CT-compatible phantom was designed to validate the performance of the 4D-MRI sequence and 4D-CT imaging. A spherical target (diameter 23 mm, volume 6.37 ml) filled with high-concentration gadolinium (Gd) gel is embedded in a plastic box (35 × 40 × 63 mm³) and stabilized with low-concentration Gd gel. The phantom, driven by an air pump, is able to produce human-type breathing patterns between 4 and 30 respiratory cycles/min. 4D-CT of the phantom was acquired in cine mode and reconstructed into ten phases with slice thickness 1.25 mm. The 4D image sets were imported into treatment planning software for target contouring. The geometrical accuracy of the 4D MRI and CT images was quantified using target volume, flattening, and eccentricity. The target motion was measured by tracking the centroids of the spheres in each individual phase. Motion ground-truth was obtained from input signals and real-time video recordings. Results: The dynamic phantom was operated at four respiratory rate (RR) settings (6, 10, 15, and 20 cycles/min) and was scanned with 4D-MRI and 4D-CT. 4D-CT images have target

  15. 4D VMAT, gated VMAT, and 3D VMAT for stereotactic body radiation therapy in lung

    NASA Astrophysics Data System (ADS)

    Chin, E.; Loewen, S. K.; Nichol, A.; Otto, K.

    2013-02-01

    Four-dimensional volumetric modulated arc therapy (4D VMAT) is a treatment strategy for lung cancers that aims to exploit relative target and tissue motion to improve organ-at-risk (OAR) sparing. The algorithm incorporates the entire patient respiratory cycle, using 4D CT data, into the optimization process. Resulting treatment plans synchronize the delivery of each beam aperture to a specific phase of target motion. Stereotactic body radiation therapy treatment plans for 4D VMAT, gated VMAT, and 3D VMAT were generated for three patients with non-small cell lung cancer. Tumour motion ranged from 1.4 to 3.4 cm. The dose and fractionation scheme was 48 Gy in four fractions. A B-spline transformation model registered the 4D CT images. 4D dose volume histograms (4D DVH) were calculated from the total dose accumulated at maximum exhalation. For the majority of OARs, gated VMAT achieved the most radiation sparing, but treatment times were 77-148% longer than 3D VMAT. 4D VMAT plan quality was comparable to gated VMAT, but treatment times were only 11-25% longer than 3D VMAT. 4D VMAT's improvement of healthy tissue sparing can allow for further dose escalation. Future studies could potentially adapt 4D VMAT to irregular patient breathing patterns.
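    The 4D dose-accumulation step can be sketched as warping each phase dose onto the reference (maximum-exhalation) grid and summing, then reading a cumulative DVH off the total. Nearest-neighbour warping and the function names below are illustrative simplifications of the B-spline deformable registration actually used:

```python
import numpy as np

def accumulate_dose(phase_doses, disp_fields):
    """Warp each phase dose onto the reference grid (nearest-neighbour)
    and sum, giving the total dose at the reference phase.
    phase_doses: list of 3D arrays; disp_fields: list of (3, *shape)
    voxel-displacement arrays mapping reference voxels into each phase."""
    shape = phase_doses[0].shape
    grid = np.indices(shape).astype(float)
    total = np.zeros(shape)
    for dose, disp in zip(phase_doses, disp_fields):
        src = np.rint(grid + disp).astype(int)   # pull-back sample points
        for ax, n in enumerate(shape):
            np.clip(src[ax], 0, n - 1, out=src[ax])
        total += dose[tuple(src)]
    return total

def dvh(dose, d_levels):
    """Cumulative DVH: fraction of voxels receiving at least each dose level."""
    dose = dose.ravel()
    return np.array([(dose >= d).mean() for d in d_levels])
```

    With identity displacement fields this degenerates to a plain per-voxel sum of the phase doses, which is a useful sanity check.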

  16. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external sources can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these statistical properties. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation-maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
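    The mixture-plus-EM shrinkage described above can be sketched directly on a vector of wavelet detail coefficients. The Gaussian (noise) / Laplacian (detail) mixture and the posterior-probability shrinkage follow the abstract; the initialization and iteration count below are illustrative guesses:

```python
import numpy as np

def em_shrink(w, iters=50):
    """EM fit of a two-component mixture (zero-mean Gaussian 'noise' +
    zero-mean Laplacian 'detail') to wavelet coefficients w, then
    shrink each coefficient by its posterior probability of being detail."""
    w = np.asarray(w, dtype=float)
    pi = 0.5                          # noise-component weight
    sigma = w.std() / 2 + 1e-12       # Gaussian (noise) std
    b = w.std() / 2 + 1e-12           # Laplacian (detail) scale
    for _ in range(iters):
        # E-step: responsibility of the noise component for each coefficient
        g = pi * np.exp(-0.5 * (w / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        l = (1 - pi) * np.exp(-np.abs(w) / b) / (2 * b)
        r = g / (g + l + 1e-300)
        # M-step: update mixture weight and the two scale parameters
        pi = r.mean()
        sigma = np.sqrt((r * w ** 2).sum() / (r.sum() + 1e-300))
        b = ((1 - r) * np.abs(w)).sum() / ((1 - r).sum() + 1e-300)
    return w * (1 - r)                # keep coefficients likely to be detail
```

    Note that no separate noise-variance estimator is needed: σ emerges from the EM iterations, which is the point the abstract makes.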

  17. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external sources can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these statistical properties. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation-maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  18. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy.

    PubMed

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies aimed at recovering the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, the advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra

  19. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies aimed at recovering the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, the advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra

  20. Parallel Wavefront Analysis for a 4D Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time, and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
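    The controller/worker pattern described above can be sketched with a local worker pool standing in for the networked 4Sight instances. The per-frame analysis is a placeholder (4Sight's processing is proprietary), and the function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_frame(frame):
    """Stand-in for the per-image analysis step (placeholder maths;
    the real 4Sight phase processing is proprietary)."""
    return float(np.abs(np.fft.fft2(frame)).max())

def analyse_capture(frames, workers=4):
    """Controller pattern from the abstract: farm each captured frame out
    to a worker, then collate the per-frame results into one measurement.
    (The real system distributes across networked computers; a local
    thread pool stands in here.)"""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_frame, frames))
    return float(np.mean(results))
```

    The key design point is the same as in the abstract: capture and analysis are decoupled, so the instrument is never idle waiting for processing to finish.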

  1. Live 4D optical coherence tomography for early embryonic mouse cardiac phenotyping

    NASA Astrophysics Data System (ADS)

    Lopez, Andrew L.; Wang, Shang; Larin, Kirill V.; Overbeek, Paul A.; Larina, Irina V.

    2016-03-01

    Studying embryonic mouse development is important for our understanding of normal human embryogenesis and the underlying causes of congenital defects. Our research focuses on imaging early development in the mouse embryo to specifically understand cardiovascular development using optical coherence tomography (OCT). We have previously developed imaging approaches that combine static embryo culture, OCT imaging and advanced image processing to visualize whole live mouse embryos and obtain 4D (3D + time) cardiodynamic datasets with cellular resolution. Here, we present a study using 4D OCT for dynamic imaging of the early embryonic heart in live mouse embryos to assess mutant cardiac phenotypes during development, including a cardiac looping defect. Our results indicate that the live 4D OCT imaging approach is an efficient phenotyping tool that can reveal structural and functional cardiac defects at very early stages. Further studies integrating live embryonic cardiodynamic phenotyping with molecular and genetic approaches in mouse mutants will help to elucidate the underlying signaling defects.

  2. Active origami by 4D printing

    NASA Astrophysics Data System (ADS)

    Ge, Qi; Dunn, Conner K.; Qi, H. Jerry; Dunn, Martin L.

    2014-09-01

    Recent advances in three dimensional (3D) printing technology that allow multiple materials to be printed within each layer enable the creation of materials and components with precisely controlled heterogeneous microstructures. In addition, active materials, such as shape memory polymers, can be printed to create an active microstructure within a solid. These active materials can subsequently be activated in a controlled manner to change the shape or configuration of the solid in response to an environmental stimulus. This has been termed 4D printing, with the 4th dimension being the time-dependent shape change after the printing. In this paper, we advance the 4D printing concept to the design and fabrication of active origami, where a flat sheet automatically folds into a complicated 3D component. Here we print active composites with shape memory polymer fibers precisely printed in an elastomeric matrix and use them as intelligent active hinges to enable origami folding patterns. We develop a theoretical model to provide guidance in selecting design parameters such as fiber dimensions, hinge length, and programming strains and temperature. Using the model, we design and fabricate several active origami components that assemble from flat polymer sheets, including a box, a pyramid, and two origami airplanes. In addition, we directly print a 3D box with active composite hinges and program it to assume a temporary flat shape that subsequently recovers to the 3D box shape on demand.

  3. 4D Proton treatment planning strategy for mobile lung tumors

    SciTech Connect

    Kang Yixiu; Zhang Xiaodong; Chang, Joe Y.; Wang He; Wei Xiong; Liao Zhongxing; Komaki, Ritsuko; Cox, James D.; Balter, Peter A.; Liu, Helen; Zhu, X. Ronald; Mohan, Radhe; Dong Lei . E-mail: ldong@mdanderson.org

    2007-03-01

    Purpose: To investigate strategies for designing compensator-based 3D proton treatment plans for mobile lung tumors using four-dimensional computed tomography (4DCT) images. Methods and Materials: Four-dimensional CT sets for 10 lung cancer patients were used in this study. The internal gross tumor volume (IGTV) was obtained by combining the tumor volumes at different phases of the respiratory cycle. For each patient, we evaluated four planning strategies based on the following dose calculations: (1) the average (AVE) CT; (2) the free-breathing (FB) CT; (3) the maximum intensity projection (MIP) CT; and (4) the AVE CT in which the CT voxel values inside the IGTV were replaced by a constant density (AVE_RIGTV). For each strategy, the resulting cumulative dose distribution in a respiratory cycle was determined using a deformable image registration method. Results: There were dosimetric differences between the apparent dose distribution, calculated on a single CT dataset, and the motion-corrected 4D dose distribution, calculated by combining dose distributions delivered to each phase of the 4DCT. The AVE_RIGTV plan using a 1-cm smearing parameter had the best overall target coverage and critical structure sparing. The MIP plan approach resulted in an unnecessarily large treatment volume. The AVE and FB plans using 1-cm smearing did not provide adequate 4D target coverage in all patients. By using a larger smearing value, adequate 4D target coverage could be achieved; however, critical organ doses were increased. Conclusion: The AVE_RIGTV approach is an effective strategy for designing proton treatment plans for mobile lung tumors.

  4. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  5. Functional organization of the human 4D Nucleome

    PubMed Central

    Chen, Haiming; Chen, Jie; Muir, Lindsey A.; Ronquist, Scott; Meixner, Walter; Ljungman, Mats; Ried, Thomas; Smale, Stephen; Rajapakse, Indika

    2015-01-01

    The 4D organization of the interphase nucleus, or the 4D Nucleome (4DN), reflects a dynamical interaction between 3D genome structure and function and its relationship to phenotype. We present initial analyses of the human 4DN, capturing genome-wide structure using chromosome conformation capture and 3D imaging, and function using RNA-sequencing. We introduce a quantitative index that measures underlying topological stability of a genomic region. Our results show that structural features of genomic regions correlate with function with surprising persistence over time. Furthermore, constructing genome-wide gene-level contact maps aided in identifying gene pairs with high potential for coregulation and colocalization in a manner consistent with expression via transcription factories. We additionally use 2D phase planes to visualize patterns in 4DN data. Finally, we evaluated gene pairs within a circadian gene module using 3D imaging, and found periodicity in the movement of clock circadian regulator and period circadian clock 2 relative to each other that followed a circadian rhythm and entrained with their expression. PMID:26080430

  6. Complete valvular heart apparatus model from 4D cardiac CT.

    PubMed

    Grbic, Sasa; Ionasec, Razvan; Vitanovski, Dime; Voigt, Ingmar; Wang, Yang; Georgescu, Bogdan; Navab, Nassir; Comaniciu, Dorin

    2012-07-01

    The cardiac valvular apparatus, composed of the aortic, mitral, pulmonary and tricuspid valves, is an essential part of the anatomical, functional and hemodynamic characteristics of the heart and the cardiovascular system as a whole. Valvular heart diseases often involve multiple dysfunctions and require joint assessment and therapy of the valves. In this paper, we propose a complete and modular patient-specific model of the cardiac valvular apparatus estimated from 4D cardiac CT data. A new constrained Multi-linear Shape Model (cMSM), conditioned on anatomical measurements, is introduced to represent the complex spatio-temporal variation of the heart valves. The cMSM is exploited within a learning-based framework to efficiently estimate the patient-specific valve parameters from cine images. Experiments on 64 4D cardiac CT studies demonstrate the performance and clinical potential of the proposed method. Our method enables automatic quantitative evaluation of the complete valvular apparatus based on non-invasive imaging techniques. In conjunction with existing patient-specific chamber models, the presented valvular model enables personalized computational modeling and realistic simulation of the entire cardiac system. PMID:22481023

  7. Functional organization of the human 4D Nucleome.

    PubMed

    Chen, Haiming; Chen, Jie; Muir, Lindsey A; Ronquist, Scott; Meixner, Walter; Ljungman, Mats; Ried, Thomas; Smale, Stephen; Rajapakse, Indika

    2015-06-30

    The 4D organization of the interphase nucleus, or the 4D Nucleome (4DN), reflects a dynamical interaction between 3D genome structure and function and its relationship to phenotype. We present initial analyses of the human 4DN, capturing genome-wide structure using chromosome conformation capture and 3D imaging, and function using RNA-sequencing. We introduce a quantitative index that measures underlying topological stability of a genomic region. Our results show that structural features of genomic regions correlate with function with surprising persistence over time. Furthermore, constructing genome-wide gene-level contact maps aided in identifying gene pairs with high potential for coregulation and colocalization in a manner consistent with expression via transcription factories. We additionally use 2D phase planes to visualize patterns in 4DN data. Finally, we evaluated gene pairs within a circadian gene module using 3D imaging, and found periodicity in the movement of clock circadian regulator and period circadian clock 2 relative to each other that followed a circadian rhythm and entrained with their expression. PMID:26080430

  8. 4-D XRD for strain in many grains using triangulation

    SciTech Connect

    Bale, Hrishikesh A.; Hanan, Jay C.; Tamura, Nobumichi

    2006-12-31

    Determination of the strains in a polycrystalline material using 4-D XRD reveals sub-grain and grain-to-grain behavior as a function of stress. Here 4-D XRD involves an experimental procedure using polychromatic micro-beam X-radiation (micro-Laue) to characterize polycrystalline materials in spatial location as well as with increasing stress. The in-situ tensile loading experiment measured strain in a model aluminum-sapphire metal matrix composite using the Advanced Light Source, Beamline 7.3.3. Micro-Laue resolves individual grains in the polycrystalline matrix. Results obtained from a list of grains sorted by crystallographic orientation depict the strain states within and among individual grains. Locating the grain positions in the plane perpendicular to the incident beam is trivial. However, determining the exact location of grains within a 3-D space is challenging. Determining the depth of the grains within the matrix (along the beam direction) involved a triangulation method tracing individual rays that produce spots on the CCD back to the point of origin. Triangulation was experimentally implemented by simulating a 3-D detector capturing multiple diffraction images while increasing the camera-to-sample distance. Hence by observing the intersection of rays from multiple spots belonging to the corresponding grain, depth is calculated. Depth resolution is a function of the number of images collected, the grain-to-beam-size ratio, and the pixel resolution of the CCD. The 4D XRD method provides grain morphologies, strain behavior of each grain, and interactions of the matrix grains with each other and the centrally located single crystal fiber.
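    The triangulation step reduces to simple ray geometry: a diffraction spot observed at two detector distances fixes the ray's direction, and intersecting that ray with the known lateral grain position gives the depth. A 2D sketch, with illustrative variable names:

```python
def grain_depth(x0, p1, L1, p2, L2):
    """Back-project one diffraction spot seen at two detector distances.
    x0: known lateral grain position (perpendicular to the beam);
    (p1, L1), (p2, L2): spot lateral positions at two detector distances.
    Returns the grain depth along the beam (2D sketch of the triangulation)."""
    t = (p2 - p1) / (L2 - L1)   # tangent of the diffraction ray angle
    return L1 - (p1 - x0) / t   # depth where the ray meets x = x0
```

    With more than two detector positions the same ray can be fit in a least-squares sense, which is how collecting more images improves depth resolution.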

  9. Soft Route to 4D Tomography

    NASA Astrophysics Data System (ADS)

    Taillandier-Thomas, Thibault; Roux, Stéphane; Hild, François

    2016-07-01

    Based on the assumption that the time evolution of a sample observed by computed tomography requires far fewer parameters than the definition of the microstructure itself, it is proposed to reconstruct these changes based on the initial state (using computed tomography) and very few radiographs acquired at fixed intervals of time. This Letter presents a proof of concept that, for a fatigue-cracked sample, the kinematics can be tracked from no more than two radiographs in situations where a complete 3D view would require several hundred radiographs. This two-orders-of-magnitude gain opens the way to a "computed" 4D tomography, which complements the recent progress achieved in fast or ultrafast computed tomography based on beam brightness, detector sensitivity, and signal acquisition technologies.

  10. ICT4D: A Computer Science Perspective

    NASA Astrophysics Data System (ADS)

    Sutinen, Erkki; Tedre, Matti

    The term ICT4D refers to the opportunities of Information and Communication Technology (ICT) as an agent of development. Research in that field is often focused on evaluating the feasibility of existing technologies, mostly of Western or Far East Asian origin, in the context of developing regions. A computer science perspective is complementary to that agenda. The computer science perspective focuses on exploring the resources, or inputs, of a particular context and on basing the design of a technical intervention on the available resources, so that the output makes a difference in the development context. The modus operandi of computer science, construction, interacts with evaluation and exploration practices. An analysis of a contextualized information technology curriculum of Tumaini University in southern Tanzania shows the potential of the computer science perspective for designing meaningful information and communication technology for a developing region.

  11. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM can remove noise as effectively as full NLM, while the running time can be reduced to 1/20 of NLM's. PMID:27114338
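    A minimal 2D sketch of the random-sampling idea (parameter names and values are illustrative; the paper works on 3D MRI and adds structure-tensor-guided sampling, which is omitted here):

```python
import numpy as np

def snlm_denoise(img, patch=3, search=7, h=0.1, ratio=0.05, seed=0):
    """Non-local means with a randomly sampled search window: instead of
    rastering over all offsets, only a random fraction `ratio` of the
    search window is used to compute similarity weights."""
    rng = np.random.default_rng(seed)
    pad = search // 2 + patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    offsets = [(dy, dx) for dy in range(-(search // 2), search // 2 + 1)
                        for dx in range(-(search // 2), search // 2 + 1)]
    k = max(1, int(ratio * len(offsets)))   # sampled subset size
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch // 2: ci + patch // 2 + 1,
                         cj - patch // 2: cj + patch // 2 + 1]
            picks = rng.choice(len(offsets), size=k, replace=False)
            wsum = vsum = 0.0
            for idx in picks:
                dy, dx = offsets[idx]
                ni, nj = ci + dy, cj + dx
                nb = padded[ni - patch // 2: ni + patch // 2 + 1,
                            nj - patch // 2: nj + patch // 2 + 1]
                w = np.exp(-np.mean((ref - nb) ** 2) / h ** 2)
                wsum += w
                vsum += w * padded[ni, nj]
            out[i, j] = vsum / wsum
    return out
```

    The inner loop cost drops from the full window size to k comparisons per voxel, which is the source of the reported speed-up.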

  12. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice. Results: The local noise level estimate matches the noise distribution determined from multiple repetitive scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
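    The idea of steering NLM strength by a local noise map can be sketched in the image domain. Note the abstract's actual map comes from an analytical forward-projection noise model; the simple high-pass/MAD estimator below (function name illustrative) is only a stand-in for that step:

```python
import numpy as np

def local_noise_map(img, block=8):
    """Blockwise noise-level estimate (a simple image-domain stand-in for
    the projection-based analytical noise map in the abstract): robust
    std of the high-pass residual in each block."""
    # High-pass residual: pixel minus the mean of its 4 neighbours
    res = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                        + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    h, w = img.shape
    sigma = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            blk = res[bi * block:(bi + 1) * block,
                      bj * block:(bj + 1) * block]
            # 1.4826 * MAD is a robust estimate of a Gaussian std
            sigma[bi, bj] = 1.4826 * np.median(np.abs(blk - np.median(blk)))
    return sigma
```

    The adaptive filter would then set the NLM smoothing strength at each location proportional to this map, smoothing harder where the estimated noise is higher.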

  13. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    NASA Astrophysics Data System (ADS)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space- and ground-based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has serious drawbacks: it substantially reduces spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.
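    The averaging bias mentioned above follows from Jensen's inequality: applying a non-linear retrieval to noisy profiles and then averaging is not the same as averaging (de-noising) first and then retrieving. A toy demonstration, with a logarithm standing in for a real lidar inversion (the actual inversion in the paper is different):

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 50.0
# Simulated noisy photon counts for one range gate over many shots
counts = rng.poisson(true_rate, size=100_000).astype(float)
counts[counts == 0] = 0.5            # guard against log(0)

retrieve = np.log                    # hypothetical non-linear inversion
biased = retrieve(counts).mean()     # invert each noisy profile, then average
unbiased = retrieve(counts.mean())   # average (de-noise) first, then invert

# For a concave retrieval, E[f(X)] < f(E[X]) (Jensen's inequality)
assert biased < unbiased
```

    De-noising before inversion, as the paper proposes, avoids this systematic offset while retaining spatial structure.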

  14. Opening the Black Box of ICT4D: Advancing Our Understanding of ICT4D Partnerships

    ERIC Educational Resources Information Center

    Park, Sung Jin

    2013-01-01

    The term, Information and Communication Technologies for Development (ICT4D), pertains to programs or projects that strategically use ICTs (e.g. mobile phones, computers, and the internet) as a means toward the socio-economic betterment for the poor in developing contexts. Gaining the political and financial support of the international community…

  15. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart): they may be replayed, they may be analyzed for spectral and frequency content, and frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise, and noise generated by contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods, including wavelet de-noising, wavelet packet de-noising, and averaging, can be employed to de-noise the PCG. This study examines and compares these de-noising methods. It addresses which de-noising method gives the better SNR, how much signal information is lost as a result of the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
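    As a concrete illustration of the wavelet de-noising being compared, here is a generic Haar soft-thresholding sketch; the study itself evaluates several wavelet families, wavelet packets, and averaging, and the universal threshold used below is a common default rather than the paper's choice.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform: approximation, detail."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(signal, levels=4):
    """Soft-threshold Haar wavelet de-noising; signal length must be a
    multiple of 2**levels. The reported optimum of 3-5 decomposition
    levels motivates the default."""
    a, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Noise estimate from finest-scale coefficients (median absolute deviation)
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

    A real PCG pipeline would follow this with the Hilbert transform to extract the instantaneous envelope and phase of the de-noised signal.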

  16. Respiratory regularity gated 4D CT acquisition: concepts and proof of principle.

    PubMed

    Keall, P J; Vedam, S S; George, R; Williamson, J F

    2007-09-01

    Four-dimensional CT images are generally sorted through a post-acquisition procedure correlating images with a time-synchronized external respiration signal. The patient's ability to maintain reproducible respiration is the limiting factor during 4D CT, where artifacts occur in approximately 85% of scans with current technology. To reduce these artifacts and their subsequent effects on radiotherapy planning, a method for improved 4D CT image acquisition has been proposed that gates 4D CT acquisition based on real-time monitoring of the respiration signal. The respiration signal and CT data acquisition are linked such that data from irregular breathing cycles, which cause artifacts, are simply not acquired. A proof-of-principle application of the respiratory regularity gated 4D CT method using patient respiratory signals demonstrates the potential of this method to reduce artifacts currently found in 4D CT scans. Numerical simulations indicate a potential reduction in motion within a respiratory phase bin of 20-40%, depending on the tolerances chosen. Additional advantages of the proposed method are dose reduction, by eliminating unnecessary oversampling, and obviating the need for post-processing to create the 4D CT data set. PMID:18044305
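    The per-cycle gating decision — acquire CT data only during regular breathing cycles — can be sketched as a simple tolerance test. The period/amplitude criterion and the 20% tolerance below are hypothetical illustrations; the paper's actual gating logic may differ.

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    period: float      # breathing cycle duration, seconds
    amplitude: float   # peak-to-trough displacement, arbitrary units

def regularity_gate(cycles, ref_period, ref_amplitude, tol=0.2):
    """Return True for cycles during which acquisition would be enabled:
    period and amplitude both within a fractional tolerance of a
    reference (e.g. training-session) breathing cycle."""
    return [abs(c.period - ref_period) <= tol * ref_period and
            abs(c.amplitude - ref_amplitude) <= tol * ref_amplitude
            for c in cycles]
```

    Rejected cycles are simply re-acquired on a later, regular breath, which is what eliminates both the sorting artifacts and the post-processing step.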

  17. Perspective: 4D ultrafast electron microscopy--Evolutions and revolutions.

    PubMed

    Shorokhov, Dmitry; Zewail, Ahmed H

    2016-02-28

    In this Perspective, the evolutionary and revolutionary developments of ultrafast electron imaging are overviewed with focus on the "single-electron concept" for probing methodology. From the first electron microscope of Knoll and Ruska [Z. Phys. 78, 318 (1932)], constructed in the 1930s, to aberration-corrected instruments and on, to four-dimensional ultrafast electron microscopy (4D UEM), the developments over eight decades have transformed humans' scope of visualization. The changes in the length and time scales involved are unimaginable, beginning with the micrometer and second domains, and now reaching the space and time dimensions of atoms in matter. With these advances, it has become possible to follow the elementary structural dynamics as it unfolds in real time and to provide the means for visualizing materials behavior and biological functions. The aim is to understand emergent phenomena in complex systems, and 4D UEM is now central for the visualization of elementary processes involved, as illustrated here with examples from past achievements and future outlook. PMID:26931672

  18. Perspective: 4D ultrafast electron microscopy—Evolutions and revolutions

    NASA Astrophysics Data System (ADS)

    Shorokhov, Dmitry; Zewail, Ahmed H.

    2016-02-01

    In this Perspective, the evolutionary and revolutionary developments of ultrafast electron imaging are overviewed with focus on the "single-electron concept" for probing methodology. From the first electron microscope of Knoll and Ruska [Z. Phys. 78, 318 (1932)], constructed in the 1930s, to aberration-corrected instruments and on, to four-dimensional ultrafast electron microscopy (4D UEM), the developments over eight decades have transformed humans' scope of visualization. The changes in the length and time scales involved are unimaginable, beginning with the micrometer and second domains, and now reaching the space and time dimensions of atoms in matter. With these advances, it has become possible to follow the elementary structural dynamics as it unfolds in real time and to provide the means for visualizing materials behavior and biological functions. The aim is to understand emergent phenomena in complex systems, and 4D UEM is now central for the visualization of elementary processes involved, as illustrated here with examples from past achievements and future outlook.

  19. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images with high input SNR, acquired by the Hyperion hyperspectral imager aboard the Earth Observing-1 (EO-1) satellite, are used as test images. Denoising performance is characterized by the improvement in PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (the probability that a coefficient magnitude falls below a certain threshold) are used to predict denoising efficiency via curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and the prediction is shown to hold across these channel counts.
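    A minimal 2D sketch of the hard-thresholding DCT denoising family analyzed here may be useful (the study itself applies a 3D DCT across groups of channels; the 2.7σ threshold is a common choice for such filters, not necessarily the paper's):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (row k = frequency k), so inverse = transpose."""
    j, k = np.arange(n)[None, :], np.arange(n)[:, None]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M

def dct_denoise(img, sigma, beta=2.7, block=8):
    """Blockwise hard-thresholding DCT denoiser for additive white Gaussian
    noise; image dimensions are assumed to be multiples of `block`."""
    M = dct_matrix(block)
    t = beta * sigma                                   # hard threshold
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = M @ img[i:i+block, j:j+block] @ M.T    # forward 2D DCT
            dc = c[0, 0]
            c[np.abs(c) < t] = 0.0                     # zero small (mostly noise) coefficients
            c[0, 0] = dc                               # always keep the DC term
            out[i:i+block, j:j+block] = M.T @ c @ M    # inverse 2D DCT
    return out
```

    The paper's efficiency predictor is built from simple statistics of exactly these transform coefficients, e.g. the fraction with magnitude below the threshold t.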

  20. 4D multiple-cathode ultrafast electron microscopy

    PubMed Central

    Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H.

    2014-01-01

    Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging. PMID:25006261

  1. Electrocardiogram signal denoising based on a new improved wavelet thresholding.

    PubMed

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are used by physicians for the interpretation and identification of physiological and pathological phenomena. In practice, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during gathering and recording. As ECG signals are non-stationary physiological signals, the wavelet transform is an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-based thresholding scheme, is adopted in processing ECG signals. Compared with hard/soft thresholding and other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the proposed method is employed. PMID:27587134
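    The abstract does not reproduce the exact sigmoid thresholding function; the following is one plausible form with the stated properties — continuous at ±T (unlike hard thresholding) and without soft thresholding's fixed deviation T on large coefficients. Both the functional form and the steepness parameter k are assumptions.

```python
import numpy as np

def sigmoid_threshold(w, T, k=10.0):
    """Smoothly gate wavelet coefficients w: values well below the
    threshold T are suppressed toward zero, values well above T pass
    nearly unchanged, and the transition at |w| = T is continuous.
    Hypothetical stand-in for the paper's sigmoid-based function."""
    w = np.asarray(w, dtype=float)
    return w / (1.0 + np.exp(-k * (np.abs(w) - T)))
```

    As k grows the function approaches hard thresholding; smaller k gives a gentler compromise between the hard and soft rules.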

  2. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are used by physicians for the interpretation and identification of physiological and pathological phenomena. In practice, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during gathering and recording. As ECG signals are non-stationary physiological signals, the wavelet transform is an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-based thresholding scheme, is adopted in processing ECG signals. Compared with hard/soft thresholding and other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the proposed method is employed.

  3. A procedure for denoising dual-axis swallowing accelerometry signals.

    PubMed

    Sejdić, Ervin; Steele, Catriona M; Chau, Tom

    2010-01-01

    Dual-axis swallowing accelerometry is an emerging tool for the assessment of dysphagia (swallowing difficulties). These signals, however, can be very noisy as a result of physiological and motion artifacts. In this note, we propose a novel scheme for denoising those signals, i.e., a computationally efficient search for the optimal denoising threshold within a reduced wavelet subspace. To determine a viable subspace, the algorithm relies on the minimum value of the estimated upper bound for the reconstruction error. A numerical analysis of the proposed scheme using synthetic test signals demonstrated that it is computationally more efficient than minimum noiseless description length (MNDL)-based denoising, and that it yields smaller reconstruction errors than the MNDL, SURE, and Donoho denoising methods. When applied to dual-axis swallowing accelerometry signals, the proposed scheme exhibits improved performance for dry, wet, and wet chin tuck swallows. These results are important for the further development of medical devices based on dual-axis swallowing accelerometry signals. PMID:19940343

  4. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 3 2013-04-01 2013-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  5. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 4 2014-04-01 2014-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  6. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  7. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 3 2012-04-01 2012-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  8. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 3 2011-04-01 2011-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  9. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2005 CFR

    2005-04-01

    ... 17 Commodity and Securities Exchanges 3 2005-04-01 2005-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  10. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2000 CFR

    2000-04-01

    ... 17 Commodity and Securities Exchanges 3 2000-04-01 2000-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a) Each application for an order under section 304(d)...

  11. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2015 CFR

    2015-04-01

    ... 17 Commodity and Securities Exchanges 4 2015-04-01 2015-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  12. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed, inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it may be hard for standard statistical inference techniques to arrive at good general solutions, and likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction a very efficient technique, Independent Component Analysis (ICA), is used. The numerical results are very promising and lead to very good gene feature selection, owing to the signal separation power of the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy that combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
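    The abstract gives no implementation details; below is a minimal FastICA (tanh contrast, symmetric decorrelation) of the generic kind used for such denoising and dimensionality-reduction work — a sketch, not the authors' code. Rows of X play the role of observed mixtures (e.g. expression profiles).

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA. X: (observations, samples) array.
    Returns the estimated independent components and the unmixing matrix
    acting on the centered data."""
    X = X - X.mean(axis=1, keepdims=True)          # center
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    K = E @ np.diag(1.0 / np.sqrt(d)) @ E.T        # whitening matrix
    Z = K @ X                                      # whitened data
    n = Z.shape[0]
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)                         # contrast nonlinearity
        Gp = 1.0 - G ** 2                          # its derivative
        W = G @ Z.T / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)                # symmetric decorrelation:
        W = u @ vt                                 # W <- (W W^T)^(-1/2) W
    return W @ Z, W @ K
```

    Components are recovered only up to sign, scale, and permutation, which is why downstream gene-selection steps typically rank components by some information criterion rather than by index.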