Science.gov

Sample records for 4d image denoising

  1. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, candidate noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters yields excellent denoising performance on realistic 3D CBCT images.

  2. Multicomponent MR Image Denoising

    PubMed Central

    Manjón, José V.; Thacker, Neil A.; Lull, Juan J.; Garcia-Martí, Gracian; Martí-Bonmatí, Luís; Robles, Montserrat

    2009-01-01

    Magnetic resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into account the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local principal component analysis decomposition as a postprocessing step to remove further noise by exploiting information not only in the spatial domain but also in the intercomponent domain, resulting in higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed. PMID:19888431

  3. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics, and medicine. There has been a dramatic increase in the variety, availability, and resolution of medical imaging devices over the last half century. For proper medical imaging, highly trained technicians and clinicians are needed to correctly extract clinically pertinent information from medical data. Artificial systems must be designed to analyze medical data sets, either partially or even fully automatically, to fulfil this need. For this purpose there is extensive ongoing research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove these artefacts to obtain reliable results. Among the many methods for denoising images, two, wavelets and shearlets, are applied to mammography images in this paper. Comparing the two methods, shearlets give better results for denoising such data.

  4. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving the critical clinically relevant structures.

  5. Cardiac 4D Ultrasound Imaging

    NASA Astrophysics Data System (ADS)

    D'hooge, Jan

    Volumetric cardiac ultrasound imaging has steadily evolved over the last 20 years from an electrocardiography (ECG) gated imaging technique to a true real-time imaging modality. Although the clinical use of echocardiography is still to a large extent based on conventional 2D ultrasound imaging, it can be anticipated that further developments in image quality, data visualization and interaction, and image quantification for three-dimensional cardiac ultrasound will gradually make volumetric ultrasound the modality of choice. In this chapter, an overview is given of the technological developments that allow for volumetric imaging of the beating heart by ultrasound.

  6. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue in applying the bilateral filter is the selection of its parameters, which affect the results significantly. This paper makes two main contributions. The first is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second is an extension of the bilateral filter: the multiresolution bilateral filter, in which bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective at eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
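
    The core of the bilateral filter is the product of a spatial (domain) Gaussian and an intensity (range) Gaussian. A minimal brute-force sketch in NumPy (the parameter values here are illustrative defaults, not the optimized settings studied in the paper):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: each output pixel is a weighted mean
    with weights = spatial Gaussian x intensity-range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # domain kernel
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weights pixels across an intensity edge.
            rngk = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r**2))
            weights = spatial * rngk
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

rng = np.random.default_rng(0)
clean = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])   # step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = bilateral_filter(noisy)
```

    On the noisy step edge above, the flat sides are smoothed while the edge itself survives, because the range kernel assigns near-zero weight to pixels on the other side of the step.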

  7. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

    We introduce and compare ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve the noise reduction based on these two algorithms. In addition, optimal dictionaries, sparse representations, and appropriate shapes of the transform's support are also considered for image denoising. The comparison among the various filters is carried out by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is finally evaluated.

  8. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is exploited for better image modeling, resulting in improved filtering quality. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. A maximum pseudolikelihood estimate of the spatial dependency parameter (β) for these models is also presented. To evaluate this proposal, the models are used as a priori models in a maximum a posteriori estimation to remove additive white Gaussian noise from images. The results show a notable improvement, in both quantitative and qualitative terms, over the local MRFs.
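
    For context, the classical *local* MRF baseline that this work improves on can be sketched as MAP estimation under a Gaussian likelihood with a pairwise quadratic smoothness prior, minimized by gradient descent. This is an illustrative local model only, not the authors' nonlocal formulation; β, the step size, and the periodic border handling are arbitrary choices for the sketch:

```python
import numpy as np

def mrf_map_denoise(y, beta=0.5, iters=100, step=0.1):
    """MAP denoising with a local pairwise Gaussian MRF prior:
    E(x) = ||x - y||^2 + beta * sum over 4-neighbor pairs (x_p - x_q)^2,
    minimized by gradient descent. Each pair is counted twice by the four
    shifts below (the factor is absorbed into beta); borders are periodic
    via np.roll, purely for brevity."""
    x = y.copy()
    for _ in range(iters):
        grad = 2.0 * (x - y)                      # data-fidelity term
        for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            grad += 2.0 * beta * (x - np.roll(x, shift, axis=axis))
        x -= step * grad
    return x

rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.5)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = mrf_map_denoise(noisy)
```

    The nonlocal variants described in the abstract replace the fixed neighbor weights in the smoothness term with weights derived from patch similarity.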

  9. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Unlike existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. This paper makes two contributions: first, we provide a full derivation of the EM adaptation algorithm and demonstrate ways to reduce its computational complexity; second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the non-adapted prior and is superior to several state-of-the-art algorithms.

  10. Iterative Regularization Denoising Method Based on OSV Model for BioMedical Image Denoising

    NASA Astrophysics Data System (ADS)

    Guan-nan, Chen; Rong, Chen; Zu-fang, Huang; Ju-qiang, Lin; Shang-yuan, Feng; Yong-zeng, Li; Zhong-jian, Teng

    2011-01-01

    Biomedical image denoising algorithms based on gradient-dependent energy functionals often compromise image features such as textures and fine details. This paper proposes an iterative regularization denoising method based on the OSV model for biomedical image denoising. By using iterative regularization, the oscillating patterns of texture and detail are added back to fit and compute the original OSV model, and the iterative behavior avoids excessive smoothing while still denoising, preserving textures and details to a certain extent. In addition, the iterative procedure is specified and the convergence of the proposed algorithm is proved. Experimental results show that the proposed method achieves better results in preserving both the textures and the details of biomedical images while denoising.

  11. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

    The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed only by their surroundings. The non-local means technique has been demonstrated to outperform state-of-the-art denoising techniques when applied to images in the visible spectrum. It is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. We also present a new technique designed to speed up the NLM filtering process. The main drawback of NLM is the large computational time required by the search for similar patches, and several techniques have been developed in recent years to reduce this burden. Here we present a new technique designed to reduce the computational cost while sustaining near-optimal filtering results. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.
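
    A straightforward, unaccelerated pixelwise NLM sketch makes the cost that MRS-NLM targets concrete: every output pixel requires a patch comparison against every pixel in its search window. The patch size, search-window size, and filtering parameter h below are illustrative:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Pixelwise non-local means: each pixel becomes a weighted average of
    search-window pixels, weighted by the similarity of their patches."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr      # center in padded image
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = den = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    num += w * pad[ni, nj]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.5)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy)
```

    The quadruple loop is O(pixels x search² x patch²), which is why multi-resolution restriction of the search, as proposed here, pays off.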

  12. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality because their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
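
    The NeighShrink idea mentioned above shrinks each wavelet detail coefficient by a factor depending on the energy of its neighboring coefficients. A self-contained sketch with a one-level 2D Haar transform (the 3x3 window and the universal threshold are common textbook choices, not necessarily the settings of the cited methods):

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar transform (x must have even dimensions)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # rows: low-pass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact, since the transform is orthonormal)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def neighshrink(band, lam, win=3):
    """Shrink each coefficient by (1 - lam^2 / S^2)+, where S^2 sums the
    squared coefficients over a win x win neighborhood (NeighShrink rule)."""
    r = win // 2
    pad = np.pad(band, r, mode="reflect")
    s2 = np.zeros_like(band)
    for di in range(win):
        for dj in range(win):
            s2 += pad[di:di + band.shape[0], dj:dj + band.shape[1]] ** 2
    return band * np.clip(1 - lam**2 / np.maximum(s2, 1e-12), 0, None)

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 16), np.ones(16))   # smooth gradient
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(clean.shape)
ll, lh, hl, hh = haar2(noisy)
lam = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
denoised = ihaar2(ll, neighshrink(lh, lam),
                  neighshrink(hl, lam), neighshrink(hh, lam))
```

    A coefficient surrounded by low-energy neighbors is shrunk toward zero, while a coefficient sitting in a high-energy (edge) neighborhood is largely preserved, which is exactly the neighborhood dependence the abstract refers to.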

  13. 4D flow imaging with MRI

    PubMed Central

    Stankovic, Zoran; Allen, Bradley D.; Garcia, Julio; Jarvis, Kelly B.

    2014-01-01

    Magnetic resonance imaging (MRI) has become an important tool for the clinical evaluation of patients with cardiovascular disease. Since its introduction in the late 1980s, 2-dimensional phase contrast MRI (2D PC-MRI) has become a routine part of standard-of-care cardiac MRI for the assessment of regional blood flow in the heart and great vessels. More recently, time-resolved PC-MRI with velocity encoding along all three flow directions and three-dimensional (3D) anatomic coverage (also termed ‘4D flow MRI’) has been developed and applied for the evaluation of cardiovascular hemodynamics in multiple regions of the human body. 4D flow MRI allows for the comprehensive evaluation of complex blood flow patterns by 3D blood flow visualization and flexible retrospective quantification of flow parameters. Recent technical developments, including the utilization of advanced parallel imaging techniques such as k-t GRAPPA, have resulted in reasonable overall scan times, e.g., 8-12 minutes for 4D flow MRI of the aorta and 10-20 minutes for whole heart coverage. As a result, the application of 4D flow MRI in a clinical setting has become more feasible, as documented by an increased number of recent reports on the utility of the technique for the assessment of cardiac and vascular hemodynamics in patient studies. A number of studies have demonstrated the potential of 4D flow MRI to provide an improved assessment of hemodynamics which might aid in the diagnosis and therapeutic management of cardiovascular diseases. The purpose of this review is to describe the methods used for 4D flow MRI acquisition, post-processing and data analysis. In addition, the article provides an overview of the clinical applications of 4D flow MRI and includes a review of applications in the heart, thoracic aorta and hepatic system. PMID:24834414

  14. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

    Image denoising is an important preprocessing method and one of the frontier topics in computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference. To reconstruct a high-quality image of the target, the high-frequency signal of the image must be restored; but noise also belongs to the high-frequency signal, so noise is amplified in the reconstruction process. To avoid this, incorporating denoising into the reconstruction process is a feasible solution. This paper mainly studies the principles of four classic denoising algorithms, TV, BLS-GSM, NLM, and BM3D, and uses simulated data to analyze their performance. Experiments demonstrate that all four algorithms can remove the noise, and that BM3D not only achieves high denoising quality but also the highest efficiency.
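
    Of the four algorithms compared, total variation (TV) is the simplest to sketch: denoising minimizes a data-fidelity term plus the (smoothed) total variation of the image. An illustrative gradient-descent version follows; λ, ε, the step size, and the iteration count are arbitrary choices for the sketch, not tuned values from the paper:

```python
import numpy as np

def tv_denoise(y, lam=0.1, eps=0.1, step=0.1, iters=200):
    """Gradient descent on a smoothed ROF objective:
    0.5*||x - y||^2 + lam * sum_ij sqrt(|grad x|_ij^2 + eps^2)."""
    x = y.copy()
    for _ in range(iters):
        # Forward differences, zero at the last row/column (no-flux border).
        gx = np.zeros_like(x); gx[:, :-1] = x[:, 1:] - x[:, :-1]
        gy = np.zeros_like(x); gy[:-1, :] = x[1:, :] - x[:-1, :]
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # Divergence of the normalized gradient field (adjoint of grad).
        div = np.zeros_like(x)
        div[:, 0] += px[:, 0]; div[:, 1:] += px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]; div[1:, :] += py[1:, :] - py[:-1, :]
        x -= step * ((x - y) - lam * div)
    return x

rng = np.random.default_rng(0)
clean = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])  # step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

    TV penalizes oscillation (noise) heavily but tolerates a few large jumps, which is why it flattens the noisy regions while keeping the step edge.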

  15. 4D Confocal Imaging of Yeast Organelles.

    PubMed

    Day, Kasey J; Papanikou, Effrosyni; Glick, Benjamin S

    2016-01-01

    Yeast cells are well suited to visualizing organelles by 4D confocal microscopy. Typically, one or more cellular compartments are labeled with a fluorescent protein or dye, and a stack of confocal sections spanning the entire cell volume is captured every few seconds. Under appropriate conditions, organelle dynamics can be observed for many minutes with only limited photobleaching. Images are captured at a relatively low signal-to-noise ratio and are subsequently processed to generate movies that can be analyzed and quantified. Here, we describe methods for acquiring and processing 4D data using conventional scanning confocal microscopy. PMID:27631997

  16. Advances in 4D radiation therapy for managing respiration: part I - 4D imaging.

    PubMed

    Hugo, Geoffrey D; Rosu, Mihaela

    2012-12-01

    Techniques for managing respiration during imaging and planning of radiation therapy are reviewed, concentrating on free-breathing (4D) approaches. First, we focus on detailing the historical development and basic operational principles of currently-available "first generation" 4D imaging modalities: 4D computed tomography, 4D cone beam computed tomography, 4D magnetic resonance imaging, and 4D positron emission tomography. Features and limitations of these first generation systems are described, including necessity of breathing surrogates for 4D image reconstruction, assumptions made in acquisition and reconstruction about the breathing pattern, and commonly-observed artifacts. Both established and developmental methods to deal with these limitations are detailed. Finally, strategies to construct 4D targets and images and, alternatively, to compress 4D information into static targets and images for radiation therapy planning are described.

  17. Advances in 4D Radiation Therapy for Managing Respiration: Part I – 4D Imaging

    PubMed Central

    Hugo, Geoffrey D.; Rosu, Mihaela

    2014-01-01

    Techniques for managing respiration during imaging and planning of radiation therapy are reviewed, concentrating on free-breathing (4D) approaches. First, we focus on detailing the historical development and basic operational principles of currently-available “first generation” 4D imaging modalities: 4D computed tomography, 4D cone beam computed tomography, 4D magnetic resonance imaging, and 4D positron emission tomography. Features and limitations of these first generation systems are described, including necessity of breathing surrogates for 4D image reconstruction, assumptions made in acquisition and reconstruction about the breathing pattern, and commonly-observed artifacts. Both established and developmental methods to deal with these limitations are detailed. Finally, strategies to construct 4D targets and images and, alternatively, to compress 4D information into static targets and images for radiation therapy planning are described. PMID:22784929

  18. Robust 4D Flow Denoising Using Divergence-Free Wavelet Transform

    PubMed Central

    Ong, Frank; Uecker, Martin; Tariq, Umar; Hsiao, Albert; Alley, Marcus T; Vasanawala, Shreyas S.; Lustig, Michael

    2014-01-01

    Purpose To investigate four-dimensional flow denoising using the divergence-free wavelet (DFW) transform and compare its performance with existing techniques. Theory and Methods DFW is a vector-wavelet that provides a sparse representation of flow in a generally divergence-free field and can be used to enforce “soft” divergence-free conditions when discretization and partial voluming result in numerical nondivergence-free components. Efficient denoising is achieved by appropriate shrinkage of divergence-free wavelet and nondivergence-free coefficients. SureShrink and cycle spinning are investigated to further improve denoising performance. Results DFW denoising was compared with existing methods on simulated and phantom data and was shown to yield better noise reduction overall while being robust to segmentation errors. The processing was applied to in vivo data and was demonstrated to improve visualization while preserving quantifications of flow data. Conclusion DFW denoising of four-dimensional flow data was shown to reduce noise levels in flow data both quantitatively and visually. PMID:24549830

  19. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges, since a remote sensing image usually covers a wide area and thus contains complex contents. Using patch-based statistical characteristics is a flexible way to improve denoising performance, and two kinds of statistical characteristics are usually available: interior and exterior. Different statistical characteristics have their own strengths in restoring specific image contents, so combining them may have the potential to improve denoising results. This work proposes a method that combines interior and exterior characteristics by adaptively selecting between them for different image contents. The proposed approach is implemented through a new characteristics-selection criterion learned over training data. Building on this combination method, we develop a denoising algorithm for remote sensing images. Experimental results show that our method makes full use of the advantages of interior and exterior characteristics for different image contents and thus improves denoising performance.

  20. 4D Clinical Imaging for Dynamic CAD

    PubMed Central

    McIntyre, Frederick

    2013-01-01

    A basic 4D imaging system to capture jaw motion has been developed that produces high-resolution 3D surface data. Fluorescent microspheres are brushed onto the areas of the upper and lower arches to be imaged, producing a high-contrast random optical pattern. A hand-held imaging device operated at about 10 cm from the mouth captures time-based perspective images of the fluorescent areas. Each set of images, containing both upper and lower arch data, is converted to a 3D point mesh using photogrammetry, thereby providing an instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, incremental transforms are derived to express the free body motion of the mandible. Conventional 3D models of the dentition are directly registered to the reference frame, allowing them to be animated using the derived transforms. PMID:24082882

  1. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold reduces speckle effectively but also blurs objects of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded objects in the block-thresholded US image are restored through wavelet-coefficient fusion of the objects in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF), to introduce the highest contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are therefore named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
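
    The hard (BHT) and soft (BST) thresholding rules at the heart of the first fold differ only in whether surviving coefficients are kept intact or shrunk by the threshold. A minimal sketch (the threshold 0.5 and the sample coefficients are illustrative):

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients with |c| > t unchanged; zero the rest (BHT-style)."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Zero coefficients below t and shrink the rest toward zero by t (BST-style)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
hard = hard_threshold(c, 0.5)   # -> [-2.0, 0.0, 0.0, 0.8, 3.0]
soft = soft_threshold(c, 0.5)   # -> [-1.5, 0.0, 0.0, 0.3, 2.5]
```

    Hard thresholding preserves the magnitude of strong coefficients (sharper but can leave artifacts); soft thresholding biases them toward zero (smoother), which is why the second fold is needed to restore object detail.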

  2. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise of different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method for denoising such mobile phone images. The proposed scheme converts the noisy RGB image into luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that deviate significantly from the (vector) median and then using the remaining neighborhood pixels to restore the current pixel. Within the framework, the strength of chrominance denoising is controlled by image brightness. Experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
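
    The median-exclusion step of the common filtering framework can be sketched for a single channel as follows. The window radius and deviation tolerance are assumptions for illustration; the paper works on luminance/chrominance channels with a vector median and brightness-controlled strength:

```python
import numpy as np

def median_exclusion_filter(img, radius=1, tol=0.2):
    """For each pixel, drop neighborhood pixels whose value deviates from
    the local median by more than `tol`, then average the rest."""
    pad = np.pad(img, radius, mode="reflect")
    H, W = img.shape
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            med = np.median(win)
            kept = win[np.abs(win - med) <= tol]  # never empty: med is in win
            out[i, j] = kept.mean()
    return out

img = np.full((5, 5), 0.5)
img[2, 2] = 5.0                  # a single impulse-like outlier
out = median_exclusion_filter(img)
```

    The outlier at (2, 2) is excluded from every window it appears in, so the whole output returns to the background value; this is the "exclude, then average" behavior described in the abstract.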

  3. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, the paper describes a novel image denoising method based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between the amplitudes of nearby coefficients. Experimental results show that the proposed algorithm outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior.

  4. 4-D ultrafast shear-wave imaging.

    PubMed

    Gennisson, Jean-Luc; Provost, Jean; Deffieux, Thomas; Papadacci, Clément; Imbault, Marion; Pernot, Mathieu; Tanter, Mickael

    2015-06-01

    Over the last ten years, shear wave elastography (SWE) has seen considerable development and is now routinely used in clinics to provide mechanical characterization of tissues to improve diagnosis. The most advanced technique relies on the use of an ultrafast scanner to generate and image shear waves in real time in a 2-D plane at several thousands of frames per second. We have recently introduced 3-D ultrafast ultrasound imaging to acquire with matrix probes the 3-D propagation of shear waves generated by a dedicated radiation pressure transducer in a single acquisition. In this study, we demonstrate 3-D SWE based on ultrafast volumetric imaging in a clinically applicable configuration. A 32 × 32 matrix phased array driven by a customized, programmable, 1024-channel ultrasound system was designed to perform 4-D shear-wave imaging. A matrix phased array was used to generate and control in 3-D the shear waves inside the medium using the acoustic radiation force. The same matrix array was used with 3-D coherent plane wave compounding to perform high-quality ultrafast imaging of the shear wave propagation. Volumetric ultrafast acquisitions were then beamformed in 3-D using a delay-and-sum algorithm. 3-D volumetric maps of the shear modulus were reconstructed using a time-of-flight algorithm based on local multiscale cross-correlation of shear wave profiles in the three main directions using directional filters. Results are first presented in an isotropic homogeneous and elastic breast phantom. Then, a full 3-D stiffness reconstruction of the breast was performed in vivo on healthy volunteers. This new full 3-D ultrafast ultrasound system paves the way toward real-time 3-D SWE. PMID:26067040

  5. Edge-preserving image denoising via optimal color space projection.

    PubMed

    Lian, Nai-Xiang; Zagorodnov, Vitali; Tan, Yap-Peng

    2006-09-01

    Denoising of color images can be done on each color component independently. Recent work has shown that exploiting strong correlation between high-frequency content of different color components can improve the denoising performance. We show that for typical color images high correlation also means similarity, and propose to exploit this strong intercolor dependency using an optimal luminance/color-difference space projection. Experimental results confirm that performing denoising on the projected color components yields superior denoising performance, both in peak signal-to-noise ratio and visual quality sense, compared to that of existing solutions. We also develop a novel approach to estimate directly from the noisy image data the image and noise statistics, which are required to determine the optimal projection.
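
    The general idea of projecting color components into a decorrelated space before channelwise denoising can be sketched with a data-driven (PCA) projection. Note this is an illustrative stand-in, not the paper's optimal luminance/color-difference projection, and the box filter is a placeholder for any single-channel denoiser:

```python
import numpy as np

def box3(x):
    """3x3 box filter with reflected borders (placeholder denoiser)."""
    pad = np.pad(x, 1, mode="reflect")
    return sum(pad[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def pca_color_denoise(rgb, smooth):
    """Project RGB pixels onto the image's own principal color axes
    (a data-driven stand-in for an optimal color-space projection),
    denoise each decorrelated channel, then project back."""
    H, W, _ = rgb.shape
    flat = rgb.reshape(-1, 3)
    mean = flat.mean(axis=0)
    cov = np.cov((flat - mean).T)
    _, vecs = np.linalg.eigh(cov)                 # orthonormal color basis
    chans = ((flat - mean) @ vecs).reshape(H, W, 3)
    den = np.stack([smooth(chans[..., k]) for k in range(3)], axis=-1)
    return (den.reshape(-1, 3) @ vecs.T + mean).reshape(H, W, 3)

rng = np.random.default_rng(0)
clean = np.empty((16, 16, 3))
clean[..., :] = [0.2, 0.5, 0.7]
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = pca_color_denoise(noisy, box3)
```

    Because the projection is orthonormal, noise energy is preserved but redistributed into channels where shared structure concentrates, which is what makes channelwise denoising in the projected space more effective than in raw RGB.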

  6. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR yields a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us identify key parameters from tissue time-concentration curves, and reduces oscillations in the curve. GPR is superior to the comparable techniques used in this study.
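
    Per-voxel GPR smoothing of a time-concentration curve has a closed form: the posterior mean at the sample times is K(K + σn²I)⁻¹y for a chosen covariance kernel. A sketch with a squared-exponential kernel on a hypothetical bolus-like curve (the kernel choice and hyperparameters are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gpr_smooth(t, y, length=3.0, sigma_f=1.0, sigma_n=0.05):
    """Closed-form GP regression posterior mean at the sample times:
    f = K (K + sigma_n^2 I)^(-1) y, with a squared-exponential kernel."""
    d2 = (t[:, None] - t[None, :]) ** 2
    K = sigma_f**2 * np.exp(-d2 / (2.0 * length**2))
    alpha = np.linalg.solve(K + sigma_n**2 * np.eye(len(t)), y)
    return K @ alpha

# Hypothetical voxel time-concentration curve: smooth bolus peak + noise.
t = np.arange(40, dtype=float)
clean = np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)
rng = np.random.default_rng(1)
noisy = clean + 0.05 * rng.standard_normal(t.shape)
fit = gpr_smooth(t, noisy)
```

    The kernel length scale encodes the assumption that true tissue concentration varies smoothly over a few time points, so high-frequency fluctuations are attributed to noise and suppressed, which is how the temporal (4th) dimension is exploited.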

  7. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, research based on the nonlocal means (NLM) filter has produced many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Two different smoothing thresholds are then utilized for denoising each region, and the NLM filter is applied. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
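For context, a minimal plain-NLM estimate for a single pixel, with the patch-based differences that drive the weights (this is the baseline filter, without the paper's PBD refinement; parameters are illustrative):

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=0.1):
    """Plain nonlocal-means estimate for pixel (i, j): average the pixels in
    a search window, weighted by the similarity of the patches around them
    (the patch-based difference that the paper's PBD refinement denoises)."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode='reflect')
    ci, cj = i + r + s, j + r + s
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            cand = pad[ci + di - r:ci + di + r + 1, cj + dj - r:cj + dj + r + 1]
            d2 = np.mean((ref - cand) ** 2)      # patch-based difference (PBD)
            w = np.exp(-d2 / h ** 2)
            num += w * pad[ci + di, cj + dj]
            den += w
    return num / den

rng = np.random.default_rng(1)
flat = 0.5 + 0.05 * rng.standard_normal((16, 16))   # constant region + noise
est = nlm_pixel(flat, 8, 8)
```

The PBD refinement in the paper smooths the `d2` values themselves before they enter the weights.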

  8. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data set demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values, and hence the perceptual visual quality, of denoised images. Moreover, the DPLG produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  9. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first operates on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To make a quantitative assessment of the algorithms, the Peak Signal-to-Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising; the recovered image is visually hard to distinguish from the original noiseless image.

  10. Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations

    NASA Astrophysics Data System (ADS)

    Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani

    2016-09-01

    Denoising an image is an essential step in many image processing applications. In any image de-noising algorithm, a major concern is preserving interesting structures of the image, such as abrupt changes in image intensity values (edges). In this paper, an efficient algorithm for image de-noising is proposed that recovers a coherent estimate of the original image from the noisy image using diffusion equations in the pixon domain. The process consists of two main steps. In the first step, the pixons for the noisy image are obtained using a K-means clustering process; in the second, diffusion equations are applied to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images, and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm better preserves edge details, as measured by the Figure of Merit, and achieves an improved Peak Signal-to-Noise Ratio.
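The diffusion step can be illustrated with classical Perona-Malik diffusion (the paper applies diffusion to a pixonal model; this sketch works directly on pixels, with illustrative parameters and a periodic border for brevity):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik diffusion: intensities diffuse within smooth regions while
    the edge-stopping function g = exp(-(d/kappa)^2) shuts diffusion down
    across large jumps (edges).  Periodic borders via np.roll, for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u       # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0      # vertical step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
out = anisotropic_diffusion(noisy)
```

Small (noise-scale) differences are smoothed while the unit-height edge, far larger than `kappa`, is left essentially untouched.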

  11. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
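The parameter-update idea resembles the classical discrepancy principle: adjust the regularization strength until the denoising residual is statistically consistent with the assumed noise. A toy sketch with a simple blended smoother (this stands in for, and is much simpler than, the paper's NCRE/BM3D machinery):

```python
import numpy as np

def box_smooth(x):
    """3x3 box filter (periodic border) standing in for a tunable denoiser."""
    out = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(x, di, 0), dj, 1)
    return out / 9.0

def tune_to_noise_level(noisy, sigma, iters=30):
    """Discrepancy-principle sketch of the parameter-update idea: bisect the
    blend weight lam so the residual (noisy - denoised) has a standard
    deviation as close as possible to (without exceeding) the noise level."""
    smooth = box_smooth(noisy)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        residual_std = np.std(noisy - (lam * smooth + (1 - lam) * noisy))
        if residual_std < sigma:
            lo = lam        # residual smaller than the noise: denoise harder
        else:
            hi = lam
    return lam * smooth + (1 - lam) * noisy

rng = np.random.default_rng(3)
sigma = 0.05
clean = np.full((32, 32), 0.5)
noisy = clean + sigma * rng.standard_normal(clean.shape)
out = tune_to_noise_level(noisy, sigma)
```

NCRE replaces this scalar bisection with a statistical confidence-region test on the residuals at each iteration.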

  12. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  13. Blind source separation based x-ray image denoising from an image sequence.

    PubMed

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without a priori knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image quality improves as more frames are included in the x-ray image sequence, but at greater computational cost, so denoising performance must be traded off against runtime when choosing the number of frames. PMID:26429442
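A minimal second-order sketch of the idea: stack the frames, take the best rank-1 approximation via SVD, and compare against multi-frame averaging (illustrative only; the paper's FastICA and SVD pipelines are more elaborate):

```python
import numpy as np

def svd_signal_estimate(frames):
    """Second-order sketch of BSS: flatten each frame into a row of X and take
    the best rank-1 approximation.  The leading singular component captures
    the stable signal shared by all frames; noise spreads across the rest."""
    k, h, w = frames.shape
    X = frames.reshape(k, h * w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])
    return rank1.mean(axis=0).reshape(h, w)

rng = np.random.default_rng(4)
clean = rng.random((16, 16))
frames = clean[None, :, :] + 0.1 * rng.standard_normal((8, 16, 16))
est = svd_signal_estimate(frames)
baseline = frames.mean(axis=0)        # multi-frame averaging baseline
```

Both estimates improve on a single frame; the trade-off noted in the abstract is that more frames improve either estimate but lengthen acquisition and computation.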

  14. Blind source separation based x-ray image denoising from an image sequence

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without a priori knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image quality improves as more frames are included in the x-ray image sequence, but at greater computational cost, so denoising performance must be traded off against runtime when choosing the number of frames.

  15. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
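For reference, a brute-force CPU version of the bilateral filter studied above (the GPU implementations are tiled and heavily optimized; the parameters here are illustrative, not the study's tuned values):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Brute-force bilateral filter: neighbours are weighted by both spatial
    distance (sigma_s) and intensity difference (sigma_r), so jumps much
    larger than sigma_r survive while small fluctuations are averaged out."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wgt = spatial * np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

rng = np.random.default_rng(5)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
out = bilateral(noisy)
```

The `radius` parameter corresponds to the stencil size discussed in the study; the scaling parameters `sigma_s` and `sigma_r` are the ones whose poor choice can badly degrade results.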

  16. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, while the weight is decided by the similarity of these pixels. The key issue of the nonlocal means method is how to select similar patches and design the weight of them. There are two main contributions of this paper: The first contribution is that we use two images to denoise the pixel. These two noised images are with the same noise deviation. Instead of using only one image, we calculate the weight from two noised images. After the first denoising process, we get a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between residual image and pre-denoised image. The improved nonlocal means method pays more attention on the similarity than the original one, which turns out to be very effective in eliminating gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  17. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while the noise is suppressed. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.
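The patch-combination step can be sketched as follows (the normalization scheme and the equal weights are simplifying assumptions; the paper uses a weighted addition of normalized patches):

```python
import numpy as np

def combine_patches(p_low, p_high):
    """Combine corresponding low- and high-energy patches.  Each patch is
    normalized (zero mean, unit norm); if the underlying signals are related
    by a linear transformation, the normalized signals align and add
    coherently, while independent noise adds incoherently.  Equal weights
    are an illustrative simplification of the paper's weighted addition."""
    def normalize(p):
        q = p - p.mean()
        return q / (np.linalg.norm(q) + 1e-12)
    return 0.5 * (normalize(p_low) + normalize(p_high))

rng = np.random.default_rng(6)
s = rng.standard_normal((8, 8))                        # shared structure
p_low = 3.0 * s + 0.5 * rng.standard_normal((8, 8))    # low-energy patch
p_high = 1.5 * s + 2.0 + 0.5 * rng.standard_normal((8, 8))
combined = combine_patches(p_low, p_high)
s_unit = (s - s.mean()) / np.linalg.norm(s - s.mean())
cos_sim = float(np.sum(combined * s_unit) / np.linalg.norm(combined))
```

The combined patch tracks the shared structure more closely than either noisy input, which is what makes the subsequent dictionary denoising more effective.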

  18. 4-D display of satellite cloud images

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.

    1987-01-01

    A technique has been developed to display GOES satellite cloud images in perspective over a topographical map. Cloud heights are estimated using temperatures from an infrared (IR) satellite image, surface temperature observations, and a climatological model of vertical temperature profiles. Cloud levels are discriminated from each other and from the ground using a pattern recognition algorithm based on the brightness variance technique of Coakley and Bretherton. The cloud regions found by the pattern recognizer are rendered in three-dimensional perspective over a topographical map by an efficient remap of the visible image. The visible shades are mixed with an artificial shade based on the geometry of the cloud-top surface, in order to enhance the texture of the cloud top.

  19. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.
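For reference, the plain guided filter (GF) that the AGGF builds on can be sketched as below (box-filter implementation with a periodic border; the AGGF replaces the box kernel with a Gaussian and adds the adaptive offset, neither of which is reproduced here):

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)^2 window, periodic border for brevity."""
    out = np.zeros_like(x, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out += np.roll(np.roll(x, di, 0), dj, 1)
    return out / (2 * r + 1) ** 2

def guided_filter(guide, src, r=2, eps=1e-2):
    """Plain guided filter: locally fit src ~ a*guide + b, so the output
    follows the guide's edges; eps sets the smoothing strength."""
    mean_g, mean_s = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mean_g * mean_s
    var = box_mean(guide * guide, r) - mean_g ** 2
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return box_mean(a, r) * guide + box_mean(b, r)

rng = np.random.default_rng(7)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
out = guided_filter(noisy, noisy)     # self-guided denoising
```

In flat regions the local variance is noise-dominated, so `a` is small and the output approaches the local mean; across the edge the variance is large, `a` approaches 1, and the edge is preserved.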

  20. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance. PMID:25321679

  1. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K.

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  2. Diffusion weighted image denoising using overcomplete local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
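The core shrinkage idea can be sketched with a global PCA over non-overlapping patches (a deliberate simplification: the paper uses local, overlapping patches combined in an overcomplete fashion):

```python
import numpy as np

def pca_patch_denoise(img, patch=4, keep_ratio=0.1):
    """Global-PCA sketch of the shrinkage idea: gather non-overlapping
    patches as rows, zero the principal components whose singular values
    fall below keep_ratio * s_max, and rebuild.  Assumes the image sides
    are divisible by the patch size."""
    h, w = img.shape
    rows = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    X = np.array(rows)
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    s[s < keep_ratio * s[0]] = 0.0      # shrink less significant components
    Xd = (U * s) @ Vt + mu
    out = np.empty_like(img, dtype=float)
    k = 0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            out[i:i + patch, j:j + patch] = Xd[k].reshape(patch, patch)
            k += 1
    return out

rng = np.random.default_rng(8)
x = np.linspace(0, 2 * np.pi, 32)
clean = np.outer(np.sin(x), np.cos(x))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = pca_patch_denoise(noisy)
```

Because the clean patch matrix is low-rank while noise spreads evenly over all components, discarding the weak components removes mostly noise.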

  3. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889

  4. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data has been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.
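Wavelet-coefficient thresholding, the first of the three compared approaches, can be sketched with a one-level Haar transform and the universal soft threshold (illustrative only; the paper's exact wavelet and threshold choices are not reproduced here):

```python
import numpy as np

def haar_2d(x):
    """One level of the 2D Haar transform (image sides must be even)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2); lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2); hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar_2d(ll, lh, hl, hh):
    """Exact inverse of haar_2d (the transform is orthonormal)."""
    lo = np.empty((2 * ll.shape[0], ll.shape[1])); hi = np.empty_like(lo)
    lo[0::2] = (ll + lh) / np.sqrt(2); lo[1::2] = (ll - lh) / np.sqrt(2)
    hi[0::2] = (hl + hh) / np.sqrt(2); hi[1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2] = (lo + hi) / np.sqrt(2); x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def wavelet_denoise(img, sigma):
    """Soft-threshold the detail bands with the universal threshold
    sigma * sqrt(2 * log(N))."""
    soft = lambda c, t: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    ll, lh, hl, hh = haar_2d(img)
    t = sigma * np.sqrt(2.0 * np.log(img.size))
    return ihaar_2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(9)
x = np.linspace(0, 2 * np.pi, 32)
clean = np.outer(np.sin(x), np.sin(x))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = wavelet_denoise(noisy, 0.1)
```

Most detail-band coefficients of a smooth image fall below the threshold and carry mainly noise, so shrinking them removes noise with little loss of structure.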

  5. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms, must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, place a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve modeling fidelity of a previously-developed, computationally-efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM) which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for parameters controlling deadzone size which leads to improved denoising performance. Two formulations are provided; one with a simple, closed form solution which we use for numerical result generation, and the second as an integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over performance obtained with the baseline SSM model.
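The baseline SSM shrinkage rule referenced above is, to the best of our understanding, the bivariate shrinkage function of Sendur and Selesnick, which shrinks a wavelet coefficient jointly with its parent at the next coarser scale:

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Bivariate shrinkage (baseline SSM rule, as we understand it): a wavelet
    coefficient w1 is shrunk using the joint magnitude with its parent w2
    from the next coarser scale.  The rule creates a circular deadzone of
    radius sqrt(3) * sigma_n**2 / sigma in the (w1, w2) plane; the paper's
    contribution is optimizing the parameters that control this deadzone."""
    mag = np.sqrt(np.asarray(w1, float) ** 2 + np.asarray(w2, float) ** 2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return gain / np.maximum(mag, 1e-12) * w1

inside = bivariate_shrink(0.5, 0.5, 1.0, 2.0)    # joint magnitude inside the deadzone
outside = bivariate_shrink(10.0, 0.0, 1.0, 2.0)  # well outside: nearly unchanged
```

Coefficients whose joint magnitude with the parent falls inside the deadzone are zeroed; large coefficients are shrunk only slightly, preserving strong image features.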

  6. Microarray image enhancement by denoising using stationary wavelet transform.

    PubMed

    Wang, X H; Istepanian, Robert S H; Song, Yong Hua

    2003-12-01

    Microarray imaging is considered an important tool for large scale analysis of gene expression. The accuracy of the gene expression depends on the experiment itself and further image processing. It is well known that noise introduced during the experiment greatly affects the accuracy of the gene expression, and eliminating the effect of this noise constitutes a challenging problem in microarray analysis. Traditionally, statistical methods are used to estimate the noise while the microarray images are being processed. In this paper, we present a new approach to deal with the noise inherent in the microarray image processing procedure: denoising the images before further image processing using the stationary wavelet transform (SWT). The time-invariant characteristic of SWT is particularly useful in image denoising. Testing on sample microarray images has shown enhanced image quality. The results also show superior performance compared with the conventional discrete wavelet transform and the widely used adaptive Wiener filter in this procedure.

  7. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
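The classical baseline that such bias-correction procedures improve upon uses the second moment of the Rician magnitude, E[M^2] = A^2 + 2*sigma^2 (this is not the paper's proposed regression-based formula, only the conventional correction it compares against):

```python
import numpy as np

def rician_correct(m, sigma):
    """Classical second-moment bias correction for Rician magnitude data:
    E[M^2] = A^2 + 2*sigma^2, so the true intensity A is estimated as
    sqrt(max(M^2 - 2*sigma^2, 0))."""
    return np.sqrt(np.maximum(np.asarray(m, float) ** 2 - 2.0 * sigma ** 2, 0.0))

rng = np.random.default_rng(10)
A, sigma = 1.0, 0.5
n1 = sigma * rng.standard_normal(10000)
n2 = sigma * rng.standard_normal(10000)
m = np.sqrt((A + n1) ** 2 + n2 ** 2)    # Rician-distributed magnitudes
raw_mean = m.mean()                     # biased upward relative to A
corrected = float(rician_correct(np.sqrt(np.mean(m ** 2)), sigma))
```

Applied to the root-mean-square of repeated measurements, the correction recovers the true intensity that the raw mean overestimates at low SNR.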

  8. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B-mode ultrasound is a form of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with the accuracy of diagnosis. Constructing a method that effectively eliminates speckle noise while preserving image details is therefore the central goal of current ultrasonic image de-noising research. This paper aims to remove the inherent speckle noise of B ultrasound images. The proposed algorithm is based on both wavelet transformation and data fusion of B ultrasound images, and achieves a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than competing algorithms. It effectively removes speckle noise from B ultrasound images while preserving detail and edge information, producing better visual results.

  9. 4D MR imaging using robust internal respiratory signal

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Wen, Zhifei; Stemkens, Bjorn; Tijssen, R. H. N.; van den Berg, C. A. T.; Hwang, Ken-Pin; Beddar, Sam

    2016-05-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space as well as the anterior body boundary as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal. We compared this proposed IR signal with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end inspiration times in the IR signal and bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not the bellows signal. This suggested that the IR signal determined by the proposed method could potentially correct the faulty bellows signal. The sorted 4D images showed minimal mismatched artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative to effectively sort MR images in 4D.

  10. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy

    PubMed Central

    Chang, Ching-Wei; Mycek, Mary-Ann

    2014-01-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891
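
    The total variation approach evaluated above can be sketched as a Rudin-Osher-Fatemi-style gradient descent: a data-fidelity term pulls the estimate toward the observed image while a TV term penalizes oscillations. This is a generic, minimal TV denoiser with illustrative parameters and periodic boundary handling, not the solver used in the study.

```python
import numpy as np

def tv_denoise(img, weight=0.3, n_iter=200, step=0.2):
    """Gradient descent on 0.5 * ||u - img||^2 + weight * TV(u)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences of u (last row/column difference is zero)
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2) + 1e-8       # avoid division by zero
        px, py = gx / mag, gy / mag
        # backward-difference divergence (periodic wrap, for brevity only)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * ((img - u) + weight * div)    # data term + TV smoothing term
    return u
```

    The `weight` parameter governs smoothing strength; large values flatten noise at the cost of fine structure, mirroring the precision/accuracy trade-off reported above.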

  11. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and a low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular the denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means filter (NLM), operating in the image domain, are used as references. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  12. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Abstract. Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that could be used to study current pathways inside the tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown the residual noise distribution of the phase to be Gaussian-like and the noise in CDI images approximated as a Gaussian. This finding matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied on current density (J). The minimum gain in noise power by BM3D applied to J compared with the next best technique in the analysis was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  13. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400

  14. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and the framework of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are identified by block matching and grouped together. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
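
    The NLM baseline that NLPS modifies can be sketched in a few lines: each pixel is replaced by a weighted mean of pixels in a search window, weighted by patch similarity (NLPS instead takes the most probable gray value from the grouped pixels). Patch size, search size and the smoothing parameter `h` below are illustrative choices, not values from the paper.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.6):
    """Classic nonlocal means: weight = exp(-patch_distance^2 / h^2)."""
    pad, s = patch // 2, search // 2
    padded = np.pad(img.astype(float), pad + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + s, j + pad + s           # center in padded image
            ref = padded[ci-pad:ci+pad+1, cj-pad:cj+pad+1]
            wsum = vsum = 0.0
            for di in range(-s, s + 1):                 # scan the search window
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni-pad:ni+pad+1, nj-pad:nj+pad+1]
                    w = np.exp(-np.mean((ref - cand)**2) / h**2)
                    wsum += w
                    vsum += w * padded[ni, nj]
            out[i, j] = vsum / wsum                     # weighted mean of centers
    return out
```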

  15. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    NASA Astrophysics Data System (ADS)

    Peng, Honghong

    This thesis addresses two important aspects in hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most heavily researched steps in this automated chain are hyperspectral image denoising, which is an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis also introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein
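
    A scalar bilateral filter, the single-band analogue of the vector bilateral filter discussed in the thesis, can be sketched as below. The spatial and range sigmas here are placeholder values; the thesis's contribution is precisely to choose such parameters by optimization rather than ad hoc settings like these.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.5, radius=3):
    """Each output pixel is a spatially and radiometrically weighted mean."""
    padded = np.pad(img.astype(float), radius, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    # the spatial Gaussian kernel is fixed, so precompute it once
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    g_spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range weights: penalize intensity difference from the center pixel
            g_range = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            w = g_spatial * g_range
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out
```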

  16. Phase and amplitude binning for 4D-CT imaging

    NASA Astrophysics Data System (ADS)

    Abdelnour, A. F.; Nehmeh, S. A.; Pan, T.; Humm, J. L.; Vernon, P.; Schöder, H.; Rosenzweig, K. E.; Mageras, G. S.; Yorke, E.; Larson, S. M.; Erdi, Y. E.

    2007-07-01

    We compare the consistency and accuracy of two image binning approaches used in 4D-CT imaging. One approach, phase binning (PB), assigns each breathing cycle 2π rad, within which the images are grouped. In amplitude binning (AB), the images are assigned bins according to the breathing signal's full amplitude. To quantitate both approaches we used a NEMA NU2-2001 IEC phantom oscillating in the axial direction and at random frequencies and amplitudes, approximately simulating a patient's breathing. 4D-CT images were obtained using a four-slice GE Lightspeed CT scanner operating in cine mode. We define consistency error as a measure of the ability to correctly bin over repeated cycles in the same field of view. Average consistency error μe ± σe in PB ranged from 18% ± 20% to 30% ± 35%, while in AB the error ranged from 11% ± 14% to 20% ± 24%. In PB nearly all bins contained sphere slices. AB was more accurate, revealing empty bins where no sphere slices existed. As a proof of principle, we present examples of two non-small cell lung carcinoma patients' 4D-CT lung images binned by both approaches. While AB can lead to gaps in the coronal images, depending on the patient's breathing pattern, PB exhibits no gaps but suffers visible artifacts due to misbinning, yielding images that cover a relatively large amplitude range. AB was more consistent, though it often resulted in gaps when no data existed due to patients' breathing patterns. We conclude that AB is more accurate than PB. This has important consequences for treatment planning and diagnosis.
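
    The two sorting schemes compared above can be sketched directly. Phase binning divides each peak-to-peak cycle evenly into bins; amplitude binning divides the signal's full amplitude range, which is what allows it to leave bins empty when the breathing trace never reaches them. Bin counts and the toy signal are illustrative only.

```python
import numpy as np

def phase_bins(signal, peaks, n_bins):
    """Assign each sample between consecutive peaks a phase bin 0..n_bins-1."""
    bins = np.full(len(signal), -1)            # -1 marks samples outside cycles
    for c0, c1 in zip(peaks[:-1], peaks[1:]):
        for t in range(c0, c1):
            phase = (t - c0) / (c1 - c0)       # fraction of the 2*pi cycle
            bins[t] = int(phase * n_bins)
    return bins

def amplitude_bins(signal, n_bins):
    """Assign each sample a bin drawn from the signal's full amplitude range."""
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    return np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)
```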

  17. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.

    PubMed

    Pnevmatikakis, Eftychios A; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M; Peterka, Darcy S; Yuste, Rafael; Paninski, Liam

    2016-01-20

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
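
    At its core the approach factorizes the fluorescence movie Y (pixels × time) into nonnegative spatial footprints A and temporal traces C. A bare Lee-Seung multiplicative-update NMF, shown below, illustrates only this basic factorization; the paper's method adds constraints (roughly, tying each temporal trace to calcium-indicator dynamics) and a deconvolution step on top of it. Dimensions and iteration counts are illustrative.

```python
import numpy as np

def nmf(Y, k, n_iter=300, seed=0):
    """Factor Y ~ A @ C with A, C >= 0 via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    A = rng.random((m, k)) + 1e-3              # spatial footprints (pixels x k)
    C = rng.random((k, n)) + 1e-3              # temporal traces (k x time)
    for _ in range(n_iter):
        C *= (A.T @ Y) / (A.T @ A @ C + 1e-9)  # update traces, keep nonnegative
        A *= (Y @ C.T) / (A @ C @ C.T + 1e-9)  # update footprints
    return A, C
```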

  18. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.

  19. Myocardial motion and function assessment using 4D images

    NASA Astrophysics Data System (ADS)

    Shi, Peng-Cheng; Robinson, Glynn P.; Duncan, James S.

    1994-09-01

    This paper describes efforts aimed at more objectively and accurately quantifying the local, regional and global function of the left ventricle (LV) of the heart from 4D image data. Using our shape-based image analysis methods, point-wise myocardial motion vector fields between successive image frames through the entire cardiac cycle will be computed. Quantitative LV motion, thickening, and strain measurements will then be established from the point correspondence maps. In the paper, we will also briefly describe an in vivo experimental model which uses implanted imaging-opaque markers to validate the results of our image analysis methods. Finally, initial experimental results using image sequences from two different modalities will be presented.

  20. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanated from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for 4D light field imaging, enabling an imager to record near-complete stereo information. The approach to building a proof-of-concept is using existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from the LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can.
Robotics need

  1. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation.

    PubMed

    Bao, L J; Zhu, Y M; Liu, W Y; Croisille, P; Pu, Z B; Robini, M; Magnin, I E

    2009-03-21

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced to make the selection of atoms in the dictionary adapt to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures. PMID:19218737

  2. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    NASA Astrophysics Data System (ADS)

    Skiadopoulos, S.; Karatrantou, A.; Korfiatis, P.; Costaridou, L.; Vassilakos, P.; Apostolopoulos, D.; Panayiotakis, G.

    2009-10-01

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in the degradation of image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied on projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from myocardial surrounding tissue and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Without photon attenuation conditions, denoising by platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large size defects, while for small size defects, as well as with photon attenuation conditions, all methods demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, although the improvement was not statistically significant (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the
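
    The Butterworth filter used as the single-scale reference above is a radially symmetric frequency-domain low-pass. A minimal sketch follows, using the form common in nuclear medicine; the cutoff and order shown are illustrative, not clinically tuned values.

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.2, order=5):
    """Apply H(f) = 1 / (1 + (f/cutoff)^(2*order)) in 2-D frequency space."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)                    # radial frequency, cycles/pixel
    H = 1.0 / (1.0 + (r / cutoff)**(2 * order))   # H(0) = 1, so the mean survives
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```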

  3. 4D XCAT phantom for multimodality imaging research

    SciTech Connect

    Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.

    2010-09-15

    Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ''Basic anatomical and physiological data for use in radiological protection: reference values,'' ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce

  4. 4D XCAT phantom for multimodality imaging research

    PubMed Central

    Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.

    2010-01-01

    Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ‘‘Basic anatomical and physiological data for use in radiological protection: reference values,” ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce

  5. Parameter optimization for image denoising based on block matching and 3D collaborative filtering

    NASA Astrophysics Data System (ADS)

    Pedada, Ramu; Kugu, Emin; Li, Jiang; Yue, Zhanfeng; Shen, Yuzhong

    2009-02-01

    Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structural features. Many denoising methods have been proposed to remove noise from corrupted images at the expense of distorted structure features. Therefore, there is always a compromise between removing noise and preserving structure information for denoising methods. For a specific denoising method, it is crucial to tune it so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and that of structure information preserved in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize the cost functions by modifying parameters associated with the denoising methods. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to enhance the image denoising results using block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of noise removal and structure information preservation.

  6. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform (DCT) for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising-efficiency results are fitted against these statistics, and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
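
    The hard-thresholding mechanism whose efficiency the paper predicts can be sketched for a single square block: transform to the DCT domain, zero coefficients below a multiple of the noise standard deviation, and transform back. The threshold factor 2.7 is a conventional choice in DCT-filter literature, used here for illustration.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (avoids a scipy dependency)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    D[0, :] = np.sqrt(1.0 / N)
    return D

def dct_hard_threshold(block, sigma, beta=2.7):
    """Zero DCT coefficients with magnitude below beta * sigma (DC is kept)."""
    D = dct_matrix(block.shape[0])
    coef = D @ block @ D.T                 # 2-D DCT of the block
    keep = np.abs(coef) >= beta * sigma
    keep[0, 0] = True                      # never discard the mean (DC) term
    return D.T @ (coef * keep) @ D         # inverse 2-D DCT
```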

  7. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    NASA Astrophysics Data System (ADS)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and the principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as small distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results show also that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.
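
    The IHS component of the hybrid method can be sketched in its fast additive form: compute an intensity from the (already upsampled) MS bands, and inject the Pan-minus-intensity detail into every band. This is the generic substitution step only; the paper's contribution combines it with the DWFT and a denoising stage.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS pansharpening sketch.

    ms:  (H, W, bands) multispectral image, resampled to the Pan grid.
    pan: (H, W) panchromatic band.
    """
    intensity = ms.mean(axis=2)          # simple intensity component
    detail = pan - intensity             # high-resolution detail missing from MS
    return ms + detail[:, :, None]       # add the same detail to every band
```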

  8. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    SciTech Connect

    Cai, J; Mageras, G; Pan, T

    2014-06-15

    Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiratory breathing. A variety of 4D imaging techniques have been developed, and more are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of several, enables comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session focuses on the current status and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique.

  9. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy-to-apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences.

  10. Denoising of Ultrasound Cervix Image Using Improved Anisotropic Diffusion Filter

    PubMed Central

    Rose, R Jemila; Allwin, S

    2015-01-01

    ABSTRACT Objective: The purpose of this study was to evaluate an improved oriented speckle reducing anisotropic diffusion (IADF) filter that suppresses speckle noise in ultrasound B-mode images and yields better results than previous filters such as anisotropic diffusion, wavelet denoising, and local statistics. Methods: Clinical ultrasound images of the cervix were obtained with an ATL HDI 5000 ultrasound machine at the Regional Cancer Centre, Medical College campus, Thiruvananthapuram. The images were stored in BMP format at dimensions of 256 × 256 and processed with the improved oriented speckle reducing anisotropic diffusion filter. For analysis, 24 ultrasound cervix images were tested and the performance measured. Results: The filter achieved a maximum peak signal-to-noise ratio (PSNR) of 31 dB, a structural similarity index map (SSIM) of 0.88, and an edge preservation accuracy of 88%. Conclusion: The IADF filter is the optimal method and is capable of strong speckle suppression with low computational complexity. PMID:26624591
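The paper's IADF filter adds orientation handling, but the classic Perona-Malik anisotropic diffusion scheme it builds on can be sketched in a few lines; the conductance `g` shrinks toward zero at strong gradients (edges), so smoothing acts mainly in flat regions. Parameters here are illustrative, not the paper's:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.3, gamma=0.2):
    """Perona-Malik diffusion: iteratively add conductance-weighted
    differences from the four compass neighbors.  The conductance
    g = exp(-(|grad|/kappa)^2) nearly vanishes at edges, preserving them."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # north difference
        ds = np.roll(u, 1, axis=0) - u    # south
        de = np.roll(u, -1, axis=1) - u   # east
        dw = np.roll(u, 1, axis=1) - u    # west
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Note `gamma` must stay at or below 0.25 for a stable explicit update with four neighbors.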

  11. Respiratory triggered 4D cone-beam computed tomography: A novel method to reduce imaging dose

    PubMed Central

    Cooper, Benjamin J.; O’Brien, Ricky T.; Balik, Salim; Hugo, Geoffrey D.; Keall, Paul J.

    2013-01-01

    Purpose: A novel method called respiratory triggered 4D cone-beam computed tomography (RT 4D CBCT) is described whereby imaging dose can be reduced without degrading image quality. RT 4D CBCT utilizes a respiratory signal to trigger projections such that only a single projection is assigned to a given respiratory bin for each breathing cycle. In contrast, commercial 4D CBCT does not actively use the respiratory signal to minimize imaging dose. Methods: To compare RT 4D CBCT with conventional 4D CBCT, 3600 CBCT projections of a thorax phantom were gathered and reconstructed to generate a ground truth CBCT dataset. Simulation pairs of conventional 4D CBCT acquisitions and RT 4D CBCT acquisitions were developed assuming a sinusoidal respiratory signal which governs the selection of projections from the pool of 3600 original projections. The RT 4D CBCT acquisition triggers a single projection when the respiratory signal enters a desired acquisition bin; the conventional acquisition does not use a respiratory trigger, and projections are acquired at a constant frequency. Acquisition parameters studied were breathing period, acquisition time, and imager frequency. The performance of RT 4D CBCT using phase based and displacement based sorting was also studied. Image quality was quantified by calculating difference images of the test dataset from the ground truth dataset. Imaging dose was calculated by counting projections. Results: Using phase based sorting, RT 4D CBCT results in 47% less imaging dose on average compared to conventional 4D CBCT. Image quality differences were less than 4% at worst. Using displacement based sorting, RT 4D CBCT results in 57% less imaging dose on average than conventional 4D CBCT methods; however, image quality was 26% worse with RT 4D CBCT. Conclusions: Simulation studies have shown that RT 4D CBCT reduces imaging dose while maintaining comparable image quality for phase based 4D CBCT; image quality is degraded for displacement based RT 4D CBCT.
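The dose-saving mechanism — one projection per respiratory bin per breathing cycle versus constant-frequency acquisition — can be illustrated with a toy simulation. The 60 s scan, 4 s breathing period, 10 bins, and 5 Hz imager frequency below are arbitrary illustrative numbers, not the paper's acquisition parameters:

```python
import numpy as np

# Sinusoidal breathing -> respiratory phase advances linearly with time.
t = np.arange(0.0, 60.0, 0.02)            # 60 s scan, signal sampled at 50 Hz
phase = (t / 4.0) % 1.0                   # 4 s breathing period
bins = np.floor(phase * 10).astype(int)   # 10 respiratory phase bins
cycle = np.floor(t / 4.0).astype(int)     # breathing-cycle index

# RT 4D CBCT: trigger one projection the first time each (cycle, bin)
# combination is entered.
seen = set()
rt_projections = 0
for c, b in zip(cycle, bins):
    if (c, b) not in seen:
        seen.add((c, b))
        rt_projections += 1

# Conventional 4D CBCT: untriggered projections at a fixed 5 Hz rate.
conventional_projections = int(60.0 * 5)

print(rt_projections, conventional_projections)
```

With these numbers the triggered scheme acquires 150 projections (15 cycles × 10 bins) versus 300 for the fixed-rate scheme, a 50% dose reduction of the same order as the paper's reported 47%.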

  12. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time. PMID:25675454

  13. Edge-preserving image denoising via group coordinate descent on the GPU.

    PubMed

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.
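The majorize-minimize idea behind these 1D subproblems can be shown on a scalar signal: a quadratic majorizer for an edge-preserving penalty turns each MM step into a linear solve. This toy NumPy sketch solves the whole signal at once rather than performing the paper's parallel per-pixel updates, and `beta`, `delta` are illustrative:

```python
import numpy as np

def mm_denoise_1d(y, beta=1.0, delta=0.1, n_iter=50):
    """MM denoising of a 1D signal: minimize
        0.5*||x - y||^2 + beta * sum_i phi(x_{i+1} - x_i)
    with the edge-preserving hyperbola
        phi(t) = delta^2 * (sqrt(1 + (t/delta)^2) - 1).
    Each MM step majorizes phi by a quadratic with curvature
    w = phi'(t)/t = 1/sqrt(1 + (t/delta)^2) and solves the
    resulting tridiagonal normal equations (I + beta*D'WD) x = y."""
    x = y.copy()
    n = len(y)
    for _ in range(n_iter):
        d = np.diff(x)
        w = 1.0 / np.sqrt(1.0 + (d / delta) ** 2)   # small at big jumps
        A = np.eye(n)
        for i in range(n - 1):                       # assemble I + beta*D'WD
            A[i, i] += beta * w[i]
            A[i + 1, i + 1] += beta * w[i]
            A[i, i + 1] -= beta * w[i]
            A[i + 1, i] -= beta * w[i]
        x = np.linalg.solve(A, y)
    return x
```

Because `w` collapses at large jumps, the smoothing is suppressed across edges, which is the edge-preserving behavior the penalties in the paper provide.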

  14. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE)-an unbiased estimate of the mean-squared error (MSE). Unlike MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed SURE-optimal bilateral filter (SOBF). We selected the optimal parameters of SOBF using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
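A naive bilateral filter, the building block that SOBF/SMBF/SPBF optimize, is easy to sketch; the weights combine spatial closeness (`sigma_s`) with photometric similarity (`sigma_r`). In the paper these parameters are chosen by minimizing SURE; here they are fixed illustrative values:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Naive bilateral filter: weight = spatial Gaussian * range Gaussian,
    so pixels across a strong intensity edge get negligible weight."""
    rows, cols = img.shape
    out = np.zeros_like(img)
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = np.sum(w * window) / np.sum(w)
    return out
```

SURE-based selection would sweep `sigma_s` and `sigma_r` and keep the pair minimizing the risk estimate, with no need for the clean image.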

  15. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  16. 4D imaging of protein aggregation in live cells.

    PubMed

    Spokoini, Rachel; Shamir, Maya; Keness, Alma; Kaganovich, Daniel

    2013-01-01

    ubiquitinated are diverted to the IPOD, where they are actively aggregated in a protective compartment. Up until this point, the methodological paradigm of live-cell fluorescence microscopy has largely been to label proteins and track their locations in the cell at specific time-points, usually in two dimensions. As new technologies have begun to grant experimenters unprecedented access to the submicron scale in living cells, the dynamic architecture of the cytosol has come into view as a challenging new frontier for experimental characterization. We present a method for rapidly monitoring the 3D spatial distributions of multiple fluorescently labeled proteins in the yeast cytosol over time. 3D time-lapse imaging (4D imaging) is not merely a technical challenge; rather, it also facilitates a dramatic shift in the conceptual framework used to analyze cellular structure. We utilize a cytosolic folding sensor protein in live yeast to visualize distinct fates for misfolded proteins in cellular aggregation quality control, using rapid 4D fluorescent imaging. The temperature sensitive mutant of the Ubc9 protein (Ubc9(ts)) is extremely effective both as a sensor of cellular proteostasis and as a physiological model for tracking aggregation quality control. As with most ts proteins, Ubc9(ts) is fully folded and functional at permissive temperatures due to active cellular chaperones. Above 30 °C, or when the cell faces misfolding stress, Ubc9(ts) misfolds and follows the fate of a native globular protein that has been misfolded due to mutation, heat denaturation, or oxidative damage. By fusing it to GFP or other fluorophores, it can be tracked in 3D as it forms Stress Foci, or is directed to JUNQ or IPOD. PMID:23608881

  17. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with a sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  18. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are corrupted by noise. This work discusses various multiresolution denoising techniques, including the classical wavelet- and contourlet-based methods as well as emerging multiresolution methods. A new denoising method based on the dual tree contourlet transform (DCT) is proposed; the DCT possesses approximate shift invariance, directionality, and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effect and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Structure Similarity (SSIM) values.

  19. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
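Level-independent universal thresholding is simple to demonstrate. The paper uses a maximal overlap DWT (MODWT) to handle non-radix-2 signal lengths; the sketch below substitutes a single-level ordinary Haar transform on an even-length signal, which shows the same soft-thresholding rule with the universal threshold sigma*sqrt(2 log n):

```python
import numpy as np

def haar_denoise(y, sigma):
    """One-level Haar wavelet denoising with the universal threshold.
    Detail coefficients below sigma*sqrt(2*log n) are treated as noise
    and soft-thresholded away; the approximation band is kept."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)   # detail coefficients
    thr = sigma * np.sqrt(2 * np.log(len(y)))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
    out = np.empty_like(y, dtype=float)    # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

In practice the decomposition is carried over several levels, and `sigma` is estimated from the data (commonly via the median absolute deviation of the finest-scale details) so the pipeline runs without user intervention, as the paper emphasizes.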

  20. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.

  1. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages: de-noising and cleanup. The cleanup stage is carried out adaptively, being alternately repeated until the two conditions based on PSNR and SSIM are established. Implementation results show that the presented algorithm performs significantly better in de-noising; furthermore, its SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  2. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploited the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage multi-scale transforms, which provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.

  3. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    PubMed

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise in the images greatly degrades their quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve singular value decomposition (SVD) filtering performance in CT images. A filter based on SVD can effectively analyze characteristics of the image in the horizontal and/or vertical directions. According to the features of CT images, the discrete cosine transform (DCT) can be used to extract the region of interest and to shield the uninterested region, realizing the extraction of the structural characteristics of the image. SVD filtering is then applied to the image after the DCT, and a weighting function is constructed for adaptively weighted image reconstruction. The novel denoising approach was applied to CT image denoising, and the experimental results showed that the new method effectively improves the performance of SVD filtering.
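The SVD-filtering core is compact: small singular values mostly carry noise in a structured image, so truncating them suppresses noise while keeping the dominant horizontal/vertical structure. A minimal sketch of the truncation step only (without the paper's DCT masking and adaptive weighting):

```python
import numpy as np

def svd_truncate(img, k):
    """Rank-k SVD filtering: keep the k largest singular values,
    which carry the structured content, and zero out the rest."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[k:] = 0.0
    return (U * s) @ Vt        # equivalent to U @ diag(s) @ Vt
```

Choosing `k` trades noise suppression against loss of fine detail; the adaptive weighting in the paper is one way to soften that trade-off.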

  4. MR images denoising using DCT-based unbiased nonlocal means filter

    NASA Astrophysics Data System (ADS)

    Zheng, Xiuqing; Hu, Jinrong; Zhou, Jiuliu

    2013-03-01

    The non-local means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter that uses a low-pass filtered, low-dimensional version of the neighborhood for calculating the similarity weights. The discrete cosine transform (DCT) is used as a smoothing kernel, allowing both improvements in similarity estimation and computational speed-up. Experimental results show that the proposed filter achieves better denoising performance on MR images compared to other filters, such as the recently proposed NLM filter and the unbiased NLM (UNLM) filter.

  5. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. In experiments on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and K-SVD denoising methods. PMID:25993566
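The patch-PCA idea can be illustrated in its simplest form: collect patches, diagonalize their covariance, and reconstruct from the leading components, since trailing components are mostly noise. This is a much-simplified sketch (non-overlapping patches, hard component selection) rather than the paper's adaptive search windows, tensor decomposition, and LMMSE shrinkage:

```python
import numpy as np

def pca_patch_denoise(img, patch=4, keep=4):
    """Gather non-overlapping patches, apply PCA, and rebuild each patch
    from the `keep` leading principal components."""
    rows, cols = img.shape
    ps = []
    for i in range(0, rows - patch + 1, patch):
        for j in range(0, cols - patch + 1, patch):
            ps.append(img[i:i + patch, j:j + patch].ravel())
    X = np.array(ps)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(X)               # patch covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues ascending
    top = evecs[:, -keep:]                 # leading components
    Xd = (Xc @ top) @ top.T + mean         # project and reconstruct
    out = img.copy()
    idx = 0
    for i in range(0, rows - patch + 1, patch):
        for j in range(0, cols - patch + 1, patch):
            out[i:i + patch, j:j + patch] = Xd[idx].reshape(patch, patch)
            idx += 1
    return out
```

LMMSE shrinkage, as in the paper, would instead scale each coefficient by evals/(evals + noise variance) rather than hard-selecting components.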

  6. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  7. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising

    NASA Astrophysics Data System (ADS)

    Wu, Zhaojun; Wang, Qiang; Wu, Zhenghua; Shen, Yi

    2016-01-01

    Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue has a specific physical meaning and should be regularized differently. Moreover, the NNM-based methods exploit only the high spectral correlation, while ignoring the local structure of HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of HSI, the TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
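The weighted-versus-unweighted distinction is easy to see in the proximal step. Standard NNM shrinks every singular value by the same amount; WNNM assigns larger weights (stronger shrinkage) to smaller singular values, which carry less signal. A minimal sketch of one weighted singular value thresholding step, with the common weight choice w_i = c/(s_i + eps) (the full TWNNM method additionally adds TV regularization and an ADMM loop, which are not shown):

```python
import numpy as np

def weighted_svt(Y, c=1.0, eps=1e-6):
    """One weighted nuclear norm proximal step: shrink each singular
    value s_i by w_i = c/(s_i + eps), so large (signal) values are
    barely touched while small (noise) values are driven to zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)
    s = np.maximum(s - w, 0.0)
    return (U * s) @ Vt
```

For an HSI, `Y` would be the unfolded spatial-by-spectral matrix, whose clean version is approximately low-rank because of the high spectral correlation.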

  8. Improved DCT-based nonlocal means filter for MR images denoising.

    PubMed

    Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu

    2012-01-01

    The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights using the gray level information directly, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Due to the promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with more accurate estimation of weights and thus enhances denoising effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely, the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others.
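Computing similarity in a truncated DCT subspace can be sketched directly: compare only the low-frequency coefficients of two patches, which capture structure, and discard the high-frequency coefficients, which are mostly noise. The patch size, number of retained coefficients, and `h` below are illustrative, not the paper's settings:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0, :] /= np.sqrt(2.0)
    return M

def dct_similarity_weight(p, q, n_coef=4, h=0.1):
    """NLM-style similarity weight between two square patches, computed
    from a truncated 2-D DCT rather than raw gray levels."""
    n = p.shape[0]
    M = dct_matrix(n)
    P = M @ p @ M.T                        # 2-D DCT of each patch
    Q = M @ q @ M.T
    d2 = np.mean((P[:n_coef, :n_coef] - Q[:n_coef, :n_coef]) ** 2)
    return np.exp(-d2 / (h * h))
```

Truncation also cuts the per-comparison cost from n² to n_coef² coefficient differences, which is where the reported speed-up comes from.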

  9. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., Non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of the NL-means denoising on this quality metric Q. Our experiments are based on the late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimation of optimal parameters is provided using another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.

  10. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    Dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal to noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a wide range of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.

  11. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain translation invariance, as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one, respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures, along with fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for the multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive with state-of-the-art denoising approaches.

  12. Constrain static target kinetic iterative image reconstruction for 4D cardiac CT imaging

    NASA Astrophysics Data System (ADS)

    Alessio, Adam M.; La Riviere, Patrick J.

    2011-03-01

    Iterative image reconstruction offers improved signal-to-noise properties for CT imaging. A primary challenge with iterative methods is the substantial computation time. This computation time is even more prohibitive in 4D imaging applications, such as cardiac-gated or dynamic acquisition sequences. In this work, we propose only updating the time-varying elements of a 4D image sequence while constraining the static elements to be fixed or slowly varying in time. We test the method with simulations of 4D acquisitions based on measured cardiac patient data from a) a retrospective cardiac-gated CT acquisition and b) a dynamic perfusion CT acquisition. We target the kinetic elements with one of two methods: 1) position a circular ROI on the heart, assuming the area outside the ROI is essentially static throughout the imaging time; and 2) select varying elements from the coefficient of variation image formed from fast analytic reconstruction of all time frames. Targeted kinetic elements are updated with each iteration, while static elements remain fixed at initial image values formed from the reconstruction of data from all time frames. Results confirm that the computation time is proportional to the number of targeted elements; our simulations suggest that fewer than 30% of elements need to be updated in each frame, leading to more than threefold reductions in reconstruction time. The images reconstructed with the proposed method match the mean square error of full 4D reconstruction. The proposed method is amenable to most optimization algorithms and offers the potential for significant computation improvements, which could be traded off for more sophisticated system models or penalty terms.
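Strategy 2 above (selecting kinetic voxels from a coefficient-of-variation image) can be sketched in a few lines of NumPy. The function name and the quantile-based cutoff are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kinetic_mask(frames, frac=0.3):
    """Select the most time-varying voxels of a 4D sequence.

    frames: array of shape (T, ny, nx) -- one fast reconstruction per phase.
    frac:   fraction of voxels to flag as kinetic (the abstract suggests <30%).
    Returns a boolean mask; True voxels would be updated at every iteration,
    False voxels stay fixed at the all-frame reconstruction values.
    """
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    cov = std / np.maximum(mean, 1e-12)      # coefficient of variation image
    cutoff = np.quantile(cov, 1.0 - frac)    # keep the top `frac` of voxels
    return cov >= cutoff
```

On a synthetic sequence with a static background and a small pulsating region, the mask isolates exactly the moving voxels, which is the property the targeted update relies on.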

  13. Voxel-Wise Functional Connectomics Using Arterial Spin Labeling Functional Magnetic Resonance Imaging: The Role of Denoising.

    PubMed

    Liang, Xiaoyun; Connelly, Alan; Calamante, Fernando

    2015-11-01

    The objective of this study was to investigate voxel-wise functional connectomics using arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI). Since the ASL signal has an intrinsically low signal-to-noise ratio (SNR), the role of denoising is evaluated; in particular, a novel denoising method, the dual-tree complex wavelet transform (DT-CWT) combined with the nonlocal means (NLM) algorithm, is implemented and evaluated. Simulations were conducted to evaluate the performance of the proposed method in denoising images and in detecting functional networks from noisy data (including the accuracy and sensitivity of detection). In addition, denoising was applied to in vivo ASL datasets, followed by network analysis using graph theoretical approaches. Efficiency cost was used to evaluate the performance of denoising in detecting functional networks from in vivo ASL fMRI data. Simulations showed that denoising is effective in detecting voxel-wise functional networks from low-SNR data and/or from data with a small total number of time points. The capability of denoised voxel-wise functional connectivity analysis was also demonstrated with in vivo data. We concluded that denoising is important for voxel-wise functional connectivity using ASL fMRI and that the proposed DT-CWT-NLM method should be a useful ASL preprocessing step.

  14. The study of real-time denoising algorithm based on parallel computing for the MEMS IR imager

    NASA Astrophysics Data System (ADS)

    Gong, Cheng; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2011-11-01

    In recent years, MEMS-based optical readout infrared imaging technology has become a research hotspot. Studies show that the MEMS-based optical readout infrared imager features a high frame rate. Given the high data throughput and the computational complexity of denoising algorithms, it is difficult to ensure real-time image processing. In order to improve processing speed and achieve real-time operation, we studied a denoising algorithm based on parallel computing using an FPGA (Field Programmable Gate Array). In this paper, we analyze the imaging characteristics of the MEMS-based optical readout infrared imager and design parallel computing methods for real-time denoising using a hardware description language. Experiments show that the parallel denoising algorithm improves infrared image processing speed sufficiently to meet the real-time requirement.

  15. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low-rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The best multilinear rank approximation (BMRA) of a given tensor A is the lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the alternating least squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that ALS and Newton-type methods achieve comparable accuracy. Moreover, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
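As a hedged sketch of the ALS idea for a 3-way tensor, here is higher-order orthogonal iteration (HOOI), one standard alternating-least-squares scheme for the BMRA, in plain NumPy; the paper's Newton methods on Grassmann manifolds are not shown, and this is not the authors' code:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iter=20):
    """Best multilinear rank approximation of a 3-way tensor T via
    higher-order orthogonal iteration (an ALS scheme).  Returns the
    rank-(r1, r2, r3) approximation of T."""
    # initialise the factor matrices with truncated SVDs of the unfoldings (HOSVD)
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(3):
            # project T onto the other two subspaces, then refit mode n
            G = T
            for m in range(3):
                if m != n:
                    G = np.moveaxis(np.tensordot(U[m].T, G, axes=(1, m)), 0, m)
            U[n] = np.linalg.svd(unfold(G, n), full_matrices=False)[0][:, :ranks[n]]
    # core tensor, then reconstruct the low multilinear rank approximation
    G = T
    for m in range(3):
        G = np.moveaxis(np.tensordot(U[m].T, G, axes=(1, m)), 0, m)
    B = G
    for m in range(3):
        B = np.moveaxis(np.tensordot(U[m], B, axes=(1, m)), 0, m)
    return B
```

For an HSI cube, `T` would hold the spatial-spatial-spectral data and `ranks` the chosen multilinear rank; on a tensor that is exactly of that rank the approximation recovers it to machine precision.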

  16. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework to estimate transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using a motion-compensated estimation method. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.

  17. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from the current algorithms used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, those of the current denoising/smoothing algorithms. From our early stages of testing, the results of the new algorithm will provide insight into its capabilities on multiple-wavelength astronomy data sets.

  18. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    SciTech Connect

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. 
When all projections are used to reconstruct a 3D-CBCT by FDK, motion

  19. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

    PubMed

    Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

    2008-10-01

    The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher-order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function is introduced. A modification of the series is introduced so that only a finite number of terms is used to model the image wavelet coefficients while ensuring that the resulting PDF remains non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters.
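As background (not the paper's exact construction, which modifies the series for truncation and non-negativity), the Gauss-Hermite/Gram-Charlier form on which such a PDF is based expands the density around a standard Gaussian:

```latex
p_W(w) \;\approx\; \varphi(w)\sum_{k=0}^{K} c_k\,\mathrm{He}_k(w),
\qquad
c_k \;=\; \frac{1}{k!}\,\mathbb{E}\bigl[\mathrm{He}_k(W)\bigr],
```

where \(\varphi\) is the standard Gaussian density and \(\mathrm{He}_k\) are the probabilists' Hermite polynomials, orthogonal with respect to \(\varphi\) with \(\int \mathrm{He}_j \mathrm{He}_k \varphi \,dw = k!\,\delta_{jk}\); the coefficients \(c_k\) are therefore functions of the higher-order moments, which is what lets the model fit heavier-tailed empirical coefficient histograms than a few-parameter PDF can.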

  20. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    PubMed

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the SNR of the projection data declines sharply.
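A generic form of such a dictionary-regularized PWLS sinogram objective (a sketch; the exact weights and penalty in the paper may differ) is:

```latex
\hat{y} \;=\; \arg\min_{y,\;\{\alpha_i\}}\;
(y - \tilde{y})^{\mathsf T}\,\Sigma^{-1}\,(y - \tilde{y})
\;+\; \lambda \sum_i \bigl\lVert R_i\,y - D\,\alpha_i \bigr\rVert_2^2
\quad \text{s.t.}\;\; \lVert \alpha_i \rVert_0 \le s ,
```

where \(\tilde{y}\) is the measured sinogram, \(\Sigma\) a (typically diagonal) covariance capturing the noise statistics, \(R_i\) extracts the \(i\)-th sinogram patch, \(D\) is the learned dictionary with sparse codes \(\alpha_i\), and FBP is applied to the denoised estimate \(\hat{y}\).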

  1. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Gu, Xuejun

    2014-03-01

    Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). The proposed SMEIR algorithm consists of two alternating steps: 1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and 2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and back-projection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  2. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  3. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical coherence tomography (OCT) is gradually becoming a very important imaging technology in the biomedical field for its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform for OCT heart tube images is proposed. A bivariate function is constructed to model the joint probability density function (pdf) of a coefficient and its cousin in the contourlet domain. A bivariate shrinkage function is derived to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the image denoised by the proposed algorithm. The results show that the signal-to-noise ratio is improved while the edges of objects are preserved by the proposed algorithm. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter, Lee filter, as well as the bivariate shrinkage function for the wavelet-based algorithm, are conducted. The advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
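For reference, the classic bivariate MAP shrinkage rule of this family (derived by Şendur and Selesnick for a wavelet coefficient \(w_1\) and its parent \(w_2\); the contourlet "cousin" presumably plays the analogous role here, and the paper's exact rule may differ) is:

```latex
\hat{w}_1 \;=\;
\frac{\Bigl(\sqrt{w_1^2 + w_2^2} \;-\; \tfrac{\sqrt{3}\,\sigma_n^2}{\sigma}\Bigr)_{+}}
     {\sqrt{w_1^2 + w_2^2}}\; w_1 ,
```

where \((x)_{+} = \max(x, 0)\), \(\sigma_n^2\) is the noise variance, and \(\sigma\) is the marginal standard deviation of the coefficient pair. The joint dependence means a coefficient is shrunk less when its cousin is also large, which preserves edges that persist across scales.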

  4. Neutral wind estimation from 4-D ionospheric electron density images

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Crowley, G.; Curtis, N.

    2009-06-01

    We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E × B drift, gravity, and diffusion effects. We use ion, electron, and neutral species temperatures and neutral densities from the Thermosphere Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM-ASPEN) for estimating the magnitude of these effects. We then model the neutral wind as a power series at a given longitude for a range of latitudes and altitudes. As a test of our algorithm, we have input TIMEGCM electron densities to our algorithm. The model of the neutral wind is computed at hourly intervals and validated by comparing to the “true” TIMEGCM neutral wind fields. We show results for a storm day: 10 November 2004. The agreement between the winds derived from EMPIRE versus the TIMEGCM “true” winds appears to be time-dependent for the day under consideration. This may indicate that the diurnal variation in certain driving processes impacts the accuracy of our neutral wind model. Despite the potential temporal and spatial limits on accuracy, estimating neutral wind speed from measured electron density fields via our algorithm shows great promise as a complement to the more sparse radar and satellite measurements.
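The electron continuity model underlying EMPIRE has the generic form below (symbols as commonly used; the paper's parameterization of the field-aligned neutral-wind term is not reproduced here):

```latex
\frac{\partial N_e}{\partial t} \;=\; P \;-\; L \;-\; \nabla\!\cdot\!\bigl(N_e\,\mathbf{v}\bigr),
```

where \(N_e\) is the electron density imaged in 4D, \(P\) and \(L\) are the production and loss rates, and the drift \(\mathbf{v}\) combines \(\mathbf{E}\times\mathbf{B}\) drift, gravity, and diffusion with the field-aligned component driven by the neutral wind, which is the term being estimated from the density images.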

  5. Impact of 4D image quality on the accuracy of target definition.

    PubMed

    Nielsen, Tine Bjørn; Hansen, Christian Rønn; Westberg, Jonas; Hansen, Olfred; Brink, Carsten

    2016-03-01

    Delineation accuracy of target shape and position depends on the image quality. This study investigates whether the image quality on standard 4D systems has an influence comparable to the overall delineation uncertainty. A moving lung target was imaged using a dynamic thorax phantom on three different 4D computed tomography (CT) systems and a 4D cone beam CT (CBCT) system using pre-defined clinical scanning protocols. Peak-to-peak motion and target volume were registered using rigid registration and automatic delineation, respectively. A spatial distribution of the imaging uncertainty was calculated as the distance deviation between the imaged target and the true target shape. The measured motions were smaller than the actual motions, and there were volume differences of the imaged target between respiration phases. Imaging uncertainties of >0.4 cm were measured in the motion direction, which shows a large distortion of the imaged target shape. Imaging uncertainties of standard 4D systems are of similar size to typical GTV-CTV expansions (0.5-1 cm) and contribute considerably to the target definition uncertainty. Optimising and validating 4D systems is recommended in order to obtain the most accurate imaged target shape.

  6. Study of real-time image denoising and hole-filling for micro-cantilever IR FPA imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Yun; Zhao, Yuejin; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Li, Xiaomeng; Zhao, Zhu; Yu, Xiaomei; Hui, Mei; Wu, Hong

    2014-10-01

    This paper proposes and experimentally demonstrates a new denoising and hole-filling algorithm, based on discrete-point removal and bilinear interpolation, for the bi-material cantilever FPA infrared imaging system. In practice, because of the limitations of the FPA manufacturing process and the optical readout system, the quality of the obtained images is often unsatisfactory: considerable noise and holes appear in the images, which restrict the application of the infrared imaging system. After analyzing the causes of the noise and holes, an algorithm is presented to improve the quality of the infrared images. First, the statistical characteristics, such as probability histograms, of noisy images are analyzed in detail, and the IR images are denoised by discrete-point removal. Second, the holes are filled by bilinear interpolation. In this step, the reference points are found through a partial-derivative method instead of simply using the edge points of the holes; this detects the true reference points effectively and brings the filled values much closer to the true ones. Finally, the algorithm is applied successfully to different infrared images. Experimental results show that the IR images can be denoised effectively and the SNRs are improved substantially. Meanwhile, the filling ratios of target holes reach as high as 95% and good visual quality is achieved. This demonstrates that the algorithm has the advantages of high speed, high precision and easy implementation. It is a highly efficient real-time image processing algorithm for bi-material micro-cantilever FPA infrared imaging systems.
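The hole-filling step can be sketched in NumPy as below. This is a simplified stand-in: it interpolates each hole pixel from the nearest valid pixels left/right/up/down with inverse-distance weights, whereas the paper locates reference points via partial derivatives; names are illustrative:

```python
import numpy as np

def fill_holes(img, mask):
    """Fill flagged pixels by bilinear-style interpolation between the
    nearest valid pixels along each row and column.

    img:  2D image array.
    mask: boolean array, True where the pixel is a hole."""
    out = img.copy()
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        vals, wts = [], []
        for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            # walk outward until the first non-hole pixel or the border
            yy, xx, d = y, x, 0
            while 0 <= yy + dy < img.shape[0] and 0 <= xx + dx < img.shape[1]:
                yy += dy; xx += dx; d += 1
                if not mask[yy, xx]:
                    vals.append(img[yy, xx])
                    wts.append(1.0 / d)       # closer references weigh more
                    break
        if vals:
            out[y, x] = np.average(vals, weights=wts)
    return out
```

On a linear-ramp image, a punched-out pixel is restored exactly, which is the behaviour bilinear interpolation guarantees for locally planar intensity.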

  7. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. In the preprocessing stage, a sample neighborhood database is created using a locality-sensitive hashing data structure. The CT image noise is then removed by a non-local means algorithm based on sample neighborhoods accessed quickly via locality-sensitive hashing. The experimental results showed that the proposed algorithm greatly reduces the execution time compared to NLM while effectively preserving image edges and details.
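The idea of restricting NLM averaging to hash buckets can be sketched as follows. This is a toy stand-in: patches are bucketed by quantised mean intensity, whereas the paper hashes full neighbourhood vectors with LSH; all names and parameters are illustrative:

```python
import numpy as np

def hashed_nlm(img, patch=3, bins=16, h=0.1):
    """Non-local means restricted to patches that share a hash bucket.

    Each centre pixel is replaced by a weighted average of the centres of
    same-bucket patches, with Gaussian weights on patch distance.  Bucketing
    avoids comparing every patch against every other patch."""
    r = patch // 2
    H, W = img.shape
    # gather all interior patches and their centre coordinates
    coords = [(y, x) for y in range(r, H - r) for x in range(r, W - r)]
    patches = np.array([img[y - r:y + r + 1, x - r:x + r + 1].ravel()
                        for y, x in coords])
    # toy hash: quantised patch mean (the paper uses LSH on the full patch)
    keys = np.clip((patches.mean(axis=1) * bins).astype(int), 0, bins - 1)
    out = img.copy()
    for b in np.unique(keys):
        idx = np.nonzero(keys == b)[0]
        P = patches[idx]
        for i in idx:
            d2 = ((patches[i] - P) ** 2).mean(axis=1)
            w = np.exp(-d2 / h ** 2)
            y, x = coords[i]
            out[y, x] = np.average(P[:, P.shape[1] // 2], weights=w)
    return out
```

On a noisy constant image the bucket averaging suppresses the noise variance substantially, while the per-bucket restriction keeps the comparison count far below the all-pairs cost of plain NLM.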

  8. Generalized non-local means filtering for image denoising

    NASA Astrophysics Data System (ADS)

    Dolui, Sudipto; Salgado Patarroyo, Iván. C.; Michailovich, Oleg V.

    2014-02-01

    Non-local means (NLM) filtering has been shown to outperform alternative denoising methodologies under the model of additive white Gaussian noise contamination. Recently, several theoretical frameworks have been developed to extend this class of algorithms to more general types of noise statistics. However, many of these frameworks are specifically designed for a single noise contamination model, and are far from optimal across varying noise statistics. The NLM filtering techniques rely on the definition of a similarity measure, which quantifies the similarity of two neighbourhoods along with their respective centroids. The key to the unification of the NLM filter for different noise statistics lies in the definition of a universal similarity measure which is guaranteed to provide favourable performance irrespective of the statistics of the noise. Accordingly, the main contribution of this work is to provide a rigorous statistical framework to derive such a universal similarity measure, while highlighting some of its theoretical and practical favourable characteristics. Additionally, the closed form expressions of the proposed similarity measure are provided for a number of important noise scenarios and the practical utility of the proposed similarity measure is demonstrated through numerical experiments.

  9. The Use of Gated and 4D CT Imaging in Planning for Stereotactic Body Radiation Therapy

    SciTech Connect

    D'Souza, Warren D. E-mail: wdsou001@umaryland.edu; Nazareth, Daryl P.; Zhang Bin; Deyoung, Chad; Suntharalingam, Mohan; Kwok, Young; Yu, Cedric X.; Regine, William F.

    2007-07-01

    The localization of treatment targets is of utmost importance for patients receiving stereotactic body radiation therapy (SBRT), where the dose per fraction is large. While both setup and respiration-induced motion components affect the localization of the treatment volume, the purpose of this work is to describe our management of the intrafraction localization uncertainty induced by normal respiration. At our institution, we have implemented gated computed tomography (CT) acquisition with an active breathing control system (ABC), and 4-dimensional (4D) CT using a skin-based marker and retrospective respiration phase-based image sorting. During gated simulation, 3D CT images were acquired corresponding to end-inhalation and end-exhalation. For 4D CT imaging, 3D CT images were acquired corresponding to 8 phases of the respiratory cycle. In addition to gated or 4D CT images, we acquired a conventional free-breathing CT (FB). For both gated and 4D CT images, the target contours were registered to the FB scan in the planning system. These contours were then combined in the FB image set to form the internal target volume (ITV). Dynamic conformal arc treatment plans were generated for the ITV using the FB scan and the gated or 4D scans with an additional 7-mm margin for patient setup uncertainty. We have described our results for a pancreas and a lung tumor case. Plans were normalized so that the PTV received 95% of the prescription dose. The dose distribution for all the critical structures in the pancreas and lung tumor cases showed greater sparing when the ITV was defined using gated or 4D CT images than when the FB scan was used. Our results show that patient-specific target definition using gated or 4D CT scans leads to improved normal tissue sparing.

  10. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  12. Four-dimensional (4D) PET/CT imaging of the thorax

    SciTech Connect

    Nehmeh, S.A.; Erdi, Y.E.; Pan, T.

    2004-12-01

    We have reported in our previous studies on the methodology and feasibility of 4D-PET (gated PET) acquisition to reduce respiratory motion artifacts in PET imaging of the thorax. In this study, we expand our investigation to address the problem of respiratory motion in PET/CT imaging. The respiratory motion of four lung cancer patients was monitored by tracking external markers placed on the thorax. A 4D-CT acquisition was performed using a 'step-and-shoot' technique, in which computed tomography (CT) projection data were acquired over a complete respiratory cycle at each couch position. The period of each CT acquisition segment was time stamped with an 'x-ray ON' signal, which was recorded by the tracking system. 4D-CT data were then sorted into 10 groups, according to their corresponding phase of the breathing cycle. 4D-PET data were acquired in the gated mode, where each breathing cycle was divided into ten 0.5 s bins. For both CT and PET acquisitions, patients received audio prompting to regularize breathing. The 4D-CT and 4D-PET data were then correlated according to respiratory phase. The effects of 4D acquisition on improving the co-registration of PET and CT images, reducing motion smearing, and consequently increasing the quantitation of the SUV were investigated. Tumor motion in PET and CT was also quantified and compared. 4D-PET with phase-matched 4D-CTAC showed an improvement in PET-CT image co-registration accuracy of up to 41%, compared to measurements from 4D-PET with clinical CTAC. Gating PET data in correlation with respiratory motion reduced motion-induced smearing, thereby decreasing the observed tumor volume by as much as 43%. 4D-PET lesion volumes showed a maximum deviation of 19% between clinical CT and phase-matched 4D-CT attenuation-corrected PET images. In CT, 4D acquisition resulted in increasing the tumor volume in two patients by up to 79%, and decreasing it in the other two by up to 35%. Consequently, these

  13. Improved image quality and computation reduction in 4-D reconstruction of cardiac-gated SPECT images.

    PubMed

    Narayanan, M V; King, M A; Wernick, M N; Byrne, C L; Soares, E J; Pretorius, P H

    2000-05-01

    Spatiotemporal reconstruction of cardiac-gated SPECT images permits us to obtain valuable information related to cardiac function. However, the task of reconstructing this four-dimensional (4-D) data set is computation intensive. Typically, these studies are reconstructed frame-by-frame: a nonoptimal approach because temporal correlations in the signal are not accounted for. In this work, we show that the compression and signal decorrelation properties of the Karhunen-Loève (KL) transform may be used to greatly simplify the spatiotemporal reconstruction problem. The gated projections are first KL transformed in the temporal direction. This results in a sequence of KL-transformed projection images for which the signal components are uncorrelated along the time axis. As a result, the 4-D reconstruction task is simplified to a series of three-dimensional (3-D) reconstructions in the KL domain. The reconstructed KL components are subsequently inverse KL transformed to obtain the entire spatiotemporal reconstruction set. Our simulation and clinical results indicate that KL processing provides image sequences that are less noisy than are conventional frame-by-frame reconstructions. Additionally, by discarding high-order KL components that are dominated by noise, we can achieve savings in computation time because fewer reconstructions are needed in comparison to conventional frame-by-frame reconstructions.
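    The KL step described above is an ordinary temporal PCA of the gated frames: transform, reconstruct (or here, simply truncate) each component independently, then invert. A toy numpy sketch of the decorrelate-truncate-invert idea (the frame sizes, the rank-1 "signal", and the choice of keeping two components are illustrative, not the paper's setup):

```python
import numpy as np

def kl_basis(frames):
    """Temporal KL (PCA) basis from gated frames of shape (T, N)."""
    c = np.cov(frames)           # T x T temporal covariance
    _, v = np.linalg.eigh(c)     # eigenvectors, ascending eigenvalue order
    return v[:, ::-1]            # columns = KL vectors, largest component first

rng = np.random.default_rng(1)
T, N = 8, 256
# Rank-1 "cardiac" signal modulated over the gate, plus measurement noise.
signal = np.outer(np.sin(np.linspace(0, 2 * np.pi, T)), rng.random(N))
frames = signal + 0.05 * rng.standard_normal((T, N))

v = kl_basis(frames)
kl = v.T @ frames                # KL-domain frames: temporally decorrelated
k = 2                            # keep leading components, discard noisy ones
approx = v[:, :k] @ kl[:k]       # inverse KL with truncation
err = np.linalg.norm(approx - signal) / np.linalg.norm(signal)
```

    In the paper the per-component step is a full 3-D reconstruction rather than this truncation, but the computational saving comes from the same place: only the low-order KL components need processing.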

  14. Improvement of the cine-CT based 4D-CT imaging

    SciTech Connect

    Pan Tinsu; Sun Xiaojun; Luo Dershan

    2007-11-15

    An improved 4D-CT utility has been developed on the GE LightSpeed multislice CT (MSCT) and Discovery PET/CT scanners, which have the cine CT scan capability. Two new features have been added in this 4D-CT over the commercial Advantage 4D-CT from GE. One feature was a new tool for disabling parts of the respiratory signal with irregular respiration and improving the accuracy of phase determination for the respiratory signal from the Varian real-time positioning and monitoring (RPM) system before sorting of the cine CT images into the 4D-CT images. The second feature was to allow generation of the maximum-intensity-projection (MIP), average (AVG) and minimum-intensity-projection (mip) CT images from the cine CT images without a respiratory signal. The implementation enables the assessment of tumor motion in treatment planning with the MIP, AVG, and mip CT images on the GE MSCT and PET/CT scanners without the RPM and the Advantage 4D-CT with a GE Advantage windows workstation. Several clinical examples are included to illustrate this new application.
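    The MIP, AVG, and mip images described above are per-voxel maximum, mean, and minimum over the cine time axis. A trivial numpy sketch (the (T, Z, Y, X) array layout is an assumption):

```python
import numpy as np

def intensity_projections(cine):
    """MIP, AVG, and mip volumes from a cine CT series of shape (T, Z, Y, X)."""
    mip_max = cine.max(axis=0)    # maximum intensity projection over time
    avg = cine.mean(axis=0)       # average CT
    mip_min = cine.min(axis=0)    # minimum intensity projection over time
    return mip_max, avg, mip_min

rng = np.random.default_rng(0)
cine = rng.normal(size=(10, 4, 16, 16))   # e.g., 10 respiratory phases
mx, av, mn = intensity_projections(cine)
```

    This is why the second feature needs no respiratory signal: unlike phase sorting, these projections are order-independent reductions over the cine frames.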

  15. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR).

    PubMed

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to tumor size overestimation of up to 150% and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
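    The OSEM-TV reconstruction builds on the standard EM update for Poisson projection data. A minimal subset-free MLEM sketch on a toy system matrix (the matrix sizes and count levels are illustrative; the TV penalty, subsets, and SMEIR motion-model steps are omitted):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for emission tomography: y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                 # uniform nonnegative start
    sens = A.sum(axis=0)                    # sensitivity image (backprojected ones)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative EM update
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 10))                    # toy system (projection) matrix
x_true = rng.random(10) * 50                # toy activity distribution
y = rng.poisson(A @ x_true).astype(float)   # noisy projection data
x_hat = mlem(A, y)
```

    The multiplicative form keeps the iterates nonnegative automatically, which is why EM-type updates are the workhorse inside schemes like OSEM-TV.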

  19. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is introduced to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved 12.74-fold and 5.12-fold compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase

  20. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts, introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models has yet to be further improved. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms. PMID:27045423
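    Anisotropic diffusion of the Perona-Malik type, one ingredient of the regularization mentioned above, can be sketched as follows (the periodic border handling, kappa, and step size are simplifying choices; the paper's fBm patch model is not reproduced here):

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=0.5, step=0.2):
    """Perona-Malik diffusion: smooths within regions while the edge-stopping
    function g(d) = exp(-(d/kappa)^2) suppresses flow across strong edges.
    Borders are treated periodically via np.roll for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbors.
        diffs = [np.roll(u, 1, 0) - u, np.roll(u, -1, 0) - u,
                 np.roll(u, 1, 1) - u, np.roll(u, -1, 1) - u]
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

rng = np.random.default_rng(4)
noisy = np.zeros((32, 32))
noisy[:, 16:] = 1.0                          # a step edge
noisy += 0.1 * rng.standard_normal(noisy.shape)
smoothed = perona_malik(noisy)
```

    Because the conductance collapses where gradients are large, the step edge survives while the flat regions are averaged, which is the structure/texture-preserving behavior the paper relies on.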

  2. Novel example-based method for super-resolution and denoising of medical images.

    PubMed

    Dinh-Hoan Trinh; Luong, Marie; Dibos, Francoise; Rocchisani, Jean-Marie; Canh-Duong Pham; Nguyen, Truong Q

    2014-04-01

    In this paper, we propose a novel example-based method for denoising and super-resolution of medical images. The objective is to estimate a high-resolution image from a single noisy low-resolution image, with the help of a given database of high- and low-resolution image patch pairs. Denoising and super-resolution are performed in this paper on each image patch. For each given input low-resolution patch, its high-resolution version is estimated based on finding a nonnegative sparse linear representation of the input patch over the low-resolution patches from the database, where the coefficients of the representation strongly depend on the similarity between the input patch and the sample patches in the database. The problem of finding the nonnegative sparse linear representation is modeled as a nonnegative quadratic programming problem. The proposed method is especially useful for the case of noise-corrupted, low-resolution images. Experimental results show that the proposed method outperforms other state-of-the-art super-resolution methods while effectively removing noise.
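    The nonnegative coding step can be approximated with a simple projected-gradient nonnegative least-squares solver; a numpy sketch (the paper formulates a nonnegative quadratic program with similarity-dependent coefficients, which this toy version does not include):

```python
import numpy as np

def nnls_pg(D, p, n_iter=2000):
    """Projected gradient for min_{c >= 0} ||D c - p||^2: the nonnegative
    coding step. The HR patch estimate would then be (HR dictionary) @ c."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic, then projection onto c >= 0.
        c = np.maximum(c - step * (D.T @ (D @ c - p)), 0.0)
    return c

rng = np.random.default_rng(5)
D = rng.random((25, 10))            # columns: vectorized low-res sample patches
c_true = np.zeros(10)
c_true[[1, 4, 7]] = [0.5, 1.2, 0.3] # few active database patches
p = D @ c_true                      # input low-res patch
c = nnls_pg(D, p)
```

    Nonnegativity tends to produce naturally sparse codes here, which is why only a handful of database patches end up contributing to each reconstructed patch.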

  3. Denoising techniques combined to Monte Carlo simulations for the prediction of high-resolution portal images in radiotherapy treatment verification

    NASA Astrophysics Data System (ADS)

    Lazaro, D.; Barat, E.; Le Loirec, C.; Dautremer, T.; Montagu, T.; Guérin, L.; Batalla, A.

    2013-05-01

    This work investigates the possibility of combining Monte Carlo (MC) simulations with a denoising algorithm for the accurate prediction of images acquired using amorphous silicon (a-Si) electronic portal imaging devices (EPIDs). An accurate MC model of the Siemens OptiVue1000 EPID was first developed using the PENELOPE code, integrating a non-uniform backscatter modelling. Two already existing denoising algorithms were then applied on simulated portal images, namely the iterative reduction of noise (IRON) method and the locally adaptive Savitzky-Golay (LASG) method. A third denoising method, based on a nonparametric Bayesian framework and called DPGLM (for Dirichlet process generalized linear model), was also developed. The performance of the IRON, LASG and DPGLM methods, in terms of smoothing capabilities and computation time, was compared for portal images computed for different values of the RMS pixel noise (up to 10%) in three different configurations: a heterogeneous phantom irradiated by a non-conformal 15 × 15 cm2 field, a conformal beam from a pelvis treatment plan, and an IMRT beam from a prostate treatment plan. For all configurations, DPGLM outperforms both IRON and LASG by providing better smoothing performance and demonstrating a better robustness with respect to noise. Additionally, no parameter tuning is required by DPGLM, which makes the denoising step very generic and easy to handle for any portal image. Concerning the computation time, the denoising of 1024 × 1024 images takes about 1 h 30 min, 2 h and 5 min using DPGLM, IRON, and LASG, respectively. This paper shows the feasibility of predicting accurate portal images, within a few hours and at the same resolution as real images, by combining MC simulations with the DPGLM denoising algorithm.

  4. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging. However, they have different constraints and requirements. For both modalities, both prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of their chest wall, diaphragm, and/or spine, so the patient cooperation needed by some of the gating and tracking techniques is difficult to achieve without causing patient discomfort. Moreover, we are interested in the mechanical function of their thorax in its natural form in tidal breathing. Therefore free-breathing MRI acquisition is the ideal modality of imaging for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This typically produces several thousand slices which contain both the anatomic and dynamic information. However, it is not trivial to form a consistent and well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and does not need breath holding or any external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.

  5. A unified framework for Bayesian denoising for several medical and biological imaging modalities.

    PubMed

    Sanches, João M; Nascimento, Jacinto C; Marques, Jorge S

    2007-01-01

    Multiplicative noise is often present in several medical and biological imaging modalities, such as MRI, Ultrasound, PET/SPECT and Fluorescence Microscopy. Removing noise while preserving the details is not a trivial task. Bayesian algorithms have been used to tackle this problem. They succeed in accomplishing this task, but they lead to a computational burden as the image dimensionality increases. Therefore, a significant effort has been made to accomplish this tradeoff, i.e., to develop fast and reliable algorithms that remove noise without distorting relevant clinical information. This paper provides a new unified framework for Bayesian denoising of images corrupted by additive and multiplicative noise. This makes it possible to deal with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions, respectively. The proposed algorithm is based on the maximum a posteriori (MAP) criterion, and edge-preserving priors are used to avoid the distortion of relevant image details. The denoising task is performed by an iterative scheme based on a Sylvester/Lyapunov equation. This approach makes it possible to use fast and efficient algorithms described in the control-theory literature to solve the Sylvester/Lyapunov equation. Experimental results with synthetic and real data attest to the performance of the proposed technique, and competitive results are achieved when comparing to state-of-the-art methods.
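    For small problems, a Sylvester equation A X + X B = C can be solved directly by Kronecker vectorization; a numpy sketch (illustrative only; the fast control-theory solvers the paper refers to, e.g. Bartels-Stewart-type methods, avoid forming this large linear system):

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve A X + X B = C via vec(A X + X B) = (I ⊗ A + B^T ⊗ I) vec(X),
    with vec denoting column-stacking. Cost grows as (n*m)^3, so this is
    only sensible for small test problems."""
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))   # column-stacked vec(X)
    return x.reshape((n, m), order="F")

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((4, 3))
C = A @ X_true + X_true @ B
X = solve_sylvester_kron(A, B, C)
```

    A unique solution exists whenever no eigenvalue of A is the negative of an eigenvalue of B, which holds almost surely for random matrices like these.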

  6. Entropy-based straight kernel filter for echocardiography image denoising.

    PubMed

    Rajalaxmi, S; Nirmala, S

    2014-10-01

    A new filter has been proposed with the aim of eliminating speckle noise from 2D echocardiography images. This speckle noise has to be eliminated to avoid pseudo prediction of the underlying anatomical facts. The proposed filter uses an entropy parameter to measure the disorganized occurrence of noise pixels in each row and column and to increase the image visibility. Straight kernels of 3 pixels each are chosen for the filtering process, and the filter is slid over the image to eliminate speckle. The peak signal-to-noise ratio (PSNR) is obtained in the range of 147 dB, and the root mean square error (RMSE) is very low, approximately 0.15. The proposed filter was applied to 36 echocardiography images, and the filter is able to reveal the actual anatomical detail without degrading the edges. PMID:24838117

  7. 4D rotational x-ray imaging of wrist joint dynamic motion

    SciTech Connect

    Carelsen, Bart; Bakker, Niels H.; Strackee, Simon D.; Boon, Sjirk N.; Maas, Mario; Sabczynski, Joerg; Grimbergen, Cornelis A.; Streekstra, Geert J.

    2005-09-15

    Current methods for imaging joint motion are limited to either two-dimensional (2D) video fluoroscopy, or to animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images. This involves several x-ray modalities and sophisticated 2D to 3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized to the x-ray acquisition to yield multiple sets of projection images, which are reconstructed to a series of time-resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To investigate the obtained image quality, the full width at half maximum (FWHM) of the point spread function (PSF), via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were determined on reconstructions of a bullet and rod phantom, using 4D-RX as well as stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation, and on 41 and 34 projection images of a moving phantom, was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was 1.1, 1.7, and 2.2 mm orthogonal to the direction of motion, and 0.6, 0.7, and 1.0 mm parallel to it, respectively. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used and not to the motion of the object. Using 41 projection images appears to be the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints.
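    The FWHM-via-edge-spread-function measurement used above can be sketched numerically: differentiate the ESF to get the line-spread function, then measure its width at half maximum. A numpy sketch (the synthetic Gaussian edge and the integer sampling grid are assumptions; without sub-sample interpolation the estimate is quantized to whole samples):

```python
import numpy as np
from math import erf

def fwhm_from_esf(esf, dx=1.0):
    """FWHM of the line-spread function obtained by differentiating an edge
    spread function; resolution is limited to the sample grid."""
    lsf = np.abs(np.gradient(esf, dx))            # LSF = derivative of ESF
    above = np.where(lsf >= lsf.max() / 2.0)[0]   # samples at or above half max
    return (above[-1] - above[0]) * dx

# Synthetic edge blurred by a Gaussian PSF of known sigma:
sigma = 2.0
x = np.arange(-30.0, 31.0)
esf = np.array([0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0)))) for v in x])
fwhm = fwhm_from_esf(esf)   # Gaussian truth: 2.355 * sigma ≈ 4.71
```

    On this unit grid the outermost-sample rule returns 4.0 rather than 4.71, which illustrates why practical implementations interpolate around the half-maximum crossings.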

  8. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time limits its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, focusing on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. PMID:27084318
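    For reference, the sequential filter being parallelized looks like this in its naive form; a small numpy sketch (patch size, search window, and h are illustrative, and this O(N · search²) per-pixel loop is exactly the cost that motivates the hybrid-parallel implementation):

```python
import numpy as np

def nlm(img, patch=3, search=5, h=0.15):
    """Naive non-local means: each pixel becomes a weighted average of pixels
    whose surrounding patches are similar; weights decay with patch distance."""
    pad = patch // 2
    u = np.pad(img.astype(float), pad, mode="reflect")
    rows, cols = img.shape
    s = search // 2
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            p0 = u[i:i + patch, j:j + patch]      # reference patch
            wsum = acc = 0.0
            for ii in range(max(i - s, 0), min(i + s + 1, rows)):
                for jj in range(max(j - s, 0), min(j + s + 1, cols)):
                    q = u[ii:ii + patch, jj:jj + patch]
                    w = np.exp(-((p0 - q) ** 2).mean() / (h * h))
                    wsum += w
                    acc += w * img[ii, jj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(7)
clean = np.ones((16, 16))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nlm(noisy)
```

    Every pixel's inner loop is independent of the others, which is what makes the algorithm a natural fit for the GPU/shared-memory optimizations the paper describes.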

  10. Population of anatomically variable 4D XCAT adult phantoms for imaging research and optimization

    SciTech Connect

    Segars, W. P.; Bond, Jason; Frush, Jack; Hon, Sylvia; Eckersley, Chris; Samei, E.; Williams, Cameron H.; Frush, D.; Feng Jianqiao; Tward, Daniel J.; Ratnanather, J. T.; Miller, M. I.

    2013-04-15

    Purpose: The authors previously developed the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. The XCAT consisted of highly detailed whole-body models for the standard male and female adult, including the cardiac and respiratory motions. In this work, the authors extend the XCAT beyond these reference anatomies by developing a series of anatomically variable 4D XCAT adult phantoms for imaging research, the first library of 4D computational phantoms. Methods: The initial anatomy of each phantom was based on chest-abdomen-pelvis computed tomography data from normal patients obtained from the Duke University database. The major organs and structures for each phantom were segmented from the corresponding data and defined using nonuniform rational B-spline surfaces. To complete the body, the authors manually added on the head, arms, and legs using the original XCAT adult male and female anatomies. The structures were scaled to best match the age and anatomy of the patient. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from the template XCAT phantom (male or female) to the target patient model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. Each new phantom was refined by checking for anatomical accuracy via inspection of the models. Results: Using these methods, the authors created a series of computerized phantoms with thousands of anatomical structures and modeling cardiac and respiratory motions. The database consists of 58 (35 male and 23 female) anatomically variable phantoms in total. Like the original XCAT, these phantoms can be combined with existing simulation packages to simulate realistic imaging data. 
Each new phantom contains parameterized models for the anatomy and the cardiac and respiratory motions and can, therefore, serve

  11. Geometric moment based nonlocal-means filter for ultrasound image denoising

    NASA Astrophysics Data System (ADS)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, and despeckling is therefore an important processing step. The original nonlocal means (NLM) filter can remove speckle noise and preserve texture information effectively when the image corruption is relatively mild. But when the noise in the image is strong, NLM produces fictitious texture information, which degrades its denoising performance. In this paper, a novel nonlocal means filter is proposed: we introduce geometric moments into the NLM filter. Although geometric moments are not orthogonal moments, they are popular for their simplicity, yet their usefulness for restoration had not previously been demonstrated. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.
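
    A raw-geometric-moment patch descriptor of the kind the abstract refers to might be sketched as follows (illustrative only, not the authors' formulation; the moment order and the distance used for patch comparison are assumptions):

    ```python
    import numpy as np

    def geometric_moments(patch, order=2):
        """Raw geometric moments m_pq = sum_x sum_y x^p * y^q * I(y, x)
        of a patch, for all p + q <= order; a compact patch descriptor."""
        h, w = patch.shape
        y, x = np.mgrid[0:h, 0:w]
        return np.array([(patch * (x ** p) * (y ** q)).sum()
                         for p in range(order + 1)
                         for q in range(order + 1 - p)])

    def moment_distance(p1, p2, order=2):
        """Compare patches via their moment vectors rather than raw pixels,
        which is less sensitive to pixelwise noise."""
        return np.linalg.norm(geometric_moments(p1, order)
                              - geometric_moments(p2, order))
    ```

    In a moment-based NLM variant, such a distance would replace the raw patch difference when computing the similarity weights.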

  12. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis, since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations. Moreover, wavelet thresholding is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the wavelet that yields a segmentation with the largest area within the cell. We study different wavelet families and conclude that the db1 wavelet is the best, and that it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
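
    Since db1 is the Haar wavelet, a one-level Haar soft-thresholding denoiser can be sketched in a few lines (assumes even image dimensions; the threshold value is illustrative, and this is a NumPy sketch, not the authors' MATLAB code):

    ```python
    import numpy as np

    def haar2(img):
        """One-level 2D Haar (db1) analysis: returns LL, LH, HL, HH bands."""
        a = (img[0::2] + img[1::2]) / 2.0   # row averages
        d = (img[0::2] - img[1::2]) / 2.0   # row differences
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
        LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return LL, LH, HL, HH

    def ihaar2(LL, LH, HL, HH):
        """Exact inverse of haar2."""
        a = np.empty((LL.shape[0], 2 * LL.shape[1]))
        d = np.empty_like(a)
        a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
        d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
        img = np.empty((2 * a.shape[0], a.shape[1]))
        img[0::2] = a + d; img[1::2] = a - d
        return img

    def soft(x, t):
        """Soft threshold: shrink coefficients toward zero by t."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def haar_denoise(img, t=0.1):
        """Threshold the detail bands, keep the approximation band."""
        LL, LH, HL, HH = haar2(img)
        return ihaar2(LL, soft(LH, t), soft(HL, t), soft(HH, t))
    ```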

  13. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to reconstructed PET images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm was modified accordingly, applying the Lucy–Richardson deconvolution algorithm to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison with its application postreconstruction and with standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison with both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
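
    The Lucy–Richardson update at the heart of the method can be sketched as follows (the plain post-reconstruction form, not the in-loop OSEM integration the paper describes; iteration count and initialization are illustrative):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=20, eps=1e-12):
        """Lucy-Richardson deconvolution: multiplicative updates that move
        the estimate toward the Poisson maximum-likelihood image."""
        est = np.full(observed.shape, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(est, psf, mode='same')
            ratio = observed / (blurred + eps)   # data / model prediction
            est = est * fftconvolve(ratio, psf_mirror, mode='same')
        return est
    ```

    In the paper's scheme, an update of this form is applied to the current image estimate inside each OSEM iteration, with a wavelet denoising step to control noise amplification.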

  14. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  15. 4D STEM: High efficiency phase contrast imaging using a fast pixelated detector

    NASA Astrophysics Data System (ADS)

    Yang, H.; Jones, L.; Ryll, H.; Simson, M.; Soltau, H.; Kondo, Y.; Sagawa, R.; Banba, H.; MacLaren, I.; Nellist, P. D.

    2015-10-01

    Phase contrast imaging is widely used for imaging beam-sensitive and weak-phase objects in electron microscopy. In this work we demonstrate highly efficient phase contrast imaging in STEM using the pnCCD, a fast direct-electron pixelated detector, which records the diffraction pattern at every probe position at 1000 to 4000 frames per second, forming a 4D STEM dataset simultaneously with the incoherent Z-contrast image. Ptychographic phase reconstruction has been applied, and the obtained complex transmission function reveals the phase of the specimen. Results on GaN and (Ti, Nd)-doped BiFeO3 show that this imaging mode is especially powerful for imaging light elements in the presence of much heavier elements.

  16. Robust segmentation of 4D cardiac MRI-tagged images via spatio-temporal propagation

    NASA Astrophysics Data System (ADS)

    Qian, Zhen; Huang, Xiaolei; Metaxas, Dimitris N.; Axel, Leon

    2005-04-01

    In this paper we present a robust method for segmenting and tracking cardiac contours and tags in 4D cardiac MRI tagged images via spatio-temporal propagation. Our method is based on two main techniques: the Metamorphs Segmentation for robust boundary estimation, and the tunable Gabor filter bank for tagging lines enhancement, removal and myocardium tracking. We have developed a prototype system based on the integration of these two techniques, and achieved efficient, robust segmentation and tracking with minimal human interaction.

  17. Four-dimensional magnetic resonance imaging (4D-MRI) using image-based respiratory surrogate: A feasibility study

    PubMed Central

    Cai, Jing; Chang, Zheng; Wang, Zhiheng; Paul Segars, William; Yin, Fang-Fang

    2011-01-01

    Purpose: Four-dimensional computed tomography (4D-CT) has been widely used in radiation therapy to assess patient-specific breathing motion for determining individual safety margins. However, it has two major drawbacks: low soft-tissue contrast and an excessive imaging dose to the patient. This research aimed to develop a clinically feasible four-dimensional magnetic resonance imaging (4D-MRI) technique to overcome these limitations. Methods: The proposed 4D-MRI technique was achieved by continuously acquiring axial images throughout the breathing cycle using fast 2D cine-MR imaging, and then retrospectively sorting the images by respiratory phase. The key component of the technique was the use of body area (BA) of the axial MR images as an internal respiratory surrogate to extract the breathing signal. The validation of the BA surrogate was performed using 4D-CT images of 12 cancer patients by comparing the respiratory phases determined using the BA method to those determined clinically using the Real-time position management (RPM) system. The feasibility of the 4D-MRI technique was tested on a dynamic motion phantom, the 4D extended Cardiac Torso (XCAT) digital phantom, and two healthy human subjects. Results: Respiratory phases determined from the BA matched closely to those determined from the RPM: mean (±SD) difference in phase: −3.9% (±6.4%); mean (±SD) absolute difference in phase: 10.40% (±3.3%); mean (±SD) correlation coefficient: 0.93 (±0.04). In the motion phantom study, 4D-MRI clearly showed the sinusoidal motion of the phantom; image artifacts observed were minimal to none. Motion trajectories measured from 4D-MRI and 2D cine-MRI (used as a reference) matched excellently: the mean (±SD) absolute difference in motion amplitude: −0.3 (±0.5) mm. In the 4D-XCAT phantom study, the simulated “4D-MRI” images showed good consistency with the original 4D-XCAT phantom images. The motion trajectory of the hypothesized “tumor” matched
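
    The body-area surrogate amounts to a per-frame pixel count above a body-intensity threshold; a sketch follows (with amplitude binning as a simplified stand-in for the paper's retrospective phase sorting; threshold and bin count are illustrative):

    ```python
    import numpy as np

    def body_area_signal(frames, threshold=0.5):
        """Body-area (BA) respiratory surrogate: the pixel count above an
        intensity threshold on each axial cine frame rises and falls with
        the breathing cycle."""
        return np.array([(f > threshold).sum() for f in frames], dtype=float)

    def bin_frames(frames, signal, n_bins=10):
        """Retrospective sorting sketch: frames are binned here by surrogate
        amplitude (the paper sorts by respiratory phase, a related but
        distinct scheme)."""
        span = signal.max() - signal.min()
        norm = (signal - signal.min()) / (span + 1e-12)
        idx = np.minimum((norm * n_bins).astype(int), n_bins - 1)
        return {b: [f for f, k in zip(frames, idx) if k == b]
                for b in range(n_bins)}
    ```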

  18. 4D dynamic imaging of the eye using ultrahigh speed SS-OCT

    NASA Astrophysics Data System (ADS)

    Liu, Jonathan J.; Grulkowski, Ireneusz; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Cable, Alex E.; Kraus, Martin F.; Hornegger, Joachim; Duker, Jay S.; Fujimoto, James G.

    2013-03-01

    Recent advances in swept-source / Fourier domain optical coherence tomography (SS-OCT) technology enable in vivo ultrahigh speed imaging, offering a promising technique for four-dimensional (4-D) imaging of the eye. Using an ultrahigh speed tunable vertical cavity surface emitting laser (VCSEL) light source based SS-OCT prototype system, we performed imaging of human eye dynamics in four different imaging modes: 1) Pupillary reaction to light at 200,000 axial scans per second and 9 μm resolution in tissue. 2) Anterior eye focusing dynamics at 100,000 axial scans per second and 9 μm resolution in tissue. 3) Tear film break up at 50,000 axial scans per second and 19 μm resolution in tissue. 4) Retinal blood flow at 800,000 axial scans per second and 12 μm resolution in tissue. The combination of tunable ultrahigh speeds and long coherence length of the VCSEL along with the outstanding roll-off performance of SS-OCT makes this technology an ideal tool for time-resolved volumetric imaging of the eye. Visualization and quantitative analysis of 4-D OCT data can potentially provide insight to functional and structural changes in the eye during disease progression. Ultrahigh speed imaging using SS-OCT promises to enable novel 4-D visualization of realtime dynamic processes of the human eye. Furthermore, this non-invasive imaging technology is a promising tool for research to characterize and understand a variety of visual functions.

  19. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray casting or shear-warp factorization because of their long rendering times, or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading volumes is also time-consuming in 4D rendering. In this study, we propose a method that reduces data loading time by exploiting the coherence between the currently loaded volume and the previously loaded one, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick to be loaded is tested for similarity against the one already in memory. If a brick passes the test, it is defined as a 3D texture via OpenGL functions; the texture slices of the brick are then mapped onto polygons and blended with OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
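
    The brick-level coherence test can be sketched as follows (CPU-side bookkeeping only; the actual OpenGL texture upload is omitted, and the brick size and tolerance are illustrative):

    ```python
    import numpy as np

    def changed_bricks(prev_vol, new_vol, brick=16, tol=1e-3):
        """Coherence test sketch: split the volume into bricks and flag only
        those whose voxels changed beyond tol; only flagged bricks would be
        re-uploaded as 3D texture data for the next frame."""
        changed = []
        nz, ny, nx = new_vol.shape
        for z in range(0, nz, brick):
            for y in range(0, ny, brick):
                for x in range(0, nx, brick):
                    sl = (slice(z, z + brick),
                          slice(y, y + brick),
                          slice(x, x + brick))
                    if np.max(np.abs(new_vol[sl] - prev_vol[sl])) > tol:
                        changed.append((z, y, x))
        return changed
    ```

    Unchanged bricks keep their resident textures, which is where the loading-time saving comes from.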

  20. Gaussian mixture model-based gradient field reconstruction for infrared image detail enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng

    2016-05-01

    Infrared images are characterized by low signal-to-noise ratio and low contrast; edge details are therefore easily immersed in the background and noise, making infrared edge-detail enhancement and denoising difficult. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is adopted to model the distribution of the gradient histogram, dividing the image information into three parts corresponding to faint details, noise, and the edges of clear targets, respectively. Then, a piecewise function is constructed, based on the characteristics of the image, to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added when reconstructing the enhanced image from the transformed gradient field, to further suppress noise. Experimental results show that, compared with existing methods, the proposed method effectively enhances infrared edge details while suppressing noise. In addition, it can be used to enhance other types of images, such as visible-light and medical images.
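
    The first step, fitting a Gaussian mixture to the gradient histogram, can be sketched with a small EM loop (a generic 1D mixture fit on gradient-magnitude samples, not the authors' code; quantile initialization keeps it deterministic):

    ```python
    import numpy as np

    def fit_gmm_1d(x, k=3, n_iter=200):
        """EM for a 1D Gaussian mixture, sketching the three-component model
        of the gradient histogram (faint details / noise / strong edges)."""
        mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # deterministic init
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: responsibilities via log densities for stability
            logp = (-0.5 * ((x[:, None] - mu) ** 2 / var
                            + np.log(2 * np.pi * var)) + np.log(pi))
            logp -= logp.max(axis=1, keepdims=True)
            r = np.exp(logp)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted component updates
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        return pi, mu, var
    ```

    The fitted component means and variances would then drive the piecewise gain function applied to the gradient field.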

  1. 4D micro-CT-based perfusion imaging in small animals

    NASA Astrophysics Data System (ADS)

    Badea, C. T.; Johnston, S. M.; Lin, M.; Hedlund, L. W.; Johnson, G. A.

    2009-02-01

    Quantitative in-vivo imaging of lung perfusion in rodents can provide critical information for preclinical studies. However, the combined challenges of high temporal and spatial resolution have made routine quantitative perfusion imaging difficult in rodents. We have recently developed a dual tube/detector micro-CT scanner that is well suited to capture first-pass kinetics of a bolus of contrast agent used to compute perfusion information. Our approach is based on the paradigm that the same time density curves can be reproduced in a number of consecutive, small (i.e. 50μL) injections of iodinated contrast agent at a series of different angles. This reproducibility is ensured by the high-level integration of the imaging components of our system, with a micro-injector, a mechanical ventilator, and monitoring applications. Sampling is controlled through a biological pulse sequence implemented in LabVIEW. Image reconstruction is based on a simultaneous algebraic reconstruction technique implemented on a GPU. The capabilities of 4D micro-CT imaging are demonstrated in studies on lung perfusion in rats. We report 4D micro-CT imaging in the rat lung with a heartbeat temporal resolution of 140 ms and reconstructed voxels of 88 μm. The approach can be readily extended to a wide range of important preclinical models, such as tumor perfusion and angiogenesis, and renal function.

  2. Integration of the denoising, inpainting and local harmonic Bz algorithm for MREIT imaging of intact animals

    NASA Astrophysics Data System (ADS)

    Jeon, Kiwan; Kim, Hyung Joong; Lee, Chang-Ock; Seo, Jin Keun; Woo, Eung Je

    2010-12-01

    Conductivity imaging based on the current-injection MRI technique has been developed in magnetic resonance electrical impedance tomography. Current injected through a pair of surface electrodes induces a magnetic flux density distribution inside an imaging object, which results in additional magnetic field inhomogeneity. We can extract phase changes related to the current injection and obtain an image of the induced magnetic flux density. Without rotating the object inside the bore, we can measure only one component Bz of the magnetic flux density B = (Bx, By, Bz). Based on a relation between the internal conductivity distribution and Bz data subject to multiple current injections, one may reconstruct cross-sectional conductivity images. As the image reconstruction algorithm, we have been using the harmonic Bz algorithm in numerous experimental studies. Performing conductivity imaging of intact animal and human subjects, we found technical difficulties that originated from the MR signal void phenomena in the local regions of bones, lungs and gas-filled tubular organs. Measured Bz data inside such a problematic region contain an excessive amount of noise that deteriorates the conductivity image quality. In order to alleviate this technical problem, we applied hybrid methods incorporating ramp-preserving denoising, harmonic inpainting with isotropic diffusion and ROI imaging using the local harmonic Bz algorithm. These methods allow us to produce conductivity images of intact animals with best achievable quality. We suggest guidelines to choose a hybrid method depending on the overall noise level and existence of distinct problematic regions of MR signal void.

  3. Automated Lung Segmentation and Image Quality Assessment for Clinical 3-D/4-D-Computed Tomography

    PubMed Central

    Li, Guang

    2014-01-01

    4-D computed tomography (4DCT) provides not only a new dimension of patient-specific information for radiation therapy planning and treatment, but also a challenging volume of data to process and analyze. Manual analysis using existing 3-D tools cannot keep up with the vastly increased 4-D data volume; automated processing and analysis are thus needed to handle 4DCT data effectively and efficiently. In this paper, we applied ideas and algorithms from image/signal processing, computer vision, and machine learning to 4DCT lung data so that lungs can be reliably segmented in a fully automated manner, lung features can be visualized and measured on the fly via user interaction, and data quality classifications can be computed robustly. Comparison of our results with an established treatment planning system and with expert calculations demonstrated negligible discrepancies (within ±2%) for volume assessment, together with a one-to-two order of magnitude performance enhancement. An empirical Fourier-analysis-based quality measure delivered performance closely emulating human experts. Three machine learners were inspected to establish the viability of machine learning techniques for robustly identifying the data quality of 4DCT images in a scalable manner. The resulting system provides a toolkit that speeds up 4-D tasks in the clinic and facilitates clinical research to improve current clinical practice. PMID:25621194

  4. Wavelet Transform-Based De-Noising for Two-Photon Imaging of Synaptic Ca2+ Transients

    PubMed Central

    Tigaret, Cezar M.; Tsaneva-Atanasova, Krasimira; Collingridge, Graham L.; Mellor, Jack R.

    2013-01-01

    Postsynaptic Ca2+ transients triggered by neurotransmission at excitatory synapses are a key signaling step for the induction of synaptic plasticity and are typically recorded in tissue slices using two-photon fluorescence imaging with Ca2+-sensitive dyes. The signals generated are small with very low peak signal/noise ratios (pSNRs) that make detailed analysis problematic. Here, we implement a wavelet-based de-noising algorithm (PURE-LET) to enhance signal/noise ratio for Ca2+ fluorescence transients evoked by single synaptic events under physiological conditions. Using simulated Ca2+ transients with defined noise levels, we analyzed the ability of the PURE-LET algorithm to retrieve the underlying signal. Fitting single Ca2+ transients with an exponential rise and decay model revealed a distortion of τrise but improved accuracy and reliability of τdecay and peak amplitude after PURE-LET de-noising compared to raw signals. The PURE-LET de-noising algorithm also provided a ∼30-dB gain in pSNR compared to ∼16-dB pSNR gain after an optimized binomial filter. The higher pSNR provided by PURE-LET de-noising increased discrimination accuracy between successes and failures of synaptic transmission as measured by the occurrence of synaptic Ca2+ transients by ∼20% relative to an optimized binomial filter. Furthermore, in comparison to binomial filter, no optimization of PURE-LET de-noising was required for reducing arbitrary bias. In conclusion, the de-noising of fluorescent Ca2+ transients using PURE-LET enhances detection and characterization of Ca2+ responses at central excitatory synapses. PMID:23473483
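
    Fitting a transient with an exponential rise-and-decay model, as described above, can be sketched with SciPy's curve_fit (the model form and parameter values are illustrative, with the onset time held fixed rather than fitted):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def transient(t, a, tau_rise, tau_decay, t0=0.1):
        """Exponential rise-and-decay model of a single Ca2+ transient; the
        onset time t0 is held fixed so only the three kinetic parameters
        are fitted."""
        y = np.zeros_like(t)
        m = t >= t0
        y[m] = (a * (1.0 - np.exp(-(t[m] - t0) / tau_rise))
                * np.exp(-(t[m] - t0) / tau_decay))
        return y

    # Fit a simulated transient and read back amplitude and decay constant.
    t = np.linspace(0.0, 1.0, 400)
    trace = transient(t, 1.0, 0.01, 0.2)
    popt, _ = curve_fit(transient, t, trace, p0=[0.8, 0.02, 0.15])
    ```

    On real data, the fit would be run on the de-noised trace; the abstract's point is that denoising biases tau_rise but improves the reliability of tau_decay and peak amplitude.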

  5. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserved enhancement is of great interest for medical images. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images are fused adaptively: one processed with the TV method, one with shearlet denoising, and one containing edge information recovered from the residual of the TV method and processed with the ST. Images enhanced with the proposed method show improved visibility and detectability of detail. The fusion weights are evaluated from the variance maps of the individual denoised images and from the edge information extracted from the residual of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images, such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in preserving more edges and image details, compared with the other approaches.
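
    The variance-map-based weighting can be sketched as follows (a simplified per-pixel fusion rule using local variance as an activity measure; the paper's actual weight derivation differs in detail, and the window size is illustrative):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_fuse(candidates, win=5, eps=1e-9):
        """Fuse denoised candidate images with per-pixel weights taken from
        their local variance maps (higher local variance = more detail,
        hence more weight in this sketch)."""
        weights = []
        for im in candidates:
            mean = uniform_filter(im, size=win)
            var = uniform_filter(im * im, size=win) - mean * mean
            weights.append(np.clip(var, 0.0, None) + eps)
        weights = np.array(weights)
        weights /= weights.sum(axis=0)           # normalize across candidates
        return (weights * np.array(candidates)).sum(axis=0)
    ```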

  6. Application of adaptive kinetic modelling for bias propagation reduction in direct 4D image reconstruction

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Matthews, J. C.; Reader, A. J.; Angelis, G. I.; Zaidi, H.

    2014-10-01

    Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to the limited counting statistics, leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and, in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary, more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals is then adaptively included back into the image, whilst preserving the primary model characteristics in other, well modelled regions using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [15O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters. 
Using the adaptive 4D image reconstruction, improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating

  7. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtered TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the electron beam and the frozen hydrated biological sample when the specimen is exposed to radiation at a high exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure times, which in turn gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure times, so our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e., with different SNR values), equipped with gold beads to assist in the assessment step. We propose a framework for combining multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family was used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate setting. For the bilateral filter, many tests were carried out to determine the proper filter parameters, namely the size of the filter, the range parameter and the

  8. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. These sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) time (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters. PMID:24977618
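
    The textbook Gaussian-process posterior mean that the fast algorithm accelerates can be sketched directly (the dense O(N^3) form, not the paper's grid-exploiting O(N^(3/2)) algorithm; the kernel hyperparameters are illustrative):

    ```python
    import numpy as np

    def gp_interpolate(train_xy, train_v, query_xy, length=2.0, noise=0.01):
        """GP posterior mean under an RBF kernel, with a diagonal noise term
        standing in for the sensor-noise estimates used in the paper."""
        def rbf(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / (length * length))
        K = rbf(train_xy, train_xy) + (noise * noise) * np.eye(len(train_xy))
        Ks = rbf(query_xy, train_xy)
        return Ks @ np.linalg.solve(K, train_v)  # posterior mean at queries
    ```

    For a division-of-focal-plane sensor, the training set for each polarization channel would be the pixels carrying that channel's filter, and the queries the remaining grid positions.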

  9. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a three-dimensional landscape model from the DEM and DOM using digital photogrammetry, which processes aerial image data into "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic). For buildings and other artificial features of interest to users, we achieve three-dimensional reconstruction of the real features using digital close-range photogrammetry, through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and further processing. Finally, we combine the three-dimensional background of these large geographic datasets with locally measured real images, realizing the integration of measurable real images and 4D production. The article discusses the overall workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape with the metric buildings.

  10. ANALYZING IMAGING BIOMARKERS FOR TRAUMATIC BRAIN INJURY USING 4D MODELING OF LONGITUDINAL MRI

    PubMed Central

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Sadeghi, Neda; Vespa, Paul M.; van Horn, John D.; Gerig, Guido

    2013-01-01

    Quantitative imaging biomarkers are important for assessment of impact, recovery and treatment efficacy in patients with traumatic brain injury (TBI). To our knowledge, the identification of such biomarkers characterizing disease progress and recovery has been insufficiently explored in TBI due to difficulties in registration of baseline and follow-up data and automatic segmentation of tissue and lesions from multimodal, longitudinal MR image data. We propose a new methodology for computing imaging biomarkers in TBI by extending a recently proposed spatiotemporal 4D modeling approach in order to compute quantitative features of tissue change. The proposed method computes surface-based and voxel-based measurements such as cortical thickness, volume changes, and geometric deformation. We analyze the potential for clinical use of these biomarkers by correlating them with TBI-specific patient scores at the level of the whole brain and of individual regions. Our preliminary results indicate that the proposed voxel-based biomarkers are correlated with clinical outcomes. PMID:24443697

  11. Denoising of B{sub 1}{sup +} field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from noisy B{sub 1}{sup +} maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B{sub 1}{sup +} maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B{sub 1}{sup +} maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images by comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T.
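    The two processing steps named here (nonlinear denoising of the complex B{sub 1}{sup +} map, then a central-difference Laplacian) can be sketched as below. A median filter stands in for the authors' adaptive nonlinear filter, and the test field and noise level are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def central_laplacian(f):
        # Five-point central-difference stencil (unit pixel spacing).
        lap = np.zeros_like(f)
        lap[1:-1, 1:-1] = (
            f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1]
        )
        return lap

    rng = np.random.default_rng(2)
    yy, xx = np.mgrid[0:64, 0:64] / 64.0
    b1 = np.exp(1j * (xx**2 + yy**2))               # smooth complex toy "B1+ map"
    noise = 0.02 * (rng.normal(size=b1.shape) + 1j * rng.normal(size=b1.shape))
    noisy = b1 + noise

    # Denoise real and imaginary parts separately, then take the Laplacian.
    den = median_filter(noisy.real, 3) + 1j * median_filter(noisy.imag, 3)
    err_noisy = np.abs(central_laplacian(noisy.real) - central_laplacian(b1.real)).mean()
    err_den = np.abs(central_laplacian(den.real) - central_laplacian(b1.real)).mean()
    print(err_den < err_noisy)
    ```

    The comparison illustrates why denoising precedes the Laplacian: the second-difference stencil amplifies uncorrelated noise far more than the underlying smooth field.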

  12. Fully 4D motion-compensated reconstruction of cardiac SPECT images

    NASA Astrophysics Data System (ADS)

    Gravier, Erwan; Yang, Yongyi; King, Michael A.; Jin, Mingwu

    2006-09-01

    In this paper, we investigate the benefits of a spatiotemporal approach for reconstruction of image sequences. In the proposed approach, we introduce a temporal prior in the form of motion compensation to account for the statistical correlations among the frames in a sequence, and reconstruct all the frames collectively as a single function of space and time. The reconstruction algorithm is derived based on the maximum a posteriori estimate, for which the one-step late expectation-maximization algorithm is used. We demonstrated the method in our experiments using simulated single photon emission computed tomography (SPECT) cardiac perfusion images. The four-dimensional (4D) gated mathematical cardiac-torso phantom was used for simulation of gated SPECT perfusion imaging with Tc-99m-sestamibi. In addition to bias-variance analysis and time activity curves, we also used a channelized Hotelling observer to evaluate the detectability of perfusion defects in the reconstructed images. Our experimental results demonstrated that the incorporation of temporal regularization into image reconstruction could significantly improve the accuracy of cardiac images without causing any significant cross-frame blurring that may arise from the cardiac motion. This could lead to not only improved detection of perfusion defects, but also improved reconstruction of the heart wall which is important for functional assessment of the myocardium. This work was supported in part by the National Institutes of Health under grant no HL65425.

  13. Segmentation of brain tumors in 4D MR images using the hidden Markov model.

    PubMed

    Solomon, Jeffrey; Butman, John A; Sood, Arun

    2006-12-01

    Tumor size is an objective measure that is used to evaluate the effectiveness of anticancer agents. Responses to therapy are categorized as complete response, partial response, stable disease and progressive disease. Implicit in this scheme is the change in the tumor over time; however, most tumor segmentation algorithms do not use temporal information. Here we introduce an automated method using probabilistic reasoning over both space and time to segment brain tumors from 4D spatio-temporal MRI data. The 3D expectation-maximization method is extended using the hidden Markov model to infer tumor classification based on previous and subsequent segmentation results. Spatial coherence via a Markov Random Field was included in the 3D spatial model. Simulated images as well as patient images from three independent sources were used to validate this method. The sensitivity and specificity of tumor segmentation using this spatio-temporal model is improved over commonly used spatial or temporal models alone. PMID:17050032

  14. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
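    Steps (i) and (ii) of the framework can be sketched as follows; a decimated wavelet transform with a max-absolute detail rule stands in for the paper's shift-invariant transform, and the images, wavelet and filter sizes are illustrative assumptions:

    ```python
    import numpy as np
    import pywt
    from scipy.signal import wiener

    def wavelet_fuse(a, b, wavelet="db4", level=2):
        ca, cb = (pywt.wavedec2(x, wavelet, level=level) for x in (a, b))
        fused = [(ca[0] + cb[0]) / 2.0]                  # average approximations
        for da, db_ in zip(ca[1:], cb[1:]):              # keep the stronger details
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in zip(da, db_)))
        return pywt.waverec2(fused, wavelet)[: a.shape[0], : a.shape[1]]

    rng = np.random.default_rng(3)
    base = np.zeros((64, 64))
    base[20:44, 20:44] = 1.0
    ac = wiener(base + rng.normal(0, 0.1, base.shape), 5)                  # step (i)
    dpc = wiener(np.gradient(base, axis=1) + rng.normal(0, 0.1, base.shape), 5)
    out = wavelet_fuse(ac, dpc)                                            # step (ii)
    print(out.shape)
    ```

    Step (iii) would follow with adaptive histogram equalization and sharpening of `out`.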

  15. [Super-resolution reconstruction of lung 4D-CT images based on fast sub-pixel motion estimation].

    PubMed

    Xiao, Shan; Wang, Tingting; Lü, Qingwen; Zhang, Yu

    2015-07-01

    Super-resolution image reconstruction techniques play an important role in improving the image resolution of lung 4D-CT. We present a super-resolution approach based on fast sub-pixel motion estimation to reconstruct lung 4D-CT images. A fast sub-pixel motion estimation method was used to estimate the deformation fields between "frames", and then an iterative back projection (IBP) algorithm was employed to reconstruct high-resolution images. Experimental results showed that, compared with the traditional interpolation method and a super-resolution reconstruction algorithm based on full-search motion estimation, the proposed method produced clearer images with significantly enhanced structural details and reduced computation time.
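    The iterative back projection (IBP) stage can be sketched in a single-image toy form; the paper combines IBP with sub-pixel motion estimation across 4D-CT frames, which is omitted here, and the scale factor and iteration count are assumptions:

    ```python
    import numpy as np

    def downsample(hr, f=2):
        # Block-average downsampling as a simple observation model.
        return hr.reshape(hr.shape[0] // f, f, hr.shape[1] // f, f).mean(axis=(1, 3))

    def upsample(lr, f=2):
        return np.repeat(np.repeat(lr, f, axis=0), f, axis=1)

    def ibp(lr, f=2, iters=20, step=1.0):
        hr = upsample(lr, f)                     # initial interpolated guess
        for _ in range(iters):
            err = lr - downsample(hr, f)         # residual in low-resolution space
            hr = hr + step * upsample(err, f)    # back-project the residual
        return hr

    truth = np.zeros((32, 32))
    truth[8:24, 8:24] = 1.0
    lr = downsample(truth)
    rec = ibp(lr)
    print(np.abs(downsample(rec) - lr).max())    # low-resolution consistency
    ```

    With multiple motion-compensated low-resolution frames, the residual of each frame would be warped back through its estimated deformation field before accumulation.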

  16. Using 4D Cardiovascular Magnetic Resonance Imaging to Validate Computational Fluid Dynamics: A Case Study

    PubMed Central

    Biglino, Giovanni; Cosentino, Daria; Steeden, Jennifer A.; De Nova, Lorenzo; Castelli, Matteo; Ntsinjana, Hopewell; Pennati, Giancarlo; Taylor, Andrew M.; Schievano, Silvia

    2015-01-01

    Computational fluid dynamics (CFD) can have a complementary predictive role alongside the exquisite visualization capabilities of 4D cardiovascular magnetic resonance (CMR) imaging. In order to exploit these capabilities (e.g., for decision-making), it is necessary to validate computational models against real world data. In this study, we sought to acquire 4D CMR flow data in a controllable, experimental setup and use these data to validate a corresponding computational model. We applied this paradigm to a case of congenital heart disease, namely, transposition of the great arteries (TGA) repaired with arterial switch operation. For this purpose, a mock circulatory loop compatible with the CMR environment was constructed and two detailed aortic 3D models (i.e., one TGA case and one normal aortic anatomy) were tested under realistic hemodynamic conditions, acquiring 4D CMR flow. The same 3D domains were used for multi-scale CFD simulations, whereby the remainder of the mock circulatory system was appropriately summarized with a lumped parameter network. Boundary conditions of the simulations mirrored those measured in vitro. Results showed a very good quantitative agreement between experimental and computational models in terms of pressure (overall maximum % error = 4.4% aortic pressure in the control anatomy) and flow distribution data (overall maximum % error = 3.6% at the subclavian artery outlet of the TGA model). Very good qualitative agreement could also be appreciated in terms of streamlines, throughout the cardiac cycle. Additionally, velocity vectors in the ascending aorta revealed less symmetrical flow in the TGA model, which also exhibited higher wall shear stress in the anterior ascending aorta. PMID:26697416

  17. Enhancing a diffusion algorithm for 4D image segmentation using local information

    NASA Astrophysics Data System (ADS)

    Lösel, Philipp; Heuveline, Vincent

    2016-03-01

    Inspired by the diffusion of a particle, we present a novel approach for performing semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements at a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a specific position in the dataset at a certain time, or approximate this probability by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. Starting a great number of random walks in each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label from which the random walks most likely started. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted, slice-by-slice active contours method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is superior to the diffusion algorithm in accuracy but inferior in the amount of tedious manual processing steps. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
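    The random-walk voting idea can be sketched in 2D; the paper's walks are weighted by local image information, whereas the walks below are unweighted for brevity, and the grid size, walk counts and seed positions are assumptions:

    ```python
    import numpy as np

    def random_walk_votes(shape, seeds, n_walks=200, n_steps=200, seed=0):
        """Count visits per label, then assign each pixel to the top-voted label."""
        rng = np.random.default_rng(seed)
        votes = np.zeros((len(seeds), *shape))
        moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
        for lbl, (sy, sx) in enumerate(seeds):
            for _ in range(n_walks):
                y, x = sy, sx
                for dy, dx in moves[rng.integers(0, 4, n_steps)]:
                    y = min(max(y + dy, 0), shape[0] - 1)   # reflect at borders
                    x = min(max(x + dx, 0), shape[1] - 1)
                    votes[lbl, y, x] += 1
        return votes.argmax(axis=0)

    seg = random_walk_votes((24, 24), seeds=[(4, 4), (19, 19)])
    print(seg[4, 4], seg[19, 19])
    ```

    Pixels near each seed receive most of their visits from walks launched there, so the vote map partitions the grid around the seeds.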

  18. Development of a dynamic 4D anthropomorphic breast phantom for contrast-based breast imaging

    NASA Astrophysics Data System (ADS)

    Kiarashi, Nooshin; Lin, Yuan; Segars, William P.; Ghate, Sujata V.; Ikejimba, Lynda; Chen, Baiyu; Lo, Joseph Y.; Dobbins, James T., III; Nolte, Loren W.; Samei, Ehsan

    2012-03-01

    Mammography is currently the most widely accepted tool for detection and diagnosis of breast cancer. However, the sensitivity of mammography is reduced in women with dense breast tissue due to tissue overlap, which may obscure lesions. Digital breast tomosynthesis with contrast enhancement reduces tissue overlap and provides additional functional information about lesions (i.e. morphology and kinetics), which in turn may improve lesion characterization. The performance of such techniques is highly dependent on the structural composition of the breast, which varies significantly across patients. Therefore, optimization of breast imaging systems should be done with respect to this patient variability. Furthermore, imaging techniques that employ contrast require the inclusion of a temporally varying breast composition with respect to the contrast agent kinetics to enable the optimization of the system. To these ends, we have developed a dynamic 4D anthropomorphic breast phantom, which can be used for optimizing a breast imaging system by incorporating material characteristics. The presented dynamic phantom is based on two recently developed anthropomorphic breast phantoms, which can be representative of a whole population through their randomized anatomical feature generation and various compression levels. The 4D dynamic phantom incorporates the kinetics of contrast agent uptake in different tissues and can realistically model benign and malignant lesions. To demonstrate the utility of the proposed dynamic phantom, contrast-enhanced digital mammography and breast tomosynthesis were simulated where a ray-tracing algorithm emulated the projections, a filtered back projection algorithm was used for reconstruction, and dual-energy and temporal subtractions were performed and compared.

  19. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    PubMed

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster ( Mesocricetus auratus ) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation that allows us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523
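    The two stages of the pipeline (spatial denoising followed by clustering of pixel spectra) can be sketched on a toy hyperspectral cube; a median filter stands in for the paper's edge-preserving denoising, and the cube dimensions, spectra and noise level are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    h, w, bands = 32, 32, 20
    spectra = np.array([np.sin(np.linspace(0, 3, bands)),
                        np.cos(np.linspace(0, 3, bands))])
    # Two-region ground truth: left half is class 0, right half is class 1.
    labels_true = (np.arange(w)[None, :] >= w // 2).astype(int) * np.ones((h, 1), int)
    cube = spectra[labels_true] + rng.normal(0, 0.3, (h, w, bands))

    # Stage 1: spatial denoising, channel by channel.
    cube_d = np.stack([median_filter(cube[..., b], 3) for b in range(bands)], axis=-1)

    # Stage 2: cluster the per-pixel spectra into segments.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cube_d.reshape(-1, bands))
    seg = km.labels_.reshape(h, w)
    print(len(np.unique(seg)))
    ```

    Exploiting the spatial neighborhood in stage 1 is what makes the subsequent spectral clustering robust to per-pixel noise, which mirrors the role of EPD in the paper.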

  20. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: A digital phantom study

    SciTech Connect

    Bernatowicz, K.; Knopf, A.; Lomax, A.; Keall, P.; Kipritidis, J.; Mishra, P.

    2015-01-15

    Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm{sup 3} spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results

  1. brainR: Interactive 3 and 4D Images of High Resolution Neuroimage Data

    PubMed Central

    Muschelli, John; Sweeney, Elizabeth; Crainiceanu, Ciprian

    2016-01-01

    We provide software tools for displaying and publishing interactive 3-dimensional (3D) and 4-dimensional (4D) figures to html webpages, with examples of high-resolution brain imaging. Our framework is based in the R statistical software using the rgl package, a 3D graphics library. We build on this package to allow manipulation of figures including rotation and translation, zooming, coloring of brain substructures, adjusting transparency levels, and addition or removal of brain structures. The need for better visualization tools of ultra high dimensional data is ever present; we are providing a clean, simple, web-based option. We also provide a package (brainR) for users to readily implement these tools. PMID:27330829

  2. SU-C-9A-06: The Impact of CT Image Used for Attenuation Correction in 4D-PET

    SciTech Connect

    Cui, Y; Bowsher, J; Yan, S; Cai, J; Das, S; Yin, F

    2014-06-01

    Purpose: To evaluate the appropriateness of using a 3D non-gated CT image for attenuation correction (AC) in a 4D-PET (gated PET) imaging protocol used in radiotherapy treatment planning simulation. Methods: The 4D-PET imaging protocol in a Siemens PET/CT simulator (Biograph mCT, Siemens Medical Solutions, Hoffman Estates, IL) was evaluated. A CIRS Dynamic Thorax Phantom (CIRS Inc., Norfolk, VA) with a moving glass sphere (8 mL) in the middle of its thorax portion was used in the experiments. The glass sphere was filled with {sup 18}F-FDG and underwent longitudinal motion derived from a real patient breathing pattern. The Varian RPM system (Varian Medical Systems, Palo Alto, CA) was used for respiratory gating. Both phase-gating and amplitude-gating methods were tested. The clinical imaging protocol was modified to use three different CT images for AC in 4D-PET reconstruction: first, a single-phase CT image mimicking the actual clinical protocol (single-CT-PET); second, the average intensity projection CT (AveIP-CT) derived from 4D-CT scanning (AveIP-CT-PET); third, the 4D-CT image for phase-matched AC (phase-matching-PET). Maximum SUV (SUVmax) and volume of the moving target (glass sphere) with a threshold of 40% SUVmax were calculated for comparison between 4D-PET images derived with the different AC methods. Results: The SUVmax varied 7.3%±6.9% over the breathing cycle in single-CT-PET, compared to 2.5%±2.8% in AveIP-CT-PET and 1.3%±1.2% in phase-matching-PET. The SUVmax in single-CT-PET differed by up to 15% from that in phase-matching-PET. The target volumes measured from single-CT-PET images also showed variations of up to 10% among the different phases of 4D-PET in both phase-gating and amplitude-gating experiments. Conclusion: Attenuation correction using non-gated CT in 4D-PET imaging is not an optimal process for quantitative analysis. Clinical 4D-PET imaging protocols should consider phase-matched 4D-CT images if available to achieve better accuracy.

  3. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    PubMed Central

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer’s disease (AD) in its prodromal stage using statistical machine learning methods. Recently, several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to more accurately correlate to stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime — the default situation in medical imaging. This result is of independent interest. PMID:25485413

  4. A novel method for image denoising of fluorescence molecular imaging based on fuzzy C-Means clustering

    NASA Astrophysics Data System (ADS)

    An, Yu; Liu, Jie; Ye, Jinzuo; Mao, Yamin; Yang, Xin; Jiang, Shixin; Chi, Chongwei; Tian, Jie

    2015-03-01

    As an important molecular imaging modality, fluorescence molecular imaging (FMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with a fluorophore, FMI can noninvasively obtain the distribution of the fluorophore in vivo. However, because the fluorescence spectrum lies within the visible light range, there is substantial autofluorescence on the surface of bio-tissues, which is a major disturbing factor in FMI. Meanwhile, the high level of dark current in charge-coupled device (CCD) cameras and other influencing factors can also produce considerable background noise. In this paper, a novel method for image denoising in FMI based on fuzzy C-means clustering (FCM) is proposed, exploiting the fact that the fluorescent signal is the major component of the fluorescence images while the intensity of autofluorescence and other background signals is relatively lower. First, the fluorescence image is smoothed by sliding-neighborhood operations to initially eliminate the noise. Then, the wavelet transform (WLT) is performed on the fluorescence images to obtain the major component of the fluorescent signals. After that, the FCM method is adopted to separate the major component and the background of the fluorescence images. Finally, the proposed method was validated using original data obtained from an in vivo implanted-fluorophore experiment, and the results show that our proposed method can effectively extract the fluorescence signal while eliminating the background noise, which can increase the quality of fluorescence images.
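    The FCM separation stage can be sketched with a minimal numpy implementation of fuzzy C-means on pixel intensities (the sliding-neighborhood smoothing and wavelet steps are omitted; the fuzzifier m, iteration count and intensity distributions are assumptions):

    ```python
    import numpy as np

    def fcm(x, c=2, m=2.0, iters=50, seed=0):
        """Fuzzy C-means on 1D intensities: returns memberships and cluster centers."""
        rng = np.random.default_rng(seed)
        u = rng.dirichlet(np.ones(c), size=len(x))       # fuzzy membership matrix
        for _ in range(iters):
            um = u ** m
            centers = um.T @ x / um.sum(axis=0)          # membership-weighted centroids
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))               # standard FCM membership update
            u /= u.sum(axis=1, keepdims=True)
        return u, centers

    rng = np.random.default_rng(5)
    background = rng.normal(0.1, 0.05, 500)              # autofluorescence / noise pixels
    signal = rng.normal(0.9, 0.05, 100)                  # fluorophore pixels
    u, centers = fcm(np.concatenate([background, signal]))
    bright = centers.argmax()
    print((u[500:, bright] > 0.5).mean())                # fraction of signal recovered
    ```

    Thresholding the membership of the bright cluster yields the separation of fluorescence signal from background described in the abstract.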

  5. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.

  6. Sample Drift Correction Following 4D Confocal Time-lapse Imaging

    PubMed Central

    Parslow, Adam; Cardona, Albert; Bryson-Richardson, Robert J.

    2014-01-01

    The generation of four-dimensional (4D) confocal datasets, consisting of 3D image sequences over time, provides an excellent methodology to capture cellular behaviors involved in developmental processes. The ability to track and follow cell movements is limited by sample movements that occur due to drift of the sample or, in some cases, growth during image acquisition. Tracking cells in datasets affected by drift and/or growth will incorporate these movements into any analysis of cell position. This may result in the apparent movement of static structures within the sample. Therefore, prior to cell tracking, any sample drift should be corrected. Using the open source Fiji distribution 1 of ImageJ 2,3 and the incorporated LOCI tools 4, we developed the Correct 3D drift plug-in to remove erroneous sample movement in confocal datasets. This protocol effectively compensates for sample translation or alterations in focal position by utilizing phase correlation to register each time-point of a four-dimensional confocal dataset while maintaining the ability to visualize and measure cell movements over extended time-lapse experiments. PMID:24747942
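    The phase-correlation registration underlying the plug-in can be sketched in 2D (the plug-in operates per time-point in 3D); the frame content and integer drift below are illustrative assumptions:

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        # Translation estimate from the peak of the normalized cross-power spectrum.
        cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        peak = np.unravel_index(corr.argmax(), corr.shape)
        # Wrap peak coordinates to signed shifts.
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

    rng = np.random.default_rng(6)
    frame0 = rng.random((64, 64))
    frame1 = np.roll(frame0, (5, -3), axis=(0, 1))       # simulated sample drift
    dy, dx = phase_correlation_shift(frame1, frame0)
    corrected = np.roll(frame1, (-dy, -dx), axis=(0, 1)) # undo the drift
    print(dy, dx)                                        # prints "5 -3"
    ```

    Applying the recovered shift per time-point realigns the sequence so that static structures stay static, which is the essence of the drift correction.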

  7. Dynamic real-time 4D cardiac MDCT image display using GPU-accelerated volume rendering.

    PubMed

    Zhang, Qi; Eagleson, Roy; Peters, Terry M

    2009-09-01

    Intraoperative cardiac monitoring, accurate preoperative diagnosis, and surgical planning are important components of minimally-invasive cardiac therapy. Retrospective, electrocardiographically (ECG) gated, multidetector computed tomographical (MDCT), four-dimensional (3D + time), real-time, cardiac image visualization is an important tool for the surgeon in such procedure, particularly if the dynamic volumetric image can be registered to, and fused with the actual patient anatomy. The addition of stereoscopic imaging provides a more intuitive environment by adding binocular vision and depth cues to structures within the beating heart. In this paper, we describe the design and implementation of a comprehensive stereoscopic 4D cardiac image visualization and manipulation platform, based on the opacity density radiation model, which exploits the power of modern graphics processing units (GPUs) in the rendering pipeline. In addition, we present a new algorithm to synchronize the phases of the dynamic heart to clinical ECG signals, and to calculate and compensate for latencies in the visualization pipeline. A dynamic multiresolution display is implemented to enable the interactive selection and emphasis of volume of interest (VOI) within the entire contextual cardiac volume and to enhance performance, and a novel color and opacity adjustment algorithm is designed to increase the uniformity of the rendered multiresolution image of heart. Our system provides a visualization environment superior to noninteractive software-based implementations, but with a rendering speed that is comparable to traditional, but inferior quality, volume rendering approaches based on texture mapping. This retrospective ECG-gated dynamic cardiac display system can provide real-time feedback regarding the suspected pathology, function, and structural defects, as well as anatomical information such as chamber volume and morphology.

  8. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using the sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state of the arts.

  9. Nonlinear denoising of functional magnetic resonance imaging time series with wavelets

    NASA Astrophysics Data System (ADS)

    Stausberg, Sven; Lehnertz, Klaus

    2009-04-01

    In functional magnetic resonance imaging (fMRI) the blood oxygenation level dependent (BOLD) effect is used to identify and delineate neuronal activity. The sensitivity of an fMRI-based detection of neuronal activation, however, strongly depends on the relative levels of signal and noise in the time series data, and a large number of different artifact and noise sources interfere with the weak signal changes of the BOLD response. Thus, noise reduction is important to allow an accurate estimation of single activation-related BOLD signals across brain regions. Techniques employed so far include filtering in the time or frequency domain, which, however, does not take into account possible nonlinearities of the BOLD response. We here evaluate a previously proposed method for nonlinear denoising of short and transient signals, which combines the wavelet transform with techniques from nonlinear time series analysis. We adapt the method to the problem at hand and show that successful noise reduction and, more importantly, preservation of the shape of individual BOLD signals can be achieved even in the presence of in-band noise.
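
    The wavelet-shrinkage core that such denoising schemes build on (without the nonlinear time-series machinery the authors add) can be sketched in a few lines of numpy, here with a single-level Haar transform and soft thresholding; the threshold value is purely illustrative.

```python
import numpy as np

def haar_soft_denoise(x, threshold):
    """Single-level Haar wavelet shrinkage: forward transform,
    soft-threshold the detail band, inverse transform.
    Assumes len(x) is even."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (trend) band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail) band
    # Soft thresholding shrinks small, mostly-noise detail coefficients.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))     # smooth "BOLD-like" signal
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_soft_denoise(noisy, threshold=0.4)
```

    Because a smooth signal concentrates its energy in the approximation band, thresholding the detail band removes in-band noise while largely preserving the signal shape.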

  10. [Wavelet analysis and its application in denoising the spectrum of hyperspectral image].

    PubMed

    Zhou, Dan; Wang, Qin-Jun; Tian, Qing-Jiu; Lin, Qi-Zhong; Fu, Wen-Xue

    2009-07-01

    In order to remove the sawtooth noise in hyperspectral remote sensing spectra and improve the accuracy of information extraction using spectra, the spectrum of vegetation in the USGS (United States Geological Survey) spectral library was used to simulate the performance of wavelet denoising. These spectra were measured by a custom-modified and computer-controlled Beckman spectrometer at the USGS Denver Spectroscopy Lab. The wavelength accuracy is about 5 nm in the NIR and 2 nm in the visible. In the experiment, noise with a signal-to-noise ratio (SNR) of 30 was first added to the spectrum, and then removed by the wavelet denoising approach. For the purpose of finding the optimal parameter combinations, the SNR, mean squared error (MSE), spectral angle (SA) and integrated evaluation coefficient eta were used to evaluate the approach's denoising effects. Denoising effect is directly proportional to SNR, and inversely proportional to MSE, SA and the integrated evaluation coefficient eta. Denoising results show that the sawtooth noise in the noisy spectrum was essentially eliminated, and the denoised spectrum largely coincides with the original spectrum, maintaining a good spectral characteristic curve. Evaluation results show that the optimal denoising can be achieved by first decomposing the noisy spectrum into 3-7 levels using db12, db10, sym9 and sym6 wavelets, then processing the wavelet transform coefficients by soft-threshold functions, and finally estimating the thresholds by the heursure threshold selection rule and rescaling using a single estimation of level noise based on first-level coefficients. However, this approach depends on the noise level, which means that for different noise levels the optimal parameter combination also differs.
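
    The three evaluation criteria named above are straightforward to compute; a hedged numpy sketch follows (the dB convention for SNR and the vector form of the spectral angle are assumptions about the paper's exact definitions).

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR in dB of an estimated spectrum relative to the clean spectrum."""
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

def mse(clean, estimate):
    """Mean squared error between clean and estimated spectra."""
    diff = np.asarray(clean, dtype=float) - np.asarray(estimate, dtype=float)
    return float(np.mean(diff ** 2))

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra viewed as vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

spectrum = np.linspace(1.0, 2.0, 50)   # toy reflectance spectrum
noisy_spectrum = spectrum + 0.05       # constant offset as a stand-in for noise
```

    Note that the spectral angle is invariant to overall scaling of a spectrum, which is why it complements amplitude-sensitive measures like MSE and SNR.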

  11. Modeling 4D Changes in Pathological Anatomy using Domain Adaptation: Analysis of TBI Imaging using a Tumor Database.

    PubMed

    Wang, Bo; Prastawa, Marcel; Saha, Avishek; Awate, Suyash P; Irimia, Andrei; Chambers, Micah C; Vespa, Paul M; Van Horn, John D; Pascucci, Valerio; Gerig, Guido

    2013-01-01

    Analysis of 4D medical images presenting pathology (i.e., lesions) is significantly challenging due to the presence of complex changes over time. Image analysis methods for 4D images with lesions need to account for changes in brain structures due to deformation, as well as the formation and deletion of new structures (e.g., edema, bleeding) due to the physiological processes associated with damage, intervention, and recovery. We propose a novel framework that models 4D changes in pathological anatomy across time, and provides explicit mapping from a healthy template to subjects with pathology. Moreover, our framework uses transfer learning to leverage rich information from a known source domain, where we have a collection of completely segmented images, to yield effective appearance models for the input target domain. The automatic 4D segmentation method uses a novel domain adaptation technique for generative kernel density models to transfer information between different domains, resulting in a fully automatic method that requires no user interaction. We demonstrate the effectiveness of our novel approach with the analysis of 4D images of traumatic brain injury (TBI), using a synthetic tumor database as the source domain. PMID:25346953

  12. Automatic landmark generation for deformable image registration evaluation for 4D CT images of lung

    NASA Astrophysics Data System (ADS)

    Vickress, J.; Battista, J.; Barnett, R.; Morgan, J.; Yartsev, S.

    2016-10-01

    Deformable image registration (DIR) has become a common tool in medical imaging across both diagnostic and treatment specialties, but the methods used offer varying levels of accuracy. Evaluation of DIR is commonly performed using manually selected landmarks, which is subjective, tedious and time consuming. We propose a semi-automated method that saves time and provides accuracy comparable to manual selection. Three landmarking methods, including manual (with two independent observers), scale invariant feature transform (SIFT), and SIFT with manual editing (SIFT-M), were tested on 10 thoracic 4D CT image studies corresponding to the 0% and 50% phases of respiration. Results of each method were evaluated against a gold standard (GS) landmark set, comparing both mean and proximal landmark displacements. The proximal method compares the local deformation magnitude between a test landmark pair and the closest GS pair. Statistical analysis was done using an intraclass correlation (ICC) between test and GS displacement values. The creation time per landmark pair was 22, 34, 2.3, and 4.3 s for the observer 1, observer 2, SIFT, and SIFT-M methods, respectively. Across 20 lungs from the 10 CT studies, the ICC values between the GS and the observer 1, observer 2, SIFT, and SIFT-M methods were 0.85, 0.85, 0.84, and 0.82 for mean lung deformation, and 0.97, 0.98, 0.91, and 0.96 for proximal landmark deformation, respectively. The SIFT and SIFT-M methods have an accuracy that is comparable to manual methods when tested against a GS landmark set while saving 90% of the time. The number and distribution of landmarks significantly affected the analysis, as manifested by the different results for the mean deformation and proximal landmark deformation methods. Automatic landmark methods offer a promising alternative to manual landmarking, if the quantity, quality and distribution of landmarks can be optimized for the intended application.
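
    The proximal comparison described above can be sketched as follows (a hypothetical implementation; each landmark pair is reduced to a fixed-image position plus a displacement vector to the moving image):

```python
import numpy as np

def proximal_displacement_diffs(test_pts, test_disp, gs_pts, gs_disp):
    """For each test landmark pair, find the closest gold-standard (GS)
    pair (by position in the fixed image) and return the absolute
    difference of the local deformation magnitudes.
    *_pts: (N, 3) fixed-image positions; *_disp: (N, 3) displacements."""
    test_mag = np.linalg.norm(test_disp, axis=1)
    gs_mag = np.linalg.norm(gs_disp, axis=1)
    # Pairwise distances between test and GS landmark positions.
    dists = np.linalg.norm(test_pts[:, None, :] - gs_pts[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    return np.abs(test_mag - gs_mag[nearest])

gs_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
gs_disp = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
test_pts = np.array([[0.5, 0.0, 0.0]])    # closest to the first GS pair
test_disp = np.array([[0.0, 1.0, 0.0]])   # same magnitude, different direction
diffs = proximal_displacement_diffs(test_pts, test_disp, gs_pts, gs_disp)
```

    Comparing magnitudes rather than full vectors makes the measure insensitive to deformation direction, which matches the "local deformation magnitude" wording above.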

  13. Evaluation of Non-Local Means Based Denoising Filters for Diffusion Kurtosis Imaging Using a New Phantom

    PubMed Central

    Zhou, Min-Xiong; Yan, Xu; Xie, Hai-Bin; Zheng, Hui; Xu, Dongrong; Yang, Guang

    2015-01-01

    Image denoising has a profound impact on the precision of estimated parameters in diffusion kurtosis imaging (DKI). This work first proposes an approach to constructing a DKI phantom that can be used to evaluate the performance of denoising algorithms in regard to their abilities of improving the reliability of DKI parameter estimation. The phantom was constructed from a real DKI dataset of a human brain, and the pipeline used to construct the phantom consists of diffusion-weighted (DW) image filtering, diffusion and kurtosis tensor regularization, and DW image reconstruction. The phantom preserves the image structure while minimizing image noise, and thus can be used as ground truth in the evaluation. Second, we used the phantom to evaluate three representative algorithms of non-local means (NLM). Results showed that one scheme of vector-based NLM, which uses DWI data with redundant information acquired at different b-values, produced the most reliable estimation of DKI parameters in terms of Mean Square Error (MSE), Bias and standard deviation (Std). The result of the comparison based on the phantom was consistent with those based on real datasets. PMID:25643162
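
    For reference, the basic (scalar) non-local means idea that the evaluated algorithms build on can be sketched in numpy for a 1D signal; the vector-based variants in the study, which pool redundant DWI data across b-values, are not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def nlm_1d(signal, patch_radius=2, h=0.3):
    """Scalar non-local means on a 1D signal: each sample becomes a
    weighted average of all samples whose surrounding patches are
    similar, with weights exp(-patch_distance^2 / h^2)."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    pad = np.pad(x, patch_radius, mode="reflect")
    # patches[i] is the patch of length 2*patch_radius+1 centred at i.
    patches = np.stack([pad[i:i + 2 * patch_radius + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)
        out[i] = np.sum(w * x) / np.sum(w)
    return out

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(64), np.ones(64)])   # step "edge"
noisy = clean + 0.1 * rng.standard_normal(128)
denoised = nlm_1d(noisy)
```

    Because weights depend on patch similarity rather than spatial distance, samples on opposite sides of the step barely mix, so the edge survives while flat regions are averaged heavily.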

  14. A deformable phantom for 4D radiotherapy verification: Design and image registration evaluation

    SciTech Connect

    Serban, Monica; Heath, Emily; Stroian, Gabriela; Collins, D. Louis; Seuntjens, Jan

    2008-03-15

    peak inhale. The SI displacement of the landmarks varied between 94% and 3% of the piston excursion for positions closer to and farther away from the piston, respectively. The reproducibility of the phantom deformation was within the image resolution (0.7 x 0.7 x 1.25 mm³). Vector average registration accuracy based on point landmarks was found to be 0.5 (0.4 SD) mm. The tumor and lung mean 3D DTA obtained from triangulated surfaces were 0.4 (0.1 SD) mm and 1.0 (0.8 SD) mm, respectively. This phantom is capable of reproducibly emulating physically realistic lung features and deformations and has a wide range of potential applications, including four-dimensional (4D) imaging, evaluation of deformable registration accuracy, 4D planning and dose delivery.

  15. Application of 4D resistivity image profiling to detect DNAPLs plume.

    NASA Astrophysics Data System (ADS)

    Liu, H.; Yang, C.; Tsai, Y.

    2008-12-01

    In July 1993, the soil and groundwater at a factory site in Miaoli, Taiwan, were found to be contaminated by dichloroethane, chlorobenzene and other hazardous solvents. The contaminants were classified as dense non-aqueous phase liquids (DNAPLs). The contaminated site was neglected for the following years until May 1998, when the Environment Protection Agency of Miaoli ordered the company to take immediate action to treat the contaminated site. The contaminated soil at the previous waste DNAPL dumping area was excavated and exposed. In addition, more than 53 wells were drilled around the pool with a maximum depth of 12 m, where a clayey layer was found. Continuous pumping of the groundwater and monitoring of the residual DNAPL concentration in well water samples were carried out in different stages of remediation. However, because the DNAPL is suspected to have been present for a long time, the contaminants may have been diluted, but remnants of a DNAPL plume that are toxic to humans may still remain in the soil and migrate to deeper aquifers. The former contaminated site was investigated using 2D, 3D and 4D resistivity imaging techniques, with the aim of determining the buried contaminant geometry. This paper emphasizes the use of the resistivity image profiling (RIP) method to map the limits of this DNAPL waste disposal site, for which the records of operations are not available. A significant change in resistivity values was detected between known polluted and non-polluted subsurface regions; a high resistivity value implies that the subsurface was contaminated by the DNAPL plume. The results of the survey provide insight into the sensitivity of the RIP method for detecting DNAPL plumes within the shallow subsurface, and help to provide valuable information for monitoring the possible migration path of the DNAPL plume in the past. According to former studies at this site, remediation by excavation combined with groundwater pumping had continued for a very long time; therefore this research was used

  16. Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

    NASA Astrophysics Data System (ADS)

    Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena

    2011-03-01

    Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that the ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing using corresponding high resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error-reduction of up to 36% compared to the error of the original ToF surfaces.
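
    The standard bilateral filter that the authors adapt combines a spatial Gaussian and a range (intensity) Gaussian; below is a plain, non-adaptive numpy sketch for a 2D range image (parameter values are illustrative, not the paper's noise-derived settings).

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.2):
    """Bilateral filter: weighted mean over a (2*radius+1)^2 window, with
    weights decaying with spatial distance (sigma_s) and with
    intensity/range difference (sigma_r), which preserves edges."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            weight = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2)
                            - (shifted - img) ** 2 / (2.0 * sigma_r ** 2))
            out += weight * shifted
            norm += weight
    return out / norm

rng = np.random.default_rng(2)
clean = np.zeros((16, 16))                       # flat range patch
noisy = clean + 0.05 * rng.standard_normal((16, 16))
smoothed = bilateral_filter(noisy)
```

    Making sigma_r depend on the camera's measured noise characteristics, as the paper does, is what turns this generic filter into an adaptive one.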

  17. Evaluation of Elekta 4D cone beam CT-based automatic image registration for radiation treatment of lung cancer

    PubMed Central

    Harrison, Amy; Yu, Yan; Xiao, Ying; Werner-Wasik, Maria; Lu, Bo

    2015-01-01

    Objective: The study aimed to evaluate the precision of Elekta four-dimensional (4D) cone beam CT (CBCT)-based automatic dual-image registrations using different landmarks for the clipbox for radiation treatment of lung cancer. Methods: 30 4D CBCT scans from 15 patients were studied. 4D CBCT images were registered with reference CT images using dual-image registration: a clipbox registration and a mask registration. The image registrations performed in the clinic using a physician-defined clipbox were reviewed by physicians and taken as the standard. Studies were conducted to evaluate the automatic dual registrations using three kinds of landmarks for the clipbox: spine, spine plus internal target volume (ITV) and lung (including as much of the lung as possible). Translational table shifts calculated from the automatic registrations were compared with those of the standard. Results: The mean table shift differences in the lateral direction were 0.03, 0.03 and 0.03 cm for clipboxes based on spine, spine plus ITV and lung, respectively. The mean shift differences in the longitudinal direction were 0.08, 0.08 and 0.08 cm, respectively. The mean shift differences in the vertical direction were 0.03, 0.03 and 0.03 cm, respectively. Conclusion: The automatic registrations using three different landmarks for the clipbox showed similar results. One can use any of the three landmarks in 4D CBCT dual-image registration. Advances in knowledge: The study provides knowledge and recommendations for the application of Elekta 4D CBCT image registration in radiation therapy of lung cancer. PMID:26183932

  18. A flexible patch based approach for combined denoising and contrast enhancement of digital X-ray images.

    PubMed

    Irrera, Paolo; Bloch, Isabelle; Delplanque, Maurice

    2016-02-01

    Denoising and contrast enhancement play key roles in optimizing the trade-off between image quality and X-ray dose. However, these tasks present multiple challenges raised by noise level, low visibility of fine anatomical structures, heterogeneous conditions due to different exposure parameters, and patient characteristics. This work proposes a new method to address these challenges. We first introduce a patch-based filter adapted to the properties of the noise corrupting X-ray images. The filtered images are then used as oracles to define nonparametric noise containment maps that, when applied in a multiscale contrast enhancement framework, allow optimizing the trade-off between improvement of the visibility of anatomical structures and noise reduction. An extensive set of tests on both phantoms and clinical images has shown that the proposed method is better suited than others for visual inspection for diagnosis, even when compared to an algorithm used to process low-dose images in clinical routine. PMID:26716719

  19. Four-dimensional (4D) Motion Detection to Correct Respiratory Effects in Treatment Response Assessment Using Molecular Imaging Biomarkers

    PubMed Central

    Schreibmann, Eduard; Crocker, Ian; Schuster, David M.; Curran, Walter J.; Fox, Tim

    2014-01-01

    Observing early metabolic changes in positron emission tomography (PET) is an essential tool to assess treatment efficiency in radiotherapy. However, for thoracic regions, the use of three-dimensional (3D) PET imaging is unfeasible because the radiotracer activity is smeared by the respiratory motion and averaged during the imaging acquisition process. This motion-induced degradation is similar in magnitude to the treatment-induced changes, and the two occurrences become indiscernible. We present a customized temporal-spatial deformable registration method for quantifying respiratory motion in a four-dimensional (4D) PET dataset. Once the motion is quantified, a motion-corrected (MC) dataset is created by tracking voxels to eliminate breathing-induced changes in the 4D imaging scan. The 4D voxel-tracking data is then summed to yield a 3D MC-PET scan containing only treatment-induced changes. This proof of concept is exemplified on both phantom and clinical data, where the proposed algorithm tracked the trajectories of individual points through the 4D datasets reducing motion to less than 4 mm in all phases. This correction approach using deformable registration can discern motion blurring from treatment-induced changes in treatment response assessment using PET imaging. PMID:24000982

  20. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Tench, Christopher; Gottlob, Irene; Proudlock, Frank; Bai, Li

    2015-11-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation.

  1. Quantifying the image quality and dose reduction of respiratory triggered 4D cone-beam computed tomography with patient-measured breathing

    NASA Astrophysics Data System (ADS)

    Cooper, Benjamin J.; O'Brien, Ricky T.; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J.

    2015-12-01

    Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient’s respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations. Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique. A set containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 pairs of RT 4D CBCT and conventional 4D CBCT image sets from realistic simulations of a 4D CBCT system using a Rando phantom and the digital phantom, XCAT. Each of these image sets was compared to a ground truth dataset, from which a mean absolute pixel difference (MAPD) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and taken as a surrogate for imaging dose. Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT, the average image quality was reduced by 7.6% (Rando study) and 11.1% (XCAT study). However, the average imaging dose reduction was 53% based on needing fewer projections (617 on average) than conventional 4D CBCT (1320 projections). The simulation studies have demonstrated that the RT 4D CBCT method can potentially offer a 53% saving in imaging dose on average compared to conventional 4D CBCT in simulation studies using a wide range of patient-measured breathing traces with a minimal impact on image quality.
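
    Both reported quantities are simple to formalize: MAPD against a ground-truth image, and imaging dose reduction as a projection-count ratio. The sketch below uses the average projection counts quoted in the abstract; the function name is illustrative.

```python
import numpy as np

def mapd(recon, truth):
    """Mean absolute pixel difference between a reconstruction and ground truth."""
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(recon - truth)))

# Imaging-dose surrogate: fewer projections -> proportionally less dose.
rt_projections = 617             # average, RT 4D CBCT (from the abstract)
conventional_projections = 1320  # conventional 4D CBCT (from the abstract)
dose_reduction = 1.0 - rt_projections / conventional_projections  # ~0.53
```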

  2. SU-E-J-183: Quantifying the Image Quality and Dose Reduction of Respiratory Triggered 4D Cone-Beam Computed Tomography with Patient- Measured Breathing

    SciTech Connect

    Cooper, B; OBrien, R; Kipritidis, J; Keall, P

    2014-06-01

    Purpose: Respiratory triggered four dimensional cone-beam computed tomography (RT 4D CBCT) is a novel technique that uses a patient's respiratory signal to drive the image acquisition with the goal of imaging dose reduction without degrading image quality. This work investigates image quality and dose using patient-measured respiratory signals for RT 4D CBCT simulations instead of the synthetic sinusoidal signals used in previous work. Methods: Studies were performed that simulate a 4D CBCT image acquisition using both the novel RT 4D CBCT technique and a conventional 4D CBCT technique from a database of oversampled Rando phantom CBCT projections. A database containing 111 free breathing lung cancer patient respiratory signal files was used to create 111 RT 4D CBCT and 111 conventional 4D CBCT image datasets from realistic simulations of an RT 4D CBCT system. Each of these image datasets was compared to a ground truth dataset, from which a root mean square error (RMSE) metric was calculated to quantify the degradation of image quality. The number of projections used in each simulation was counted and taken as a surrogate for imaging dose. Results: Based on 111 breathing traces, when comparing RT 4D CBCT with conventional 4D CBCT the average image quality was reduced by 7.6%. However, the average imaging dose reduction was 53% based on needing fewer projections (617 on average) than conventional 4D CBCT (1320 projections). Conclusion: The simulation studies using a wide range of patient breathing traces have demonstrated that the RT 4D CBCT method can potentially offer a substantial saving of imaging dose of 53% on average compared to conventional 4D CBCT in simulation studies with a minimal impact on image quality. A patent application (PCT/US2012/048693) has been filed which is related to this work.

  3. 4D cone-beam CT imaging for guidance in radiation therapy: setup verification by use of implanted fiducial markers

    NASA Astrophysics Data System (ADS)

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C. C. M.; Bel, Arjan; Alderliesten, Tanja

    2016-03-01

    The use of 4D cone-beam computed tomography (CBCT) and fiducial markers for guidance during radiation therapy of mobile tumors is challenging due to the trade-off between image quality, imaging dose, and scanning time. We aimed to investigate the visibility of markers and the feasibility of marker-based 4D registration and manual respiration-induced marker motion quantification for different CBCT acquisition settings. A dynamic thorax phantom and a patient with implanted gold markers were included. For both the phantom and patient, the peak-to-peak amplitude of marker motion in the cranial-caudal direction ranged from 5.3 to 14.0 mm, which did not affect the marker visibility and the associated marker-based registration feasibility. While using a medium field of view (FOV) and the same total imaging dose as is applied for 3D CBCT scanning in our clinic, it was feasible to attain improved marker visibility by reducing the imaging dose per projection and increasing the number of projection images. For a small FOV with a shorter rotation arc but similar total imaging dose, streak artifacts were reduced due to using a smaller sampling angle. Additionally, the use of a small FOV allowed reducing total imaging dose and scanning time (~2.5 min) without loss of marker visibility. In conclusion, by using 4D CBCT with identical or lower imaging dose and a reduced gantry speed, it is feasible to attain sufficient marker visibility for marker-based 4D setup verification. Moreover, regardless of the settings, manual marker motion quantification can achieve high accuracy, with errors <1.2 mm.

  4. 4D cone beam CT phase sorting using high frequency optical surface measurement during image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Price, G. J.; Marchant, T. E.; Parkhurst, J. M.; Sharrock, P. J.; Whitfield, G. A.; Moore, C. J.

    2011-03-01

    In image guided radiotherapy (IGRT) two of the most promising recent developments are four dimensional cone beam CT (4D CBCT) and dynamic optical metrology of patient surfaces. 4D CBCT is now becoming commercially available and finds use in treatment planning and verification, and whilst optical monitoring is a young technology, its ability to measure during treatment delivery without dose consequences has led to its uptake in many institutes. In this paper, we demonstrate the use of dynamic patient surfaces, simultaneously captured during CBCT acquisition using an optical sensor, to phase sort projection images for 4D CBCT volume reconstruction. The dual modality approach we describe means that in addition to 4D volumetric data, the system provides correlated wide field measurements of the patient's skin surface with high spatial and temporal resolution. As well as the value of such complementary data in verification and motion analysis studies, it introduces flexibility into the acquisition of the signal required for phase sorting. The specific technique used may be varied according to individual patient circumstances and the imaging target. We give details of three different methods of obtaining a suitable signal from the optical surfaces: simply following the motion of triangulation spots used to calibrate the surfaces' absolute height; monitoring the surface height in a single, arbitrarily selected, camera pixel; and tracking, in three dimensions, the movement of a surface feature. In addition to describing the system and methodology, we present initial results from a case study oesophageal cancer patient.
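
    The phase-sorting step common to these approaches can be sketched as follows (a hypothetical minimal version, with illustrative names: detect end-inhale peaks in the respiratory signal sampled at projection times, interpolate phase linearly between successive peaks, then quantize into phase bins).

```python
import numpy as np

def phase_sort(resp, n_bins=10):
    """Assign each sample of a respiratory trace a phase bin: phase runs
    linearly from 0 to 1 between successive peaks (end-inhale), then is
    quantized into n_bins. Samples outside the first/last peak get -1."""
    resp = np.asarray(resp, dtype=float)
    # Naive local-maximum peak detector (interior samples only).
    peaks = np.where((resp[1:-1] > resp[:-2]) & (resp[1:-1] >= resp[2:]))[0] + 1
    bins = np.full(len(resp), -1)
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(p0, p1)
        phase = (idx - p0) / (p1 - p0)
        bins[idx] = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return bins

t = np.linspace(0, 6 * np.pi, 300)         # three breathing cycles
bins = phase_sort(np.cos(t), n_bins=10)    # projections binned by phase
```

    The flexibility described above amounts to choosing what `resp` is: a triangulation-spot trace, a single camera pixel's height, or a tracked surface feature all yield a usable phase-sorting signal.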

  5. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of information from the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized color coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and better understanding during the visual evaluation of cerebral vascular diseases.
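
    As a simplified stand-in for the curve-fitting approach described above, a per-voxel bolus arrival time can be estimated as the lag that maximizes the cross-correlation of the voxel's temporal intensity curve with the reference curve (an illustration, not the authors' method; the mean subtraction and lag convention are part of this sketch).

```python
import numpy as np

def arrival_lag(curve, reference):
    """Estimate bolus arrival as the shift (in samples) at which a voxel's
    temporal intensity curve best matches the reference curve."""
    curve = np.asarray(curve, dtype=float) - np.mean(curve)
    reference = np.asarray(reference, dtype=float) - np.mean(reference)
    corr = np.correlate(curve, reference, mode="full")
    # In 'full' mode, index len(reference)-1 corresponds to zero lag.
    return int(np.argmax(corr)) - (len(reference) - 1)

t = np.arange(60)
reference = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)  # Gaussian bolus peaking at t=20
voxel = np.roll(reference, 5)                     # same bolus, arriving 5 samples later
lag = arrival_lag(voxel, reference)
```

    A full curve fit, as used in the paper, additionally yields sub-sample arrival times and is more robust when the voxel curve differs in shape from the reference.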

  6. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    SciTech Connect

    Kida, S; Bal, M; Kabus, S; Loo, B; Keall, P; Yamamoto, T

    2014-06-15

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly-functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4D-CT ventilation imaging have been focused on comparison with other imaging modalities including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics have been evaluated and tested by a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation image-based plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1±9.15 (Gy), which was comparable to 25.2±8.60 (Gy) of the SPECT plans (p = 0.89). For other critical organs and PTV, nonsignificant differences were found as well. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation image

  7. Attempt of UAV oblique images and MLS point clouds for 4D modelling of roadside pole-like objects

    NASA Astrophysics Data System (ADS)

    Lin, Yi; West, Geoff

    2014-11-01

    The state-of-the-art remote sensing technologies, namely Unmanned Aerial Vehicle (UAV) based oblique imaging and Mobile Laser Scanning (MLS), show great potential for spatial information acquisition. This study investigated the combination of the two data sources for 4D modelling of roadside pole-like objects. The data for the analysis were collected by the Microdrone md4-200 UAV imaging system and the Sensei MLS system developed by the Finnish Geodetic Institute. Pole extraction, 3D structural parameter derivation and texture segmentation were deployed on the oblique images and point clouds, and their results were fused to yield the 4D models for one example of pole-like objects, namely lighting poles. The combination techniques proved promising.

  8. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.

  9. EarthScope imaging of 4D stress evolution of the San Andreas Fault System

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B. R.; Del Pardo, C.

    2011-12-01

    EarthScope seismic and geodetic observations, combined with sophisticated computational models and powerful visualization tools, are now providing a critical ensemble of information about interseismic stressing rates along the San Andreas Fault System (SAFS). When combined with paleoseismic chronologies of earthquake ruptures spanning the last several hundred years, four-dimensional (4D) simulations of stress evolution spanning multiple earthquake cycles are now possible. To investigate stress variations at depth along the SAFS over multiple earthquake cycles, we use a 4D semi-analytic model that simulates interseismic strain accumulation, coseismic displacement, and post-seismic viscoelastic relaxation of the mantle. The model utilizes geologic estimates of fault locations and slip rates, as well as paleoseismic earthquake rupture histories, and is computed at a 500 m grid resolution to better resolve the sharp deformation gradients at creeping faults. Using EarthScope PBO and ALOS InSAR data, we tune the model locking depths and slip rates to compute the 4D stress accumulation within the seismogenic crust. 4D models show that stress accumulation and stress drop are a complex function of space and time. We use ParaView 3.10, an open-source multi-platform visualization package, for manipulation and visualization of 4D stress variations of fault segments at depth. We use ParaView to create a 3D meshed volume spanning a ~1000 x 1500 x 50 km region of the SAFS and present both volume and sliced views of stress from several viewpoints along the plate boundary. These models reveal pockets of stress concentrated at depth due to the interaction of neighboring fault segments and at fault segment branching junctions. We present several sensitivity tests that reveal the variation of stress at depth as a function of locking depth, slip rate, coefficient of friction, elastic plate thickness, and viscosity. These visualizations lay the groundwork for 4D time

  10. SU-D-17A-04: The Impact of Audiovisual Biofeedback On Image Quality During 4D Functional and Anatomic Imaging: Results of a Prospective Clinical Trial

    SciTech Connect

    Keall, P; Pollock, S; Yang, J; Diehn, M; Berger, J; Graves, E; Loo, B; Yamamoto, T

    2014-06-01

    Purpose: The ability of audiovisual (AV) biofeedback to improve breathing regularity has not previously been investigated for functional imaging studies. The purpose of this study was to investigate the impact of AV biofeedback on 4D-PET and 4D-CT image quality in a prospective clinical trial. We hypothesized that motion blurring in 4D-PET images and the number of artifacts in 4D-CT images are reduced using AV biofeedback. Methods: AV biofeedback is a real-time, interactive and personalized system designed to help a patient self-regulate his/her breathing using a patient-specific representative waveform and musical guides. In an IRB-approved prospective clinical trial, 4D-PET and 4D-CT images of 10 lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images in 6 respiratory bins were analyzed for motion blurring by: (1) decrease of GTVPET and (2) increase of SUVmax in 4D-PET compared to 3D-PET. The 4D-CT images were analyzed for artifacts by: (1) comparing normalized cross correlation-based scores (NCCS); and (2) quantifying a visual assessment score (VAS). A two-tailed paired t-test was used to test the hypotheses. Results: The impact of AV biofeedback on 4D-PET and 4D-CT images varied widely between patients, suggesting inconsistent patient comprehension and capability. Overall, the 4D-PET decrease of GTVPET was 2.0±3.0 cm3 with AV and 2.3±3.9 cm3 with FB (p=0.61). The 4D-PET increase of SUVmax was 1.6±1.0 with AV and 1.1±0.8 with FB (p=0.002). The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 with FB (p=0.32). The 4D-CT VAS was 0.0±2.7 (p=ns). Conclusion: This 10-patient study demonstrated a statistically significant reduction of motion blurring of AV over FB for 1 of 2 functional 4D-PET imaging metrics. No difference between AV and FB was found for the 2 anatomic 4D-CT imaging metrics. Future studies will focus on optimizing the human-computer interface and including patient training sessions for improved

  11. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  12. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation–maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation–maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were

  14. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    PubMed

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
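
    The standard Patlak model underlying both the sPatlak and gPatlak variants is linear in the influx rate Ki, so an indirect (post-reconstruction) fit can be sketched in a few lines. The synthetic input function and values below are purely illustrative; the paper's actual contribution, the nested direct 4D EM reconstruction, is not reproduced here:

    ```python
    import numpy as np

    # Standard Patlak: C_t(t) = Ki * integral(Cp) + V * Cp(t) for late t,
    # i.e. C_t/Cp is linear in integral(Cp)/Cp with slope Ki (influx rate).
    t = np.linspace(0.5, 60.0, 40)             # minutes (synthetic frame times)
    cp = 10.0 * np.exp(-0.1 * t) + 1.0         # synthetic plasma input function
    int_cp = np.cumsum(cp) * (t[1] - t[0])     # crude running integral of Cp
    ki_true, v_true = 0.05, 0.6
    ct = ki_true * int_cp + v_true * cp        # noise-free tissue activity curve

    x = int_cp / cp                            # Patlak abscissa ("stretched time")
    y = ct / cp                                # Patlak ordinate
    ki_est, v_est = np.polyfit(x, y, 1)        # slope = Ki, intercept = V
    ```

    On real short-frame data the `y` samples are very noisy, which is exactly why the abstract pursues direct reconstruction of the Patlak parameters instead of this two-step fit.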

  15. An efficient method for nonnegatively constrained Total Variation-based denoising of medical images corrupted by Poisson noise.

    PubMed

    Landi, G; Piccolomini, E Loli

    2012-01-01

    Medical images obtained with emission processes are corrupted by noise of Poisson type. In this paper the denoising problem is modeled in a Bayesian statistical setting as a nonnegatively constrained minimization problem, where the objective function consists of a data-fitting term, the Kullback-Leibler divergence, plus a regularization term, the Total Variation function, weighted by a regularization parameter. The aim of the paper is to propose an efficient numerical method for the solution of the constrained problem. The method is a Newton projection method, where the inner system is solved by the Conjugate Gradient method, preconditioned and implemented efficiently for this specific application. The numerical results on simulated and real medical images prove the effectiveness of the method, in terms of both accuracy and computational cost.
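
    The objective described above, a Kullback-Leibler data-fit plus total variation, minimized subject to nonnegativity, can be sketched with a plain projected-gradient loop on a 1D signal. The paper itself uses a Newton projection method with preconditioned Conjugate Gradient; the function name, smoothing constant, and step sizes below are illustrative choices, not the authors':

    ```python
    import numpy as np

    def denoise_poisson_tv(y, beta=0.5, eps=1e-3, step=0.05, iters=500):
        """Projected gradient for  min_x KL(y, x) + beta * TV_eps(x),  x >= eps.
        1D sketch only; the paper solves this with a Newton projection method."""
        x = y.astype(float).copy() + eps
        for _ in range(iters):
            grad_kl = 1.0 - y / x                     # gradient of the KL data term
            d = np.diff(x)
            td = d / np.sqrt(d * d + eps)             # smoothed-TV derivative terms
            grad_tv = np.zeros_like(x)
            grad_tv[:-1] -= td                        # -t_i from |x_{i+1} - x_i|
            grad_tv[1:] += td                         # +t_i
            x = np.maximum(x - step * (grad_kl + beta * grad_tv), eps)
        return x

    rng = np.random.default_rng(0)
    clean = np.concatenate([np.full(30, 20.0), np.full(30, 5.0)])
    noisy = rng.poisson(clean).astype(float)          # Poisson-corrupted signal
    denoised = denoise_poisson_tv(noisy)
    ```

    TV regularization flattens the noise within each constant region while the smoothed gradient keeps the step edge largely intact.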

  16. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    SciTech Connect

    Kim, D; Kang, S; Kim, T; Suh, T; Kim, S

    2014-06-01

    Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired by Monte Carlo simulation of a linac-mounted cone-beam computed tomography system, together with an in-house 4D digital phantom generation program. We performed 4D DTS using the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than that based upon the existing filtered-backprojection method. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy, and may also enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP)
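
    The SART-plus-TV iteration named above can be sketched on a toy linear system: a SART back-projection of the row-normalized residual, interleaved with a smoothed-TV descent step. The matrix, phantom, and step sizes are invented for illustration and this is not the authors' implementation:

    ```python
    import numpy as np

    def sart_tv(A, b, shape, n_iter=50, lam=0.9, tv_step=0.02):
        """SART with an interleaved total-variation descent step (sketch)."""
        row_sum = A.sum(axis=1)
        col_sum = A.sum(axis=0)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # SART update: back-project the row-normalized residual
            x = x + lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
            # TV minimization step on the 2D image (smoothed gradient)
            img = x.reshape(shape)
            gx = np.diff(img, axis=0)
            gy = np.diff(img, axis=1)
            tx = gx / np.sqrt(gx ** 2 + 1e-6)
            ty = gy / np.sqrt(gy ** 2 + 1e-6)
            g = np.zeros(shape)
            g[:-1, :] -= tx; g[1:, :] += tx
            g[:, :-1] -= ty; g[:, 1:] += ty
            x = np.clip(x - tv_step * g.ravel(), 0, None)   # nonnegativity
        return x.reshape(shape)

    rng = np.random.default_rng(1)
    truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0     # square phantom
    A = rng.random((48, 64))                            # undersampled "projections"
    b = A @ truth.ravel()
    recon = sart_tv(A, b, (8, 8))
    ```

    With fewer equations (48) than unknowns (64), the TV step supplies the missing prior, which is the same role it plays for undersampled DTS projections.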

  17. SU-E-J-157: Improving the Quality of T2-Weighted 4D Magnetic Resonance Imaging for Clinical Evaluation

    SciTech Connect

    Du, D; Mutic, S; Hu, Y; Caruthers, S; Glide-Hurst, C; Low, D

    2014-06-01

    Purpose: To develop an imaging technique that enables us to acquire T2-weighted 4D Magnetic Resonance Imaging (4DMRI) with sufficient spatial coverage, temporal resolution and spatial resolution for clinical evaluation. Methods: T2-weighted 4DMRI images were acquired from a healthy volunteer using a respiratory-amplitude-triggered T2-weighted Turbo Spin Echo sequence. 10 respiratory states were used to equally sample the respiratory range based on amplitude (0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e and 20%e). To avoid frequent scanning halts, a methodology was devised that split the 10 respiratory states into two packages in an interleaved manner, and the packages were acquired separately. Sixty 3mm sagittal slices at 1.5mm in-plane spatial resolution were acquired to offer good spatial coverage and reasonable spatial resolution. The in-plane field of view was 375mm × 260mm with a nominal scan time of 3 minutes 42 seconds. Acquired 2D images at the same respiratory state were combined to form the 3D image set corresponding to that respiratory state and reconstructed in the coronal view to evaluate whether all slices were at the same respiratory state. The 3D image sets of the 10 respiratory states represented a complete 4D MRI image set. Results: The T2-weighted 4DMRI images were acquired in 10 minutes, which was within a clinically acceptable range. Qualitatively, the acquired MRI images had good image quality for delineation purposes. There were no abrupt position changes in the reconstructed coronal images, which confirmed that all sagittal slices were in the same respiratory state. Conclusion: We demonstrated it was feasible to acquire a T2-weighted 4DMRI image set within a practical amount of time (10 minutes) that had good temporal resolution (10 respiratory states), spatial resolution (1.5mm × 1.5mm × 3.0mm) and spatial coverage (60 slices) for future clinical evaluation.

  18. Locally homogenized and de-noised vector fields for cardiac fiber tracking in DT-MRI images

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Vadakkumpadan, Fijoy; Bayer, Jason; Trayanova, Natalia A.

    2009-02-01

    In this study we develop a methodology to accurately extract and visualize cardiac microstructure from experimental Diffusion Tensor (DT) data. First, a test model was constructed using an image-based model generation technique on Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) data. These images were derived from a dataset with 122x122x500 µm3 voxel resolution. De-noising and image enhancement were applied to this high-resolution dataset to clearly define anatomical boundaries within the images. The myocardial tissue was segmented from structural images using edge detection, region growing, and level set thresholding. The primary eigenvector of the diffusion tensor for each voxel, which represents the longitudinal direction of the fiber, was calculated to generate a vector field. Then an advanced locally regularizing nonlinear anisotropic filter, termed Perona-Malik (PEM), was used to regularize this vector field to eliminate imaging artifacts inherent to DT-MRI from volume averaging of the tissue with the surrounding medium. Finally, the vector field was streamlined to visualize fibers within the segmented myocardial tissue and compare the results with unfiltered data. With this technique, we were able to recover locally regularized (homogenized) fibers with high accuracy by applying the PEM regularization technique, particularly on anatomical surfaces where imaging artifacts were most apparent. This approach not only aids the visualization of noisy complex 3D vector fields obtained from DT-MRI, but also eliminates volume averaging artifacts to provide a realistic cardiac microstructure for use in electrophysiological modeling studies.
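
    The Perona-Malik (PEM) regularizer named above is easiest to see in its classic scalar-image form: diffusion whose conductance decays with the local gradient, so noise is smoothed while edges are preserved. A minimal NumPy sketch (the study applies the analogous filter to the DT-MRI eigenvector field, which is not reproduced here; the parameters are illustrative):

    ```python
    import numpy as np

    def perona_malik(img, n_iter=30, kappa=0.2, dt=0.2):
        """Classic scalar Perona-Malik anisotropic diffusion (2D sketch)."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences to the four neighbours (periodic via roll; edges in
            # this toy example carry near-zero conductance anyway)
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping conductance g(d) = exp(-(d/kappa)^2)
            u = u + dt * (np.exp(-(dn / kappa) ** 2) * dn
                          + np.exp(-(ds / kappa) ** 2) * ds
                          + np.exp(-(de / kappa) ** 2) * de
                          + np.exp(-(dw / kappa) ** 2) * dw)
        return u

    rng = np.random.default_rng(2)
    step = np.zeros((32, 32)); step[:, 16:] = 1.0        # sharp edge
    noisy = step + 0.05 * rng.standard_normal(step.shape)
    smoothed = perona_malik(noisy)
    ```

    With kappa well above the noise amplitude but well below the edge contrast, the flat regions are smoothed while the step survives almost untouched.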

  19. Toward time resolved 4D cardiac CT imaging with patient dose reduction: estimating the global heart motion

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Segars, W. Paul; Fung, George S. K.; Tsui, Benjamin M. W.

    2006-03-01

    Coronary artery imaging with multi-slice helical computed tomography is a promising noninvasive imaging technique. The current major issues include insufficient temporal resolution and large patient dose. We propose an image reconstruction method which provides a solution to both of these problems. The method uses an iterative approach repeating the following four steps until the difference between the two projection data sets falls below a certain criterion in step 4: 1) estimating or updating the cardiac motion vectors, 2) reconstructing the time-resolved 4D dynamic volume images using the motion vectors, 3) calculating the projection data from the current 4D images, 4) comparing them with the measured ones. In this study, we obtain the first estimate of the motion vector. We use the 4D NCAT phantom, a realistic computer model of the human anatomy and cardiac motion, to generate the dynamic fan-beam projection data sets as well as to provide a known truth for the motion. Then, the half-scan reconstruction with the sliding time-window technique is used to generate cine images f(t, r). Here, we use one heart beat for each position r so that the time information is retained. Next, the magnitude of the first derivative of f(t, r) with respect to time, i.e., |df/dt|, is calculated and summed over a region-of-interest (ROI), which is called the mean-absolute difference (MAD). The initial estimate of the vector field is obtained using MAD for each ROI. Results of the preliminary study are presented.
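
    The MAD metric defined above, |df/dt| summed over an ROI, is compact to state in code. A toy sketch with a hypothetical cine array (the function name and data are invented for illustration):

    ```python
    import numpy as np

    def mean_absolute_difference(frames, roi):
        """Sum of |df/dt| over an ROI for a cine series `frames` of shape (t, y, x).
        Large values flag time intervals with strong motion inside the ROI."""
        dfdt = np.abs(np.diff(frames, axis=0))     # |f(t+1, r) - f(t, r)|
        return dfdt[:, roi].sum(axis=1)            # one MAD value per interval

    # Toy cine: a bright pixel that shifts by one row halfway through the cycle
    frames = np.zeros((6, 16, 16))
    for t in range(6):
        frames[t, 5 + (t >= 3), 8] = 1.0
    roi = np.zeros((16, 16), bool)
    roi[4:8, 6:11] = True
    mad = mean_absolute_difference(frames, roi)
    ```

    Only the interval containing the shift registers motion, which is how the MAD time course singles out quiescent cardiac phases.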

  20. 4-D Photoacoustic Tomography

    NASA Astrophysics Data System (ADS)

    Xiang, Liangzhong; Wang, Bo; Ji, Lijun; Jiang, Huabei

    2013-01-01

    Photoacoustic tomography (PAT) offers three-dimensional (3D) structural and functional imaging of living biological tissue with label-free, optical absorption contrast. These attributes lend PAT imaging to a wide variety of applications in clinical medicine and preclinical research. Despite advances in live animal imaging with PAT, there is still a need for 3D imaging at centimeter depths in real time. We report the development of four-dimensional (4D) PAT, which integrates time resolution with 3D spatial resolution, obtained using spherical arrays of ultrasonic detectors. The 4D PAT technique generates motion pictures of the imaged tissue, enabling real-time tracking of dynamic physiological and pathological processes at hundred-micrometer spatial and millisecond temporal resolutions. The 4D PAT technique is used here to image needle-based drug delivery and pharmacokinetics. We also use this technique to monitor 1) fast hemodynamic changes during inter-ictal epileptic seizures and 2) temperature variations during tumor thermal therapy.

  1. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves the binning of projection data to each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given measurements of the breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no requirement of binning. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. A convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy, since no binning is required and the 5D model reduces the number of unknowns.
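
    The 5D model's parametrization, in which every frame is generated from one reference image and two time-independent fields driven by the breathing amplitude and its rate, can be illustrated in 1D. All fields and values below are made up for the sketch; the real model uses 3D vector fields and spatially varying motion:

    ```python
    import numpy as np

    # 5D-model sketch: frame(t) samples the reference image at coordinates
    # displaced by D(r, t) = v(r) * a(t) + f(r) * a'(t), so only f_ref, v
    # and f are unknown -- no per-phase images, hence no projection binning.
    r = np.arange(64, dtype=float)
    f_ref = np.exp(-0.5 * ((r - 32) / 4.0) ** 2)   # reference "image" (1D blob)
    v = np.full(64, 3.0)                            # amplitude motion field (toy)
    fl = np.full(64, 1.0)                           # flow-rate motion field (toy)

    def frame(amplitude, rate):
        """Warp the reference by the amplitude/rate-driven displacement."""
        displacement = v * amplitude + fl * rate
        return np.interp(r - displacement, r, f_ref)

    inhale = frame(1.0, 0.0)   # peak amplitude, zero flow: blob shifts 3 voxels
    ```

    Because the displacement is an explicit function of the measured breathing signal, non-periodic breathing simply traces out different (amplitude, rate) pairs rather than corrupting a phase bin.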

  2. Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method

    PubMed Central

    Yu, Lifeng; Fletcher, Joel G.; Shiung, Maria; Thomas, Kristen B.; Matsumoto, Jane M.; Zingula, Shannon N.; McCollough, Cynthia H.

    2016-01-01

    OBJECTIVE The objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method. MATERIALS AND METHODS Data were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded. RESULTS The mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration. CONCLUSION Although a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric
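
    The adaptive NLM (aNLM) technique is not specified in detail in the abstract, but the classic non-local means it builds on can be sketched directly. This plain, non-adaptive version shows the patch-similarity weighting; aNLM additionally adapts the filtering strength to a local noise estimate. All parameters here are illustrative:

    ```python
    import numpy as np

    def nlm_denoise(img, patch=1, search=5, h=0.15):
        """Classic pixelwise non-local means (sketch, not the paper's aNLM)."""
        p = patch
        pad = np.pad(img, p + search, mode='reflect')
        out = np.zeros_like(img, dtype=float)
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                ci, cj = i + p + search, j + p + search
                ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
                weights, values = [], []
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        cand = pad[ci + di - p:ci + di + p + 1,
                                   cj + dj - p:cj + dj + p + 1]
                        d2 = np.mean((ref - cand) ** 2)     # patch distance
                        weights.append(np.exp(-d2 / h ** 2))
                        values.append(pad[ci + di, cj + dj])
                w = np.array(weights)
                out[i, j] = np.dot(w, values) / w.sum()
        return out

    rng = np.random.default_rng(3)
    clean = np.zeros((24, 24)); clean[:, 12:] = 1.0
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    denoised = nlm_denoise(noisy)
    ```

    Pixels on opposite sides of the edge have large patch distances and hence near-zero weight, so the averaging suppresses noise without blurring the boundary, the property that makes NLM attractive for dose-reduced CT.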

  3. Advanced image reconstruction strategies for 4D prostate DCE-MRI: steps toward clinical practicality

    NASA Astrophysics Data System (ADS)

    Stinson, Eric G.; Borisch, Eric A.; Froemming, Adam T.; Kawashima, Akira; Young, Phillip M.; Warndahl, Brent A.; Grimm, Roger C.; Manduca, Armando; Riederer, Stephen J.; Trzasko, Joshua D.

    2015-09-01

    Dynamic contrast-enhanced (DCE) MRI is an important tool for the detection and characterization of primary and recurring prostate cancer. Advanced reconstruction strategies (e.g., sparse or low-rank regression) provide improved depiction of contrast dynamics and pharmacokinetic parameters; however, the high computational cost of reconstructing 4D (3D+time, 50+ frames) datasets typically inhibits their routine clinical use. Here, a novel alternating direction method of multipliers (ADMM) optimization strategy is described that enables these methods to be executed in under 5 minutes, and thus within the standard clinical workflow. After overviewing the mechanics of this approach, high-performance implementation strategies are discussed and demonstrated through clinical cases.
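
    ADMM's mechanics, alternating a quadratic solve, a proximal (shrinkage) step, and a dual update, are the same whether applied to 4D DCE-MRI or to a textbook sparse-regression problem. A minimal lasso sketch (illustrative splitting only, not the authors' reconstruction code):

    ```python
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
        """Textbook ADMM for  min 0.5*||Ax - b||^2 + lam*||x||_1."""
        n = A.shape[1]
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        AtA = A.T @ A + rho * np.eye(n)
        Atb = A.T @ b
        chol = np.linalg.cholesky(AtA)           # factor once, reuse every pass
        for _ in range(n_iter):
            # x-update: quadratic subproblem via the cached Cholesky factor
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
            # z-update: proximal operator of the l1 term (soft-thresholding)
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
            # dual update
            u = u + x - z
        return z

    rng = np.random.default_rng(4)
    A = rng.standard_normal((40, 20))
    x_true = np.zeros(20); x_true[[3, 7]] = [2.0, -1.5]
    b = A @ x_true
    x_hat = admm_lasso(A, b, lam=0.1)
    ```

    The same factor-once-reuse-often structure is what makes ADMM amenable to the kind of high-performance implementation the abstract targets: the expensive pieces move outside the iteration loop.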

  4. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Abstract: Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image by DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3D E. coli locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177
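
    The correlation-based de-noising itself is not fully specified in the abstract; the sketch below shows the general idea with a simplified stand-in: suppress the stationary background that all holograms share by subtracting the temporal mean, scaled per-frame by a correlation (projection) coefficient. The data and function name are invented for illustration:

    ```python
    import numpy as np

    def correlation_denoise(frames):
        """Remove the stationary hologram background from each frame by
        projecting out the temporal-mean image (simplified sketch)."""
        bg = frames.mean(axis=0)
        bgc = bg - bg.mean()
        out = np.empty_like(frames, dtype=float)
        for k, f in enumerate(frames):
            fc = f - f.mean()
            alpha = (fc * bgc).sum() / (bgc * bgc).sum()  # correlation coefficient
            out[k] = fc - alpha * bgc                     # subtract background part
        return out

    rng = np.random.default_rng(5)
    bg_true = rng.standard_normal((32, 32))               # static fringe pattern
    frames = np.stack([bg_true + 0.0 for _ in range(10)])
    for k in range(10):
        frames[k, 10 + k, 16] += 5.0                      # weak moving scatterer
    cleaned = correlation_denoise(frames)
    ```

    After the projection, the dominant static fringes are gone and the weak moving scatterer, the analogue of a low-scattering bacterium, stands out against a nearly flat residual.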

  5. Enhanced Terahertz Imaging of Small Forced Delamination in Woven Glass Fibre-reinforced Composites with Wavelet De-noising

    NASA Astrophysics Data System (ADS)

    Dong, Junliang; Locquet, Alexandre; Citrin, D. S.

    2016-03-01

    Terahertz (THz) reflection imaging is applied to characterize a woven glass fibre-reinforced composite laminate with a small region of forced delamination. The forced delamination is created by inserting a disk of 25-µm-thick Upilex film, which is below the THz axial resolution, resulting in one featured echo with small amplitude in the reflected THz pulses. Low-amplitude components of the temporal signal due to ambient water vapor produce features comparable in amplitude to those associated with the THz pulse reflected off the interfaces of the delamination, and suppress the contrast of THz C- and B-scans. Wavelet shrinkage de-noising is performed to remove the water-vapor features, leading to enhanced THz C- and B-scans that locate the delamination in three dimensions with high contrast.
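
    Wavelet shrinkage keeps large transform coefficients and soft-thresholds small ones, which removes low-amplitude clutter while preserving the echo. A one-level Haar sketch in pure NumPy; the paper's wavelet family and threshold rule are not given in the abstract, so these choices are illustrative:

    ```python
    import numpy as np

    def haar_shrink(signal, threshold):
        """One-level Haar wavelet soft-threshold de-noising (1D sketch)."""
        x = np.asarray(signal, float)
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
        d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
        y = np.empty_like(x)
        y[0::2] = (a + d) / np.sqrt(2.0)          # inverse Haar transform
        y[1::2] = (a - d) / np.sqrt(2.0)
        return y

    rng = np.random.default_rng(6)
    t = np.linspace(0, 1, 256)
    pulse = np.exp(-((t - 0.5) / 0.05) ** 2)      # echo-like waveform feature
    noisy = pulse + 0.05 * rng.standard_normal(t.size)
    denoised = haar_shrink(noisy, threshold=0.1)
    ```

    With the threshold set around twice the noise level of the detail band, most noise coefficients vanish while the pulse, carried mainly by the approximation band, survives.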

  6. Assessment of Left Ventricular Function in Cardiac MSCT Imaging by a 4D Hierarchical Surface-Volume Matching Process

    PubMed Central

    Simon, Antoine; Boulmier, Dominique; Coatrieux, Jean-Louis; Le Breton, Hervé

    2006-01-01

    Multislice computed tomography (MSCT) scanners offer new perspectives for cardiac kinetics evaluation with 4D dynamic sequences of high contrast and spatiotemporal resolution. A new method is proposed for cardiac motion extraction in multislice CT. Based on a 4D hierarchical surface-volume matching process, it provides detection of the left heart cavities along the acquired sequence and estimation of their 3D surface velocity fields. A Markov random field model is defined to find, according to topological descriptors, the best correspondences between a 3D mesh describing the left endocardium at one time and the 3D acquired volume at the following time. The global optimization of the correspondences is realized with a multiresolution process. Results obtained on simulated and real data demonstrate the capability to extract clinically relevant global and local motion parameters and highlight new perspectives in cardiac computed tomography imaging. PMID:23165027

  7. A spatio-temporal filtering approach to denoising of single-trial ERP in rapid image triage.

    PubMed

    Yu, Ke; Shen, Kaiquan; Shao, Shiyun; Ng, Wu Chun; Kwok, Kenneth; Li, Xiaoping

    2012-03-15

    Conventional search for images containing points of interest (POI) in large-volume imagery is costly and sometimes even infeasible. The rapid image triage (RIT) system which is a human cognition guided computer vision technique is potentially a promising solution to the problem. In the RIT procedure, images are sequentially presented to a subject at a high speed. At the instant of observing a POI image, unique POI event-related potentials (ERP) characterized by P300 will be elicited and measured on the scalp. With accurate single-trial detection of such unique ERP, RIT can differentiate POI images from non-POI images. However, like other brain-computer interface systems relying on single-trial detection, RIT suffers from the low signal-to-noise ratio (SNR) of the single-trial ERP. This paper presents a spatio-temporal filtering approach tailored for the denoising of single-trial ERP for RIT. The proposed approach is essentially a non-uniformly delayed spatial Gaussian filter that attempts to suppress the non-event related background electroencephalogram (EEG) and other noises without significantly attenuating the useful ERP signals. The efficacy of the proposed approach is illustrated by both simulation tests and real RIT experiments. In particular, the real RIT experiments on 20 subjects show a statistically significant and meaningful average decrease of 9.8% in RIT classification error rate, compared to that without the proposed approach.
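The proposed filter is described as a non-uniformly delayed spatial Gaussian. A minimal sketch of that idea, assuming integer per-channel delays (in samples) and known electrode distances to a reference site (all toy values, not from the paper), is:

```python
import numpy as np

def delayed_spatial_gaussian(eeg, distances, delays, sigma=2.0):
    """Average channels onto a reference site: each channel is shifted
    back by its assumed latency (samples) and weighted by a spatial
    Gaussian of its distance to the reference electrode.
    eeg: (channels, samples)."""
    w = np.exp(-0.5 * (np.asarray(distances) / sigma) ** 2)
    w /= w.sum()
    out = np.zeros(eeg.shape[1])
    for ch in range(eeg.shape[0]):
        out += w[ch] * np.roll(eeg[ch], -delays[ch])  # undo channel delay
    return out

# Toy data: a P300-like pulse appearing later on more distant channels.
delays = [0, 2, 4, 6]
distances = [0.0, 1.0, 2.0, 3.0]
eeg = np.zeros((4, 200))
for ch, d in enumerate(delays):
    eeg[ch, 50 + d] = 1.0
filtered = delayed_spatial_gaussian(eeg, distances, delays)
```

Because the delays realign the event-related component before the spatial average, the ERP adds coherently while uncorrelated background EEG averages down, which is the intuition behind the SNR gain.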

  8. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    SciTech Connect

    Nyflot, Matthew J.; Lee, Tzu-Cheng; Alessio, Adam M.; Kinahan, Paul E.; Wollenweber, Scott D.; Stearns, Charles W.; Bowen, Stephen R.

    2015-01-15

    Purpose: Respiratory-correlated (4D) PET/CT is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in a ground-truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUVmax, SUVmean, SUVpeak, and segmented tumor volume was evaluated as RCmax, RCmean, RCpeak, and RCvol, representing the percent difference relative to the static ground-truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, 4DMIP CTAC recovery was 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RCmax, RCpeak, RCmean, and RCvol, respectively. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RCmax, RCpeak, RCmean, and RCvol. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by
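The recovery coefficients above are percent differences of a metric (e.g. SUVmax) relative to the static ground-truth acquisition; as a one-line illustration with made-up values:

```python
def recovery_coefficient(measured, static_truth):
    """Percent difference relative to the static ground truth, as used
    for RCmax, RCmean, RCpeak, and RCvol."""
    return 100.0 * (measured - static_truth) / static_truth

# e.g. SUVmax of 7.6 under motion vs. 8.0 in the static scan:
rc = recovery_coefficient(7.6, 8.0)   # -5.0 percent
```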

  9. WE-G-BRF-09: Force- and Image-Adaptive Strategies for Robotised Placement of 4D Ultrasound Probes

    SciTech Connect

    Kuhlemann, I; Bruder, R; Ernst, F; Schweikard, A

    2014-06-15

    Purpose: To allow continuous acquisition of high-quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a 6-axis industrial robot (Adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact-pressure regulation by a PD controller. The basis for live target tracking is a continuous minimum contact pressure, which ensures good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib and lung shadows are a common source of interference and can disrupt the tracking. To ensure continuous tracking, these artefacts have to be detected so that the probe can be realigned automatically. The detection is realised by multiple algorithms based on entropy calculations as well as a determination of the image quality. Results: Through active contact-pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The image-processing results clearly demonstrate the feasibility of detecting image artefacts such as rib shadows in real time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact-pressure control and automatic detection of image artefacts. This makes it possible to compensate for such interferences by realigning the probe and thus continuously optimising the ultrasound images. This is a major step towards fully automated transducer positioning and opens the possibility of stable target tracking in
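Contact-pressure regulation by a PD controller can be sketched as follows; the plant model (contact force proportional to probe insertion depth), the gains, and the setpoint are all assumptions for illustration, not values from the abstract.

```python
def simulate_pd(setpoint=5.0, kp=2.0, kd=0.1, stiffness=2.0,
                steps=500, dt=0.01):
    """PD regulation of probe contact force against a toy linear-elastic
    chest model: force = stiffness * insertion depth. Returns the final
    measured force after `steps` control cycles."""
    depth, prev_err = 0.0, None
    for _ in range(steps):
        force = stiffness * depth               # measured contact force (N)
        err = setpoint - force                  # proportional term input
        derr = 0.0 if prev_err is None else (err - prev_err) / dt
        prev_err = err
        depth += (kp * err + kd * derr) * dt    # robot advances the probe
    return stiffness * depth

final_force = simulate_pd()   # settles near the 5 N setpoint
```

The derivative term damps the response, which matters in practice because respiratory chest motion continually perturbs the measured force.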

  10. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results on two levels: (a) based on their built-in metadata including geo-location information and (b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  11. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) standard FDK algorithm, (2) conventional total variation (CTV) based algorithm, (3) prior image constrained compressed sensing (PICCS) algorithm, and (4) motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the

  12. 4D optical coherence tomography of the embryonic heart using gated imaging

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rothenberg, Florence; Roy, Debashish; Nikolski, Vladimir P.; Wilson, David L.; Efimov, Igor R.; Rollins, Andrew M.

    2005-04-01

    Computed tomography (CT), ultrasound, and magnetic resonance imaging have been used to image and diagnose diseases of the human heart. By gating the acquisition of the images to the heart cycle (gated imaging), these modalities enable one to produce 3D images of the heart without significant motion artifact and to more accurately calculate various parameters such as ejection fractions [1-3]. Unfortunately, these imaging modalities give inadequate resolution when investigating embryonic development in animal models. Defects in developmental mechanisms during embryogenesis have long been thought to result in congenital cardiac anomalies. Our understanding of normal mechanisms of heart development and how abnormalities can lead to defects has been hampered by our inability to detect anatomic and physiologic changes in these small (<2 mm) organs. Optical coherence tomography (OCT) has made it possible to visualize internal structures of the living embryonic heart with high resolution in two and three dimensions. OCT offers higher resolution than ultrasound (30 µm axial, 90 µm lateral) and magnetic resonance microscopy (25 µm axial, 31 µm lateral) [4, 5], with greater depth penetration than confocal microscopy (200 µm). OCT uses back-reflected light from a sample to create an image with axial resolutions ranging from 2-15 µm while penetrating 1-2 mm in depth [6]. In the past, OCT groups have estimated ejection fractions using 2D images in Xenopus laevis [7], created 3D renderings of chick embryo hearts [8], and used a gated reconstruction technique to produce 2D Doppler OCT images of an in vivo Xenopus laevis heart [9]. In this paper we present a gated imaging system that allowed us to produce a 16-frame 3D movie of a beating chick embryo heart. The heart was excised from a day-two (stage 13) chicken embryo and electrically paced at 1 Hz. We acquired 2D images (B-scans) in 62.5 ms, which provides enough temporal resolution to distinguish end
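With the heart paced at a known 1 Hz, retrospective gating reduces to assigning each B-scan to a phase frame by its timestamp modulo the pacing period. A minimal sketch of that bookkeeping (frame count and timestamps are illustrative):

```python
import numpy as np

def gate_frames(timestamps, period=1.0, n_frames=16):
    """Assign B-scans of a paced heart to cardiac phase frames, given a
    known pacing period (seconds)."""
    phase = (np.asarray(timestamps) % period) / period   # 0..1 within cycle
    return (phase * n_frames).astype(int) % n_frames

ts = np.arange(0.0, 3.0, 0.0625)      # one B-scan every 62.5 ms
frames = gate_frames(ts)              # cycles through frames 0..15
```

B-scans landing in the same frame bin across successive cycles are then combined into one volume per phase, yielding the 16-frame 3D movie.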

  13. Off-line determination of the optimal number of iterations of the robust anisotropic diffusion filter applied to denoising of brain MR images.

    PubMed

    Ferrari, Ricardo J

    2013-02-01

    Although anisotropic diffusion filters have been used extensively and with great success in medical image denoising, one limitation of this iterative approach, when used in fully automatic medical image processing schemes, is that the quality of the resulting denoised image is highly dependent on the number of iterations of the algorithm. Too many iterations may excessively blur the edges of anatomical structures, while too few may not be enough to remove the undesirable noise. In this work, a mathematical model is proposed to automatically determine the number of iterations of the robust anisotropic diffusion filter applied to the problem of denoising three common human brain magnetic resonance (MR) image types (T1-weighted, T2-weighted and proton density). The model is determined off-line by maximizing the mean structural similarity index, which is used in this work as a metric for quantitative assessment of the processed images obtained after each iteration of the algorithm. After determining the model parameters, the optimal number of iterations of the algorithm is easily determined without requiring any extra computation time. The proposed method was tested on 3D synthetic and clinical human brain MR images and the results of qualitative and quantitative evaluation have shown its effectiveness. PMID:23124813
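The offline selection step can be sketched as: diffuse, score each iterate against a reference, and keep the iteration count that maximizes the score. The sketch below uses plain Perona-Malik diffusion and a simplified single-window SSIM; the paper's robust filter and local-window mean SSIM are more elaborate, and all parameter values here are illustrative.

```python
import numpy as np

def diffusion_step(img, kappa=0.1, lam=0.2):
    # One Perona-Malik iteration: 4-neighbour differences weighted by an
    # exponential edge-stopping function (lam <= 0.25 for stability).
    out = img.copy()
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        g = np.roll(img, shift, axis) - img
        out += lam * np.exp(-(g / kappa) ** 2) * g
    return out

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Single-window (global) SSIM; the true MSSIM averages local windows.
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

img, scores = noisy, []
for _ in range(20):                     # score each iterate off-line
    scores.append(global_ssim(img, clean))
    img = diffusion_step(img)
best_iter = int(np.argmax(scores))      # offline-optimal iteration count
```

In the paper this brute-force curve is replaced by a fitted model, so at run time the optimal iteration count is predicted without computing SSIM at all.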

  14. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    PubMed

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy to implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, whereas it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability. PMID:26080302
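The paper's weighted-least-squares spatiotemporal scheme is not reproduced here; as a hedged 1D sketch of the same family of methods, the regularized Van Cittert iteration below alternates a residual-correction step with a Laplacian smoothing term standing in for the spatial regularizer (kernel, gains, and iteration count are illustrative).

```python
import numpy as np

def deconvolve(data, psf, alpha=0.5, beta=0.01, iters=50):
    """Regularized Van Cittert iteration:
    u <- u + alpha * (data - psf * u) + beta * Laplacian(u)."""
    u = data.copy()
    for _ in range(iters):
        resid = data - np.convolve(u, psf, mode="same")
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # smoothness prior
        u = u + alpha * resid + beta * lap
    return u

psf = np.array([0.06, 0.24, 0.40, 0.24, 0.06])   # blur kernel (sums to 1)
clean = np.zeros(101)
clean[50] = 1.0                                  # point source
blurred = np.convolve(clean, psf, mode="same")
restored = deconvolve(blurred, psf)              # peak sharpened toward 1
```

Each iteration pushes the estimate toward the data in a deconvolving direction while the beta term suppresses the noise amplification that unregularized deconvolution would cause.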

  15. A novel non-registration based segmentation approach of 4D dynamic upper airway MR images: minimally interactive fuzzy connectedness

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Sin, Sanghun; Wagshul, Mark E.; Arens, Raanan

    2014-03-01

    There are several disease conditions that lead to upper airway restrictive disorders. In the study of these conditions, it is important to take into account the dynamic nature of the upper airway. Currently, dynamic MRI is the modality of choice for studying these diseases. Unfortunately, the contrast resolution obtainable in the images poses many challenges for an effective segmentation of the upper airway structures. No viable methods have been developed to date to solve this problem. In this paper, we demonstrate the adaptation of the iterative relative fuzzy connectedness (IRFC) algorithm for this application as a potential practical tool. After preprocessing to correct for background image non-uniformities and the non-standardness of MRI intensities, seeds are specified for the airway and its crucial background tissue components in only the 3D image corresponding to the first time instance of the 4D volume. Subsequently, the process runs without human interaction and segments the whole 4D volume in 10 s. Our evaluations indicate that the segmentations are of very good quality, achieving true-positive and false-positive volume fractions and boundary distance, with respect to reference manual segmentations, of about 93%, 0.1%, and 0.5 mm, respectively.

  16. Dynamic Multiscale Boundary Conditions for 4D CT Images of Healthy and Emphysematous Rat

    SciTech Connect

    Jacob, Rick E.; Carson, James P.; Thomas, Mathew; Einstein, Daniel R.

    2013-06-14

    Changes in the shape of the lung during breathing determine the movement of airways and alveoli, and thus impact airflow dynamics. Modeling airflow dynamics in health and disease is a key goal for predictive multiscale models of respiration. Past efforts to model changes in lung shape during breathing have measured shape at multiple breath-holds. However, breath-holds do not capture hysteretic differences between inspiration and expiration resulting from the additional energy required for inspiration. Alternatively, imaging dynamically, without breath-holds, allows measurement of hysteretic differences. In this study, we acquire multiple micro-CT images per breath (4DCT) in live rats, and from these images we develop, for the first time, dynamic volume maps. These maps show changes in local volume across the entire lung throughout the breathing cycle and accurately predict the global pressure-volume (PV) hysteresis.
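A standard way to compute a local volume map from registered image pairs (the paper does not spell out its formula, so this is a hedged sketch) is the Jacobian determinant of the deformation, det(I + ∇u), evaluated at every voxel of the displacement field u:

```python
import numpy as np

def local_volume_ratio(disp):
    """Local volume-change map det(I + grad(u)) for a displacement field
    u with shape (3, nx, ny, nz) on a unit-spaced grid."""
    grads = np.array([np.gradient(disp[i]) for i in range(3)])  # (3, 3, ...)
    jac = np.moveaxis(grads, [0, 1], [-2, -1])   # (..., 3, 3), du_i/dx_j
    jac = jac + np.eye(3)                        # deformation gradient
    return np.linalg.det(jac)

# Sanity check: a uniform 10% expansion u = 0.1 * x in every direction
# should give a volume ratio of 1.1**3 = 1.331 everywhere.
coords = np.indices((8, 8, 8)).astype(float)
ratio = local_volume_ratio(0.1 * coords)
```

Values above 1 mark locally inflating tissue and values below 1 locally deflating tissue; summing (ratio − 1) over the lung mask recovers the global volume change used to check against the PV loop.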

  17. Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution.

    PubMed

    Fernandez, Romain; Das, Pradeep; Mirabet, Vincent; Moscardi, Eric; Traas, Jan; Verdeil, Jean-Luc; Malandain, Grégoire; Godin, Christophe

    2010-07-01

    Quantitative information on growing organs is required to better understand morphogenesis in both plants and animals. However, detailed analyses of growth patterns at cellular resolution have remained elusive. We developed an approach, multiangle image acquisition, three-dimensional reconstruction and cell segmentation-automated lineage tracking (MARS-ALT), in which we imaged whole organs from multiple angles, computationally merged and segmented these images to provide accurate cell identification in three dimensions and automatically tracked cell lineages through multiple rounds of cell division during development. Using these methods, we quantitatively analyzed Arabidopsis thaliana flower development at cell resolution, which revealed differential growth patterns of key regions during early stages of floral morphogenesis. Lastly, using rice roots, we demonstrated that this approach is both generic and scalable. PMID:20543845

  19. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage are very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
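The greedy sparse coding the paper builds on can be sketched with a plain Orthogonal Matching Pursuit over a concatenated two-level dictionary. The "learned" atoms below are random stand-ins and the fixed level is the identity basis; the paper's actual structure and its accelerated coding scheme are not reproduced here.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then refit all picked atoms by
    least squares. Columns of D are unit-norm atoms."""
    resid, idx = x.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        resid = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

# Two-level dictionary: a fixed orthonormal top level (identity here,
# a DCT basis in practice) plus a few "learned" atoms (random stand-ins).
rng = np.random.default_rng(2)
fixed = np.eye(16)
learned = rng.standard_normal((16, 8))
learned /= np.linalg.norm(learned, axis=0)
D = np.hstack([fixed, learned])
x = 3.0 * D[:, 20] + 0.5 * D[:, 2]       # signal sparse in the dictionary
code = omp(D, x, n_nonzero=2)
```

The speed argument in the paper comes from exploiting the orthonormality of the top level during the refit step, which plain OMP (the least-squares solve above) does not do.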

  20. 4-D flow magnetic resonance imaging: blood flow quantification compared to 2-D phase-contrast magnetic resonance imaging and Doppler echocardiography

    PubMed Central

    Gabbour, Maya; Schnell, Susanne; Jarvis, Kelly; Robinson, Joshua D.; Markl, Michael

    2015-01-01

    Background Doppler echocardiography (echo) is the reference standard for blood flow velocity analysis, and two-dimensional (2-D) phase-contrast magnetic resonance imaging (MRI) is considered the reference standard for quantitative blood flow assessment. However, both clinical standard-of-care techniques are limited to 2-D acquisitions with single-direction velocity encoding, which may make them inadequate to assess the complex three-dimensional hemodynamics seen in congenital heart disease. Four-dimensional flow MRI (4-D flow) enables qualitative and quantitative analysis of complex blood flow in the heart and great arteries. Objectives The objectives of this study are to compare 4-D flow with 2-D phase-contrast MRI for quantification of aortic and pulmonary flow and to evaluate the advantage of 4-D flow-based volumetric flow analysis compared to 2-D phase-contrast MRI and echo for peak velocity assessment in children and young adults. Materials and methods Two-dimensional phase-contrast MRI of the aortic root, main pulmonary artery (MPA), and right and left pulmonary arteries (RPA, LPA) and 4-D flow with volumetric coverage of the aorta and pulmonary arteries were performed in 50 patients (mean age: 13.1±6.4 years). Four-dimensional flow analyses included calculation of net flow and regurgitant fraction with 4-D flow analysis planes similarly positioned to 2-D planes. In addition, 4-D flow volumetric assessment of aortic root/ascending aorta and MPA peak velocities was performed and compared to 2-D phase-contrast MRI and echo. Results Excellent correlation and agreement were found between 2-D phase-contrast MRI and 4-D flow for net flow (r=0.97, P<0.001) and excellent correlation with good agreement was found for regurgitant fraction (r= 0.88, P<0.001) in all vessels. Two-dimensional phase-contrast MRI significantly underestimated aortic (P= 0.032) and MPA (P<0.001) peak velocities compared to echo, while volumetric 4-D flow analysis resulted in higher (aortic: P=0
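Net flow and regurgitant fraction from a phase-contrast (or 4-D flow) analysis plane reduce to integrating through-plane velocity over the vessel cross-section and time. A hedged sketch of that bookkeeping (units and array shapes are illustrative assumptions):

```python
import numpy as np

def flow_metrics(velocity, pixel_area, dt):
    """Net flow and regurgitant fraction from through-plane velocity
    maps. velocity: (time, ny, nx) in cm/s within the vessel mask,
    pixel_area in cm^2, dt in s. Returns (net flow in ml,
    regurgitant fraction in percent)."""
    q = velocity.sum(axis=(1, 2)) * pixel_area   # instantaneous flow, ml/s
    forward = q[q > 0].sum() * dt                # antegrade volume, ml
    backward = -q[q < 0].sum() * dt              # retrograde volume, ml
    return forward - backward, 100.0 * backward / forward

# Toy cycle: 9 forward frames of a 2x2-pixel plane, 1 regurgitant frame.
v = np.ones((10, 2, 2))
v[9] = -0.5
net, rf = flow_metrics(v, pixel_area=1.0, dt=0.1)
```

The 4-D flow advantage described above is that the analysis plane (and the peak-velocity search) can be repositioned anywhere in the acquired volume after the scan, instead of being fixed at acquisition time.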

  1. A 3D- and 4D-ESR imaging system for small animals.

    PubMed

    Oikawa, K; Ogata, T; Togashi, H; Yokoyama, H; Ohya-Nishiguchi, H; Kamada, H

    1996-01-01

    A new version of an in vivo ESR-CT system, composed of a custom-made 0.7-GHz ESR spectrometer, an air-core magnet with a field-scanning coil, three field-gradient coils, and two computers, enables up- and down-field rapid magnetic-field scanning linearly controlled by computer. 3D images of the distribution of nitroxide radicals injected into the brains and livers of rats and mice were obtained in 1.5 min with a resolution of 1 mm. We have also succeeded in obtaining spatiotemporal (4D) images of the animals.

  2. Sparse-CAPR: Highly-Accelerated 4D CE-MRA with Parallel Imaging and Nonconvex Compressive Sensing

    PubMed Central

    Trzasko, Joshua D.; Haider, Clifton R.; Borisch, Eric A.; Campeau, Norbert G.; Glockner, James F.; Riederer, Stephen J.; Manduca, Armando

    2012-01-01

    CAPR is a SENSE-type parallel 3DFT acquisition paradigm for 4D contrast-enhanced magnetic resonance angiography (CE-MRA) that has been demonstrated capable of providing high spatial and temporal resolution, diagnostic-quality images at very high acceleration rates. However, CAPR images are typically reconstructed online using Tikhonov regularization and partial Fourier methods, which are prone to exhibit noise amplification and undersampling artifacts when operating at very high acceleration rates. In this work, a sparsity-driven offline reconstruction framework for CAPR is developed and demonstrated to consistently provide improvements over the currently employed reconstruction strategy against these ill effects. Moreover, the proposed reconstruction strategy requires no changes to the existing CAPR acquisition protocol, and an efficient numerical optimization and hardware system are described that allow a 256×160×80 CE-MRA volume to be reconstructed from an 8-channel data set in less than two minutes. PMID:21608028

  3. Online 4d Reconstruction Using Multi-Images Available Under Open Access

    NASA Astrophysics Data System (ADS)

    Ioannides, M.; Hadjiprocopi, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E.; Makantasis, K.; Santos, P.; Fellner, D.; Stork, A.; Balet, O.; Julien, M.; Weinlinger, G.; Johnson, P. S.; Klein, M.; Fritsch, D.

    2013-07-01

    The advent of technology in digital cameras and their incorporation into virtually any smart mobile device has led to an explosion of the number of photographs taken every day. Today, the number of images stored online and available freely has reached unprecedented levels. It is estimated that in 2011, there were over 100 billion photographs stored in just one of the major social media sites. This number is growing exponentially. Moreover, advances in the fields of Photogrammetry and Computer Vision have led to significant breakthroughs such as the Structure from Motion algorithm which creates 3D models of objects using their two-dimensional photographs. The existence of powerful and affordable computational machinery enables the reconstruction not only of complex structures but also of entire cities. This paper illustrates an overview of our methodology for producing 3D models of Cultural Heritage structures such as monuments and artefacts from 2D data (pictures, video), available on Internet repositories, social media, Google Maps, Bing, etc. We also present new approaches to semantic enrichment of the end results and their subsequent export to Europeana, the European digital library, for integrated, interactive 3D visualisation within regular web browsers using WebGL and X3D. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical structures from millions of images floating around the web and interact with them.

  4. 4-D imaging and monitoring of the Solfatara crater (Italy) by ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Pilz, Marco; Parolai, Stefano; Woith, Heiko; Gresse, Marceau; Vandemeulebrouck, Jean

    2016-04-01

    Imaging shallow subsurface structures and monitoring related temporal variations are two of the main tasks for modern geosciences and seismology. Although many observations have reported temporal velocity changes, e.g., in volcanic areas and on landslides, new methods based on passive sources like ambient seismic noise can provide accurate spatially and temporally resolved information on the velocity structure and on velocity changes. The success of these passive applications is explained by the fact that these methods are based on surface waves, which are always present in the ambient seismic noise wave field because they are excited preferentially by superficial sources. Such surface waves can easily be extracted because they dominate the Green's function between receivers located at the surface. For real-time monitoring of the shallow velocity structure of the Solfatara crater, one of the forty volcanoes in the Campi Flegrei area, characterized by intense hydrothermal activity due to the interaction of deep convection and meteoric water, we have installed a dense network of 50 seismological sensing units covering the whole surface area in the framework of the European project MED-SUV (The MED-SUV project has received funding from the European Union Seventh Framework Programme FP7 under Grant agreement no 308665). Continuous recordings of the ambient seismic noise over several days as well as signals of an active vibroseis source have been used. Based on a weighted inversion procedure for 3D passive imaging using ambient noise cross-correlations of both Rayleigh and Love waves, we will present a high-resolution shear-wave velocity model of the structure beneath the Solfatara crater and its temporal changes. Results of the seismic tomography are compared with a 3D electrical resistivity model and a CO2 flux map.
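The core of ambient-noise methods is that cross-correlating long noise records from two stations yields an estimate of the inter-station Green's function, with a peak at the travel-time difference. A toy 1D sketch of that principle (a single synthetic noise source, sample-unit delays, no bandpass or stacking):

```python
import numpy as np

# Two stations record the same random ambient source with different
# travel times; their cross-correlation peaks at the travel-time
# difference, approximating the inter-station Green's function arrival.
rng = np.random.default_rng(3)
n = 400
src = np.zeros(n)
src[100:300] = rng.standard_normal(200)   # noise burst away from edges
sta1 = np.roll(src, 5)                    # 5-sample travel time
sta2 = np.roll(src, 15)                   # 15-sample travel time
xcorr = np.correlate(sta2, sta1, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)     # peak at 15 - 5 = 10 samples
```

In practice one stacks correlations over days of data and many source positions so that incoherent contributions cancel, then inverts the emergent surface-wave travel times for the velocity model.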

  5. 4D-CT imaging of a volume influenced by respiratory motion on multi-slice CT.

    PubMed

    Pan, Tinsu; Lee, Ting-Yim; Rietzel, Eike; Chen, George T Y

    2004-02-01

We propose a new scanning protocol for generating 4D-CT image data sets influenced by respiratory motion. A cine scanning protocol is used during data acquisition, and two registration methods are used to sort images into temporal phases. A volume is imaged in multiple acquisitions of 1 or 2 cm length along the cranial-caudal direction. In each acquisition, the scans are continuously acquired for a time interval greater than or equal to the average respiratory cycle plus the duration of the data for an image reconstruction. The x-ray is turned off during CT table translation, and the acquisition is repeated until the prescribed volume is completely scanned. Scanning 20 cm of coverage takes about 1 min with an eight-slice CT or 2 min with a four-slice CT. After data acquisition, the CT data are registered into respiratory phases based on either an internal anatomical match or an external respiratory signal. The internal approach registers the data according to correlation of anatomy in the CT images between two adjacent locations in consecutive respiratory cycles. We have demonstrated the technique with ROIs placed in the region of the diaphragm. The external approach registers the image data according to an externally recorded respiratory signal generated by the Real-Time Position Management (RPM) Respiratory Gating System (Varian Medical Systems, Palo Alto, CA). Compared with previously reported prospective or retrospective imaging of respiratory motion with a single-slice or multi-slice CT, the 4D-CT method proposed here provides (1) a scan time three to six times shorter than single-slice CT with prospective gating; (2) a scan time two to four times shorter than a previously reported multi-slice CT implementation; and (3) images over all phases of a breathing cycle. We have applied the scanning and registration methods on phantom, animal and patients, and initial results suggest the applicability of both the scanning and the
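The external, signal-based sorting step can be sketched as a phase-binning routine: detect peaks of the respiratory signal and assign each image a phase by the fraction of the cycle elapsed at its acquisition time. This is an illustrative sketch, not the RPM system's algorithm; the 4 s cycle, sampling rate and acquisition times are invented:

```python
import numpy as np

def sort_by_phase(acq_times, resp_times, resp_signal, n_bins=10):
    """Assign each reconstructed image (by acquisition time) to a respiratory
    phase bin [0, n_bins), with phase measured as the fraction of the cycle
    elapsed between successive inspiration peaks of an external signal."""
    peaks = [i for i in range(1, len(resp_signal) - 1)
             if resp_signal[i] >= resp_signal[i - 1]
             and resp_signal[i] > resp_signal[i + 1]]
    peak_times = np.asarray(resp_times)[peaks]
    bins = []
    for t in acq_times:
        k = np.searchsorted(peak_times, t) - 1
        if k < 0 or k + 1 >= len(peak_times):
            bins.append(None)                     # outside a complete cycle
            continue
        phase = (t - peak_times[k]) / (peak_times[k + 1] - peak_times[k])
        bins.append(int(phase * n_bins) % n_bins)
    return bins

# Hypothetical 4 s breathing cycle sampled at 25 Hz; peaks at t = 0, 4, 8 s.
t = np.arange(0, 12, 0.04)
signal = np.cos(2 * np.pi * t / 4.0)
phases = sort_by_phase([4.5, 6.0, 7.9], t, signal)
```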

  6. SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise

    SciTech Connect

    Oliver, J; Budzevich, M; Zhang, G; Latifi, K; Dilling, T; Balagurunathan, Y; Gu, Y; Grove, O; Feygelman, V; Gillies, R; Moros, E; Lee, H.

    2014-06-15

Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors; e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected by it. Methods: Three levels of Gaussian noise were added to 8 lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features were extracted from segmented tumors: 14 shape, 19 intensity (1stO), 18 GLCM texture (2ndO; from grey-level co-occurrence matrices) and 11 RLM texture (2ndO; from run-length matrices) features. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
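A toy version of one of the 2ndO features, GLCM entropy, illustrates the reported sensitivity of texture features to added Gaussian noise. The synthetic image, noise level and 8-level binning are invented for the sketch; the study used 256 grey levels and 13 offset directions:

```python
import numpy as np

def glcm_entropy(img, levels=8, offset=(0, 1)):
    """Entropy of a grey-level co-occurrence matrix for one offset;
    intensities (assumed in [0, 1)) are binned into `levels` grey levels."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 0.99, 32), (32, 1))   # synthetic smooth "lesion"
noisy = np.clip(smooth + 0.1 * rng.standard_normal(smooth.shape), 0, 0.999)
```

Noise spreads co-occurrence mass off the GLCM diagonal, so the entropy of the noisy image is higher, which is the kind of feature drift the study quantifies.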

  7. 4D imaging of fracturing in organic-rich shales during heating

    SciTech Connect

Maya Kobchenko; Hamed Panahi; François Renard; Dag K. Dysthe; Anders Malthe-Sørenssen; Adriano Mazzini; Julien Scheibert; Bjørn Jamtveit; Paul Meakin

    2011-12-01

To better understand the mechanisms of fracture pattern development and fluid escape in low permeability rocks, we performed time-resolved in situ X-ray tomography imaging to investigate the processes that occur during the slow heating (from 60 to 400 °C) of organic-rich Green River shale. At about 350 °C cracks nucleated in the sample, and as the temperature continued to increase, these cracks propagated parallel to shale bedding and coalesced, thus cutting across the sample. Thermogravimetry and gas chromatography revealed that the fracturing occurring at ≈350 °C was associated with significant mass loss and release of light hydrocarbons generated by the decomposition of immature organic matter. Kerogen decomposition is thought to cause an internal pressure build up sufficient to form cracks in the shale, thus providing pathways for the outgoing hydrocarbons. We show that a 2D numerical model based on this idea qualitatively reproduces the experimentally observed dynamics of crack nucleation, growth and coalescence, as well as the irregular outlines of the cracks. Our results provide a new description of fracture pattern formation in low permeability shales.

  8. Impact of scanning parameters and breathing patterns on image quality and accuracy of tumor motion reconstruction in 4D CBCT: a phantom study.

    PubMed

Lee, Soyoung; Yan, Guanghua; Lu, Bo; Kahler, Darren; Li, Jonathan G; Samant, Sanjiv S

    2015-01-01

Four-dimensional cone-beam CT (4D CBCT) substantially reduces respiration-induced motion blurring artifacts in three-dimensional (3D) CBCT. However, the image quality of 4D CBCT is significantly degraded, which may affect its accuracy in localizing a mobile tumor for high-precision, image-guided radiation therapy (IGRT). The purpose of this study was to investigate the impact of scanning parameters (hereinafter collectively referred to as scanning sequence) and breathing patterns on the image quality and the accuracy of the computed tumor trajectory for a commercial 4D CBCT system, in preparation for its clinical implementation. We simulated a series of periodic and aperiodic sinusoidal breathing patterns with a respiratory motion phantom. The aperiodic pattern was created by varying the period or amplitude of individual sinusoidal breathing cycles. 4D CBCT scans of the phantom were acquired with a manufacturer-supplied scanning sequence (4D-S-slow) and two in-house modified scanning sequences (4D-M-slow and 4D-M-fast). While 4D-S-slow used a small field of view (FOV), partial rotation (200°), and no imaging filter, 4D-M-slow and 4D-M-fast used a medium FOV, full rotation, and the F1 filter. The scanning speed was doubled in 4D-M-fast (100°/min gantry rotation). The image quality of the 4D CBCT scans was evaluated using the contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and motion blurring ratio (MBR). The trajectory of the moving target was reconstructed by registering each phase of the 4D CBCT with a reference CT. Root-mean-squared-error (RMSE) analysis was used to quantify its accuracy. A significant decrease in CNR and SNR from 3D CBCT to 4D CBCT was observed. The 4D-S-slow and 4D-M-fast scans had comparable image quality, while the 4D-M-slow scans performed better due to the doubled number of projections. Both CNR and SNR decreased slightly as the breathing period increased, while no dependence on the amplitude was observed. The difference of both CNR and SNR
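The quality and accuracy metrics named above have standard definitions, sketched here in numpy. These are generic forms; the ROI selection and exact normalizations used by the authors are not specified in the abstract:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between a target ROI and background."""
    return abs(roi.mean() - background.mean()) / background.std()

def trajectory_rmse(measured, reference):
    """Root-mean-squared error between a reconstructed tumor trajectory
    and the programmed phantom motion (both N x 3 arrays, in mm)."""
    d = np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```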

  9. Computational biomechanics and experimental validation of vessel deformation based on 4D-CT imaging of the porcine aorta

    NASA Astrophysics Data System (ADS)

    Hazer, Dilana; Finol, Ender A.; Kostrzewa, Michael; Kopaigorenko, Maria; Richter, Götz-M.; Dillmann, Rüdiger

    2009-02-01

    Cardiovascular disease results from pathological biomechanical conditions and fatigue of the vessel wall. Image-based computational modeling provides a physical and realistic insight into the patient-specific biomechanics and enables accurate predictive simulations of development, growth and failure of cardiovascular disease. An experimental validation is necessary for the evaluation and the clinical implementation of such computational models. In the present study, we have implemented dynamic Computed-Tomography (4D-CT) imaging and catheter-based in vivo measured pressures to numerically simulate and experimentally evaluate the biomechanics of the porcine aorta. The computations are based on the Finite Element Method (FEM) and simulate the arterial wall response to the transient pressure-based boundary condition. They are evaluated by comparing the numerically predicted wall deformation and that calculated from the acquired 4D-CT data. The dynamic motion of the vessel is quantified by means of the hydraulic diameter, analyzing sequences at 5% increments over the cardiac cycle. Our results show that accurate biomechanical modeling is possible using FEM-based simulations. The RMS error of the computed hydraulic diameter at five cross-sections of the aorta was 0.188, 0.252, 0.280, 0.237 and 0.204 mm, which is equivalent to 1.7%, 2.3%, 2.7%, 2.3% and 2.0%, respectively, when expressed as a function of the time-averaged hydraulic diameter measured from the CT images. The present investigation is a first attempt to simulate and validate vessel deformation based on realistic morphological data and boundary conditions. An experimentally validated system would help in evaluating individual therapies and optimal treatment strategies in the field of minimally invasive endovascular surgery.
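The two quantities used for validation, the hydraulic diameter and the RMS error expressed as a percentage of the time-averaged measured diameter, can be written down directly. A short sketch with the generic definitions (not the authors' FEM pipeline):

```python
import numpy as np

def hydraulic_diameter(area, perimeter):
    """D_h = 4A / P; for a circular cross-section this equals the diameter."""
    return 4.0 * area / perimeter

def rms_error_percent(computed, measured):
    """RMS difference over the cardiac cycle, returned both in the input
    units (e.g. mm) and as a percentage of the time-averaged measurement."""
    c, m = np.asarray(computed, float), np.asarray(measured, float)
    rms = float(np.sqrt(np.mean((c - m) ** 2)))
    return rms, 100.0 * rms / m.mean()
```

For a circle of radius 2 (area 4π, perimeter 4π) the hydraulic diameter is 4, matching the geometric diameter.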

  10. Multidimensional immunolabeling and 4D time-lapse imaging of vital ex vivo lung tissue

    PubMed Central

    Vierkotten, Sarah; Lindner, Michael; Königshoff, Melanie; Eickelberg, Oliver

    2015-01-01

    During the last decades, the study of cell behavior was largely accomplished in uncoated or extracellular matrix (ECM)-coated plastic dishes. To date, considerable cell biological efforts have tried to model in vitro the natural microenvironment found in vivo. For the lung, explants cultured ex vivo as lung tissue cultures (LTCs) provide a three-dimensional (3D) tissue model containing all cells in their natural microenvironment. Techniques for assessing the dynamic live interaction between ECM and cellular tissue components, however, are still missing. Here, we describe specific multidimensional immunolabeling of living 3D-LTCs, derived from healthy and fibrotic mouse lungs, as well as patient-derived 3D-LTCs, and concomitant real-time four-dimensional multichannel imaging thereof. This approach allowed the evaluation of dynamic interactions between mesenchymal cells and macrophages with their ECM. Furthermore, fibroblasts transiently expressing focal adhesions markers incorporated into the 3D-LTCs, paving new ways for studying the dynamic interaction between cellular adhesions and their natural-derived ECM. A novel protein transfer technology (FuseIt/Ibidi) shuttled fluorescently labeled α-smooth muscle actin antibodies into the native cells of living 3D-LTCs, enabling live monitoring of α-smooth muscle actin-positive stress fibers in native tissue myofibroblasts residing in fibrotic lesions of 3D-LTCs. Finally, this technique can be applied to healthy and diseased human lung tissue, as well as to adherent cells in conventional two-dimensional cell culture. This novel method will provide valuable new insights into the dynamics of ECM (patho)biology, studying in detail the interaction between ECM and cellular tissue components in their natural microenvironment. PMID:26092995

  11. SU-E-J-74: Impact of Respiration-Correlated Image Quality On Tumor Motion Reconstruction in 4D-CBCT: A Phantom Study

    SciTech Connect

    Lee, S; Lu, B; Samant, S

    2014-06-01

Purpose: To investigate the effects of scanning parameters and respiratory patterns on image quality for 4-dimensional cone-beam computed tomography (4D-CBCT) imaging, and to assess the accuracy of computed tumor trajectories for lung imaging using registration of phased 4D-CBCT imaging with the treatment planning CT. Methods: We simulated periodic and non-sinusoidal respirations with various breathing periods and amplitudes using a respiratory phantom (Quasar, Modus Medical Devices Inc) to acquire respiration-correlated 4D-CBCT images. 4D-CBCT scans (Elekta Oncology Systems Ltd) were performed with different scanning parameters for collimation size (e.g., small and medium fields of view) and scanning speed (e.g., slow 50°/min, fast 100°/min). Using a standard CBCT QA phantom (Catphan500, The Phantom Laboratory), the image quality of all phases in 4D-CBCT was evaluated with the contrast-to-noise ratio (CNR) for lung tissue and uniformity in each module. Using the respiratory phantom, the target imaging in 4D-CBCT was compared to the 3D-CBCT target image. The target trajectory from 10 respiratory phases in 4D-CBCT was extracted using an automatic image registration, and its accuracy was subsequently assessed by comparison with the actual motion of the target. Results: Image analysis indicated that a short respiration with a small amplitude resulted in superior CNR and uniformity. Smaller variation of CNR and uniformity was present amongst different respiratory phases. The small field of view with a partial scan at slow speed can improve CNR but degrades uniformity. Large amplitude of respiration can degrade image quality. RMS of voxel densities in the tumor area of 4D-CBCT images between sinusoidal and non-sinusoidal motion exhibited no significant difference. The maximum displacement errors of motion trajectories were less than 1.0 mm and 13.5 mm for sinusoidal and non-sinusoidal breathings, respectively. The accuracy of motion reconstruction showed good overall

  12. First Steps Toward Ultrasound-Based Motion Compensation for Imaging and Therapy: Calibration with an Optical System and 4D PET Imaging.

    PubMed

    Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter

    2015-01-01

Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than a respiration belt or optical tracking. The goal of this project is to develop US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system - even if the US probe is moving due to respiration. It is shown that the US tracking signal is as adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Eventually, we show in vitro that US-based motion tracking in absolute room coordinates with a moving US transducer is feasible. PMID:26649277
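The calibration workflow amounts to chaining rigid transforms: a target tracked in US image coordinates is mapped through the probe frame into room coordinates, with the probe pose updated as it moves with respiration. A toy homogeneous-matrix sketch (frame names, the z-rotation parameterization and all numbers are invented for illustration):

```python
import numpy as np

def rigid(rot_deg_z, t):
    """4x4 homogeneous transform: rotation about z by rot_deg_z degrees,
    followed by translation t (toy parameterization for the sketch)."""
    a = np.radians(rot_deg_z)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    m[:3, 3] = t
    return m

# Fixed US-image-to-probe transform from a one-time calibration (invented).
T_probe_to_us = rigid(0, [0, 0, 20])

def target_in_room(p_us, T_room_to_probe):
    """Map a point tracked in US image coordinates into room coordinates,
    given the current (possibly moving) probe pose."""
    p = np.append(p_us, 1.0)
    return (T_room_to_probe @ T_probe_to_us @ p)[:3]

# Probe displaced 5 mm by respiration: the room-frame target follows.
p_room = target_in_room([1, 2, 3], rigid(0, [0, 5, 0]))
```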

  14. The development of a population of 4D pediatric XCAT phantoms for CT imaging research and optimization

    NASA Astrophysics Data System (ADS)

    Norris, Hannah; Zhang, Yakun; Frush, Jack; Sturgeon, Gregory M.; Minhas, Anum; Tward, Daniel J.; Ratnanather, J. Tilak; Miller, M. I.; Frush, Donald; Samei, Ehsan; Segars, W. Paul

    2014-03-01

With the increased use of CT examinations, the associated radiation dose has become a large concern, especially for pediatrics. Much research has focused on reducing radiation dose through new scanning and reconstruction methods. Computational phantoms provide an effective and efficient means for evaluating image quality, patient-specific dose, and organ-specific dose in CT. We previously developed a set of highly detailed 4D reference pediatric XCAT phantoms at ages of newborn, 1, 5, 10, and 15 years, with organ and tissue masses matched to ICRP Publication 89 values. We now extend this reference set to a series of 64 pediatric phantoms of a variety of ages and height and weight percentiles, representative of the public at large. High-resolution PET-CT data were reviewed by an experienced practicing radiologist for anatomic regularity and were then segmented with manual and semi-automatic methods to form a target model. A Multi-Channel Large Deformation Diffeomorphic Metric Mapping (MC-LDDMM) algorithm was used to calculate the transform from the best age-matching pediatric reference phantom to the patient target. The transform was used to complete the target, filling in the non-segmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. 3D CT data were simulated from the phantoms to demonstrate their ability to generate realistic, patient-quality imaging data. The population of pediatric phantoms developed in this work provides a vital tool to investigate dose reduction techniques in 3D and 4D pediatric CT.

  15. SU-E-J-153: Reconstructing 4D Cone Beam CT Images for Clinical QA of Lung SABR Treatments

    SciTech Connect

    Beaudry, J; Bergman, A; Cropp, R

    2015-06-15

Purpose: To verify that the Planning Target Volume (PTV) and Internal Gross Tumor Volume (IGTV) fully enclose a moving lung tumor volume as visualized on a pre-SABR treatment verification 4D Cone Beam CT. Methods: Daily 3DCBCT image sets were acquired immediately prior to treatment for 10 SABR lung patients using the on-board imaging system integrated into a Varian TrueBeam (v1.6: no 4DCBCT module available). Respiratory information was acquired during the scan using the Varian RPM system. The CBCT projections were sorted into 8 bins offline, both by breathing phase and amplitude, using in-house software. An iterative algorithm based on total variation minimization, implemented in the open-source reconstruction toolkit (RTK), was used to reconstruct the binned projections into 4DCBCT images. The relative tumor motion was quantified by tracking the centroid of the tumor volume in each 4DCBCT image. Following CT-CBCT registration, the planning CT volumes were compared to the location of the CBCT tumor volume as it moved along its breathing trajectory. An overlap metric quantified the ability of the planned PTV and IGTV to contain the tumor volume at treatment. Results: The 4DCBCT reconstructed images visibly show the tumor motion. The mean overlap between the planned PTV (IGTV) and the 4DCBCT tumor volumes was 100% (94%), with an uncertainty of 5% from the 4DCBCT tumor volume contours. Examination of the tumor motion and the overlap metric verifies that the IGTV drawn at the planning stage is a good representation of the tumor location at treatment. Conclusion: It is difficult to compare GTV volumes from a 4DCBCT and a planning CT due to image quality differences. However, it was possible to conclude that the GTV remained within the PTV 100% of the time, giving the treatment staff confidence that SABR lung treatments are being delivered accurately.
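The overlap metric, the fraction of the tumor volume contained in the planned PTV or IGTV, reduces to a voxel-mask intersection. A minimal sketch (the 1D-simplified masks and grid are invented for illustration):

```python
import numpy as np

def overlap_fraction(container, tumor):
    """Fraction of the tumor volume enclosed by a planned volume;
    both are boolean voxel masks on the same grid."""
    return float(np.logical_and(container, tumor).sum() / tumor.sum())

# Hypothetical 1D-simplified masks: PTV spans voxels 2-8, tumor spans 4-9,
# so 5 of the 6 tumor voxels fall inside the PTV.
ptv = np.zeros(12, bool); ptv[2:9] = True
gtv = np.zeros(12, bool); gtv[4:10] = True
```

Evaluating the metric at every breathing phase and averaging gives the per-patient overlap figures the study reports.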

  16. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics between 19 and 25 years old in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean ± standard deviation: 185 ± 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
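The 95th-percentile intensity scaling used for inter-scan normalization can be sketched in a few lines. This is the generic form only; the paper combines it with bias correction and MRF-regularized segmentation, and the example arrays are invented:

```python
import numpy as np

def normalize_p95(follow_up, baseline):
    """Scale a follow-up scan so its 95th-percentile intensity matches
    the baseline scan, making longitudinal intensities comparable."""
    scale = np.percentile(baseline, 95) / np.percentile(follow_up, 95)
    return follow_up * scale

baseline = np.arange(100.0)          # stand-in intensity distributions
follow_up = 2.0 * np.arange(100.0)   # same anatomy, doubled receiver gain
matched = normalize_p95(follow_up, baseline)
```

Matching a high (but not maximal) percentile rather than the maximum makes the scaling robust to a few bright outlier voxels.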

  17. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results.

    PubMed

    Manhart, Michael T; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K; Hornegger, Joachim; Doerfler, Arnd

    2014-08-21

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.
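A joint bilateral filter in its generic form takes its range weights from a low-noise guide image, so edges present in the guide are preserved while noise in the input is averaged away. The sketch below is a textbook 2D version, not the authors' FDK-JBF implementation; the guide here stands in for, e.g., a temporal average of the dynamic series, and all parameter values are illustrative:

```python
import numpy as np

def joint_bilateral(noisy, guide, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Joint bilateral filter: spatial Gaussian weights times range weights
    computed from the guide image, so guide edges are not blurred."""
    h, w = noisy.shape
    pad = radius
    n_pad = np.pad(noisy, pad, mode="edge")
    g_pad = np.pad(guide, pad, mode="edge")
    out = np.empty_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            n_patch = n_pad[y:y + 2*pad + 1, x:x + 2*pad + 1]
            g_patch = g_pad[y:y + 2*pad + 1, x:x + 2*pad + 1]
            rng_w = np.exp(-((g_patch - guide[y, x])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * n_patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(2)
guide = np.zeros((20, 20)); guide[:, 10:] = 1.0       # clean step edge
noisy = guide + 0.05 * rng.standard_normal(guide.shape)
filtered = joint_bilateral(noisy, guide)
```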

  18. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, the authors focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. The authors complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
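The Perona-Malik scheme the study builds on is compact enough to state directly: diffusion is strong where the local gradient is small (flat, noisy regions) and suppressed across large gradients (edges). A minimal explicit-scheme sketch with the exponential diffusivity; the parameter values and test image are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, dt=0.2):
    """Explicit-scheme Perona-Malik diffusion with diffusivity
    g(s) = exp(-(s/kappa)^2): smooths noise while preserving edges.
    Uses periodic boundaries via np.roll; dt <= 0.25 for stability."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(3)
clean = np.zeros((40, 40)); clean[:, 20:] = 1.0       # step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy, n_iter=20, kappa=0.3, dt=0.2)
```

With kappa well above the noise amplitude but below the edge contrast, the flat regions are smoothed while the step at column 20 stays sharp.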

  20. SU-D-207-03: Development of 4D-CBCT Imaging System with Dual Source KV X-Ray Tubes

    SciTech Connect

    Nakamura, M; Ishihara, Y; Matsuo, Y; Ueki, N; Iizuka, Y; Mizowaki, T; Hiraoka, M

    2015-06-15

Purpose: The purposes of this work are to develop a 4D-CBCT imaging system with orthogonal dual source kV X-ray tubes, and to determine the imaging doses from 4D-CBCT scans. Methods: Dual source kV X-ray tubes were used for the 4D-CBCT imaging. The maximum CBCT field of view was 200 mm in diameter and 150 mm in length, and the imaging parameters were 110 kV, 160 mA and 5 ms. The rotational angle was 105°, the rotational speed of the gantry was 1.5°/s, the gantry rotation time was 70 s, and the image acquisition interval was 0.3°. The observed amplitude of infrared marker motion during respiration was used to sort each image into eight respiratory phase bins. The EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc packages were used to simulate the kV X-ray dose distributions of 4D-CBCT imaging. The kV X-ray dose distributions were calculated for 9 lung cancer patients based on the planning CT images with a dose calculation grid size of 2.5 × 2.5 × 2.5 mm. The dose covering a 2-cc volume of skin (D2cc), defined as the inner 5 mm of the skin surface with the exception of bone structure, was assessed. Results: A moving object was well identified on 4D-CBCT images in a phantom study. Given a gantry rotational angle of 105° and the configuration of the kV X-ray imaging subsystems, both kV X-ray fields overlapped at part of the skin surface. The D2cc for the 4D-CBCT scans was in the range 73.8–105.4 mGy. The linear correlation coefficient between 1000 minus the averaged SSD during CBCT scanning and D2cc was −0.65 (with a slope of −0.17) for the 4D-CBCT scans. Conclusion: We have developed a 4D-CBCT imaging system with dual source kV X-ray tubes. The total imaging dose with 4D-CBCT scans was up to 105.4 mGy.
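D2cc, the minimum dose received by the hottest 2 cc of the structure, is a simple order statistic over the structure's dose grid. A generic sketch (the voxel size and dose values are invented; the study's skin definition and Monte Carlo dose grid are not reproduced here):

```python
import numpy as np

def d2cc(dose, voxel_volume_cc):
    """Minimum dose to the hottest 2 cc: sort structure doses in
    descending order and read the value at the 2-cc cumulative volume."""
    flat = np.sort(np.asarray(dose, float).ravel())[::-1]
    n = int(np.ceil(2.0 / voxel_volume_cc))   # number of voxels filling 2 cc
    return flat[n - 1]

# Hypothetical skin dose sample: 0.1-cc voxels, so 2 cc = the 20 hottest voxels.
dose = np.linspace(0, 99, 100)   # mGy
d2 = d2cc(dose, 0.1)
```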

  1. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    PubMed

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper, a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients is changed in WBCT as it has more directions. The differences in parent–child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method has been evaluated with fMRI images; the results indicated that the processing time of the proposed method is less than that of existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method could also yield better compression performance than the wavelet-based SPECK coder. The objective results showed that the proposed method could attain a good compression ratio while maintaining a peak signal-to-noise ratio value above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all
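The idea behind the modified binary array step, coding the most frequently occurring value of a sequence only once and keeping just the exceptions, can be sketched as a toy encoder/decoder. This is a simplification for illustration, not the paper's actual bitstream format, and the coefficient values are invented:

```python
import numpy as np

def encode_mode(arr):
    """Store the most frequent value once, plus the positions and values
    of the exceptions; a toy stand-in for the paper's binary-array coder."""
    vals, counts = np.unique(arr, return_counts=True)
    mode = vals[np.argmax(counts)]
    idx = np.flatnonzero(arr != mode)
    return mode, arr.size, idx, arr[idx]

def decode_mode(mode, size, idx, exceptions):
    """Rebuild the sequence: fill with the mode, then restore exceptions."""
    out = np.full(size, mode)
    out[idx] = exceptions
    return out

coeffs = np.array([5, 5, 5, 2, 5, 7, 5])   # invented quantized coefficients
packed = encode_mode(coeffs)
```

After quantization most coefficients share one value, so storing that value once plus a short exception list is much smaller than the raw sequence.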

  2. Analysis of the advantage of individual PTVs defined on axial 3D CT and 4D CT images for liver cancer.

    PubMed

    Li, Fengxiang; Li, Jianbin; Xing, Jun; Zhang, Yingjie; Fan, Tingyong; Xu, Min; Shang, Dongping; Liu, Tonghai; Song, Jinlong

    2012-11-08

    The purpose of this study was to compare positional and volumetric differences of planning target volumes (PTVs) defined on axial three-dimensional CT (3D CT) and four-dimensional CT (4D CT) for liver cancer. Fourteen patients with liver cancer underwent 3D CT and 4D CT simulation scans during free breathing. The tumor motion was measured by 4D CT. Three internal target volumes (ITVs) were produced based on the clinical target volume from 3D CT (CTV3D): i) a conventional ITV (ITVconv) was produced by adding 10 mm in the CC direction and 5 mm in the LR and AP directions to CTV3D; ii) a specific ITV (ITVspec) was created using a specific margin in the transaxial direction; iii) ITVvector was produced by adding an isotropic margin derived from the individual tumor motion vector. ITV4D was defined on the fusion of CTVs on all phases of 4D CT. PTVs were generated by adding a 5 mm setup margin to the ITVs. The average centroid shifts between PTVs derived from 3D CT and PTV4D in the left-right (LR), anterior-posterior (AP), and cranial-caudal (CC) directions were close to zero. Comparing PTV4D to PTVconv, PTVspec, and PTVvector resulted in a decrease in volume size by 33.18% ± 12.39%, 24.95% ± 13.01%, and 48.08% ± 15.32%, respectively. The mean degrees of inclusion (DI) of PTV4D in PTVconv, PTV4D in PTVspec, and PTV4D in PTVvector were 0.98, 0.97, and 0.99, which showed no significant correlation to the tumor motion vector (r = -0.470, 0.259, and 0.244; p = 0.090, 0.371, and 0.401). The mean DIs of PTVconv in PTV4D, PTVspec in PTV4D, and PTVvector in PTV4D were 0.66, 0.73, and 0.52. The size of the individual PTV from 4D CT is significantly less than that of PTVs from 3D CT. The position of targets derived from axial 3D CT images scatters around the center of 4D targets randomly. Compared to the conventional PTV, the use of 3D CT-based PTVs with individual margins cannot significantly reduce normal tissues being unnecessarily irradiated, but may contribute to reducing the risk of missing targets for
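
    Two quantities in this abstract are simple to compute on binary volumes: the degree of inclusion (DI) of one volume in another, and the isotropic margin derived from the motion vector. A minimal sketch, assuming boolean voxel masks (names are illustrative):

```python
import numpy as np

def degree_of_inclusion(a, b):
    """DI of volume A in volume B: fraction of A's voxels that lie inside B."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return np.logical_and(a, b).sum() / a.sum()

def motion_vector_margin(lr_mm, ap_mm, cc_mm):
    """Isotropic margin from the 3D tumor motion vector magnitude."""
    return float(np.sqrt(lr_mm**2 + ap_mm**2 + cc_mm**2))
```

A DI of 1.0 means the first volume is fully covered by the second; values below 1.0 quantify the risk of missing target voxels.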

  3. Denoising and covariance estimation of single particle cryo-EM images.

    PubMed

    Bhamre, Tejal; Zhang, Teng; Singer, Amit

    2016-07-01

    The problem of image restoration in cryo-EM entails correcting for the effects of the Contrast Transfer Function (CTF) and noise. Popular methods for image restoration include 'phase flipping', which corrects only for the Fourier phases but not amplitudes, and Wiener filtering, which requires the spectral signal to noise ratio. We propose a new image restoration method which we call 'Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem. Our estimation procedure for the covariance matrix is new and successfully corrects for the CTF. We demonstrate the efficacy of CWF by applying it to restore both simulated and experimental cryo-EM images. Results with experimental datasets demonstrate that CWF provides a good way to evaluate the particle images and to see what the dataset contains even without 2D classification and averaging.
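
    CWF itself builds an estimated covariance matrix into the restoration, but the underlying operation is the classical Fourier-domain Wiener filter. A minimal 1-D deconvolution sketch (the CTF is replaced by a generic kernel `h`, and a scalar SNR stands in for the spectral signal-to-noise ratio; names are illustrative):

```python
import numpy as np

def wiener_deconvolve(y, h, snr):
    """Classical Wiener deconvolution: X_hat = conj(H) * Y / (|H|^2 + 1/SNR)."""
    H = np.fft.fft(h, n=len(y))          # transfer function, zero-padded to signal length
    Y = np.fft.fft(y)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener gain per frequency
    return np.real(np.fft.ifft(G * Y))
```

With a delta kernel and high SNR the filter reduces to (nearly) the identity, which is a convenient sanity check.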

  4. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
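
    The multiscale representation referred to here pairs each coarse-scale count with its children at the next finer scale; the child/parent ratios are the quantities modeled by the rate-ratio mixture densities. A minimal binary-tree sketch for 1-D counts of power-of-two length (function names are illustrative):

```python
def multiscale_counts(counts):
    """Binary-tree multiscale pyramid: each parent is the sum of its two children."""
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return levels  # levels[-1][0] is the total count

def rate_ratios(levels):
    """Left-child / parent ratios at every scale (0.5 where the parent is empty)."""
    ratios = []
    for fine, coarse in zip(levels[:-1], levels[1:]):
        ratios.append([fine[2 * i] / coarse[i] if coarse[i] else 0.5
                       for i in range(len(coarse))])
    return ratios
```

In the paper these ratios are not used directly; their densities are modeled as mixtures of conjugate (beta or Dirichlet) distributions and estimated by the regularized EM algorithm.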

  5. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood gray-level difference matrix (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET and 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
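
    As a rough illustration of the kind of feature discussed here, below is a simplified 2-D rendering of NGLDM coarseness in the spirit of the Amadasun–King definition: for each gray level, accumulate the absolute difference between that level and its 3×3 neighborhood mean over interior pixels, then invert the probability-weighted sum. This is an assumption-laden sketch, not the exact implementation used in the study (which works on 3D volumes with specific binning).

```python
import numpy as np

def ngldm_coarseness(img, eps=1e-8):
    """Simplified NGLDM coarseness: 1 / (eps + sum_i p_i * s_i)."""
    img = np.asarray(img, dtype=float)
    s = {}  # per-level accumulated |level - neighborhood mean|
    n = {}  # per-level pixel counts
    H, W = img.shape
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            g = img[r, c]
            nb = img[r - 1:r + 2, c - 1:c + 2]
            mean_nb = (nb.sum() - g) / 8.0      # 3x3 mean excluding the center
            s[g] = s.get(g, 0.0) + abs(g - mean_nb)
            n[g] = n.get(g, 0) + 1
    total = sum(n.values())
    return 1.0 / (eps + sum((n[g] / total) * s[g] for g in s))
```

A flat region yields a very large coarseness (no local gray-level differences), while a rapidly varying texture yields a small one.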

  6. SU-E-J-28: Gantry Speed Significantly Affects Image Quality and Imaging Dose for 4D Cone-Beam Computed Tomography On the Varian Edge Platform

    SciTech Connect

    Santoso, A; Song, K; Gardner, S; Chetty, I; Wen, N

    2015-06-15

    Purpose: 4D-CBCT facilitates assessment of tumor motion at treatment position. We investigated the effect of gantry speed on 4D-CBCT image quality and dose using the Varian Edge On-Board Imager (OBI). Methods: A thoracic protocol was designed using a 125 kVp spectrum. Image quality parameters were obtained via 4D acquisition using a Catphan phantom with a gating system. A sinusoidal waveform was executed with a five-second period and superior-inferior motion. 4D-CBCT scans were sorted into 4 and 10 phases. Image quality metrics included spatial resolution, contrast-to-noise ratio (CNR), uniformity index (UI), Hounsfield unit (HU) sensitivity, and RMS error (RMSE) of motion amplitude. Dosimetry was accomplished using Gafchromic XR-QA2 films within a CIRS Thorax phantom, which was placed on the gating phantom using the same motion waveform. Results: High contrast resolution decreased linearly from 5.93 to 4.18 lp/cm, 6.54 to 4.18 lp/cm, and 5.19 to 3.91 lp/cm for averaged, 4 phase, and 10 phase 4D-CBCT volumes, respectively, as gantry speed increased from 1.0 to 6.0 degs/sec. CNRs decreased linearly from 4.80 to 1.82 as the gantry speed increased from 1.0 to 6.0 degs/sec. No significant variations in UIs, HU sensitivities, or RMSEs were observed with variable gantry speed. Ion chamber measurements compared to film yielded small percent differences in plastic water regions (0.1–9.6%), larger percent differences in lung equivalent regions (7.5–34.8%), and significantly larger percent differences in bone equivalent regions (119.1–137.3%). Ion chamber measurements decreased from 17.29 to 2.89 cGy with increasing gantry speed from 1.0 to 6.0 degs/sec. Conclusion: Maintaining technique factors while changing gantry speed changes the number of projections used for reconstruction. Increasing the number of projections by decreasing gantry speed decreases noise; however, dose is increased. The future of 4D-CBCT’s clinical utility relies on further
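
    The CNR and UI metrics quoted above are straightforward ROI statistics. A minimal sketch of one common pair of definitions (exact definitions vary by QA protocol, so treat these as illustrative):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: contrast between ROI means over background noise."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

def uniformity_index(center_roi, edge_roi):
    """Percent HU difference of edge vs. center ROI, relative to the center."""
    return (edge_roi.mean() - center_roi.mean()) / center_roi.mean() * 100.0
```

For the Catphan-style measurement in this study, the ROIs would be drawn on contrast inserts and the uniformity module of the reconstructed volumes.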

  7. Real-time image-content-based beamline control for smart 4D X-ray imaging.

    PubMed

    Vogelgesang, Matthias; Farago, Tomas; Morgeneyer, Thilo F; Helfen, Lukas; Dos Santos Rolo, Tomy; Myagotin, Anton; Baumbach, Tilo

    2016-09-01

    Real-time processing of X-ray image data acquired at synchrotron radiation facilities allows for smart high-speed experiments. This includes workflows covering parameterized and image-based feedback-driven control up to the final storage of raw and processed data. Nevertheless, there is presently no system that supports an efficient construction of such experiment workflows in a scalable way. Thus, here an architecture based on a high-level control system that manages low-level data acquisition, data processing and device changes is described. This system is suitable for routine as well as prototypical experiments, and provides specialized building blocks to conduct four-dimensional in situ, in vivo and operando tomography and laminography. PMID:27577784

  8. SU-E-J-154: Image Quality Assessment of Contrast-Enhanced 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    SciTech Connect

    Choi, W; Xue, M; Patel, K; Regine, W; Wang, J; D’Souza, W; Lu, W; Kang, M; Klahr, P

    2015-06-15

    Purpose: This study presents quantitative and qualitative assessment of the image qualities in contrast-enhanced (CE) 3D-CT, 4D-CT and CE 4D-CT to identify feasibility for replacing the clinical standard simulation with a single CE 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation. Methods: Ten PDA patients were enrolled and underwent three CT scans: a clinical standard pair of CE 3D-CT immediately followed by a 4D-CT, and a CE 4D-CT one week later. Physicians qualitatively evaluated the general image quality and regional vessel definitions and gave a score from 1 to 5. Next, physicians delineated the contours of the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (CE 3D-CT, 50% phase for 4D-CT and CE 4D-CT), then high density areas were automatically removed by thresholding at 500 HU and morphological operations. The pancreatic tumor contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and conspicuity (C, absolute difference of mean enhancement levels in P and T) were computed to quantitatively assess image quality. The Wilcoxon rank sum test was used to compare these quantities. Results: In qualitative evaluations, CE 3D-CT and CE 4D-CT scored equivalently (4.4±0.4 and 4.3±0.4) and both were significantly better than 4D-CT (3.1±0.6). In quantitative evaluations, the C values were higher in CE 4D-CT (28±19 HU, p=0.19 and 0.17) than the clinical standard pair of CE 3D-CT and 4D-CT (17±12 and 16±17 HU, p=0.65). In CE 3D-CT and CE 4D-CT, mean CNR (1.8±1.4 and 1.8±1.7, p=0.94) and mean SNR (5.8±2.6 and 5.5±3.2, p=0.71) both were higher than 4D-CT (CNR: 1.1±1.3, p<0.3; SNR: 3.3±2.1, p<0.1). The absolute enhancement levels for T and P were higher in CE 4D-CT (87, 82 HU) than in CE 3D-CT (60, 56) and 4D-CT (53, 70). Conclusions: The individually optimized CE 4D-CT is feasible and achieved comparable image qualities to the clinical standard simulation. This study was supported in part by Philips Healthcare.
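
    The conspicuity metric used here, including the 500 HU thresholding step, can be sketched in a few lines (function and parameter names are illustrative):

```python
import numpy as np

def conspicuity(tumor_hu, parenchyma_hu, hu_max=500.0):
    """C = |mean(P) - mean(T)| after removing high-density voxels (> hu_max)."""
    t = tumor_hu[tumor_hu <= hu_max]          # drop calcifications/stents etc.
    p = parenchyma_hu[parenchyma_hu <= hu_max]
    return abs(p.mean() - t.mean())
```

Higher conspicuity means a larger enhancement difference between normal parenchyma and tumor, i.e., an easier delineation task.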

  9. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    NASA Astrophysics Data System (ADS)

    Díaz, Felipe

    2015-09-01

    Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio might also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing complexity and memory requirements, but also for reducing noise, even allowing the elimination of spurious components. This article proposes the use of a system based on low-order singular value decomposition for reconstruction and noise reduction in MR imaging. The proposed method is evaluated using in vivo MRI data. Rebuilt images using less than 20% of the original data, with similar quality in terms of visual inspection, are presented. A quantitative evaluation of the method is also presented.
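
    The core operation described here, keeping only the dominant singular components of an image, is a few lines of numpy. A minimal sketch (this truncates the SVD of the image itself; the paper applies the idea in the Fourier domain of the MR acquisition):

```python
import numpy as np

def svd_denoise(img, rank):
    """Best rank-`rank` approximation of a 2-D image via truncated SVD."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[rank:] = 0.0                 # discard small singular values (noise-dominated)
    return (U * s) @ Vt
```

Because noise spreads energy across many small singular values while structure concentrates in a few large ones, truncation simultaneously compresses and denoises.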

  10. 4D-Imaging of the Lung: Reproducibility of Lesion Size and Displacement on Helical CT, MRI, and Cone Beam CT in a Ventilated Ex Vivo System

    SciTech Connect

    Biederer, Juergen; Dinkel, Julien; Remmert, Gregor; Jetter, Siri; Nill, Simeon; Moser, Torsten; Bendl, Rolf; Thierfelder, Carsten; Fabel, Michael; Oelfke, Uwe; Bock, Michael; Plathow, Christian; Bolte, Hendrik; Welzel, Thomas; Hoffmann, Beata; Hartmann, Guenter; Schlegel, Wolfgang; Debus, Juergen; Heller, Martin

    2009-03-01

    Purpose: Four-dimensional (4D) imaging is a key to motion-adapted radiotherapy of lung tumors. We evaluated in a ventilated ex vivo system how size and displacement of artificial pulmonary nodules are reproduced with helical 4D-CT, 4D-MRI, and linac-integrated cone beam CT (CBCT). Methods and Materials: Four porcine lungs with 18 agarose nodules (mean diameters 1.3-1.9 cm) were ventilated inside a chest phantom at 8/min and subject to 4D-CT (collimation 24 × 1.2 mm, slice/increment 1.5/0.8 mm, pitch 0.1, temporal resolution 0.5 s), 4D-MRI (echo-shared dynamic three-dimensional FLASH; repetition/echo time 2.13/0.72 ms, voxel size 2.7 × 2.7 × 4.0 mm, temporal resolution 1.4 s) and linac-integrated 4D-CBCT (720 projections, 3-min rotation, temporal resolution ≈1 s). Static CT without respiration served as control. Three observers recorded lesion size (RECIST diameters x/y/z) and axial displacement. Interobserver and interphase variation coefficients (IO/IP VC) of the measurements indicated reproducibility. Results: Mean x/y/z lesion diameters in cm were equal on static and dynamic CT (1.88/1.87; 1.30/1.39; 1.71/1.73; p > 0.05), but appeared larger on MRI and CBCT (2.06/1.95 [p < 0.05 vs. CT]; 1.47/1.28 [MRI vs. CT/CBCT p < 0.05]; 1.86/1.83 [CT vs. CBCT p < 0.05]). Interobserver VCs for lesion sizes were 2.54-4.47% (CT), 2.29-4.48% (4D-CT), 5.44-6.22% (MRI) and 4.86-6.97% (CBCT). Interphase VCs for lesion sizes ranged from 2.28% (4D-CT) to 10.0% (CBCT). Mean displacement in cm decreased from static CT (1.65) to 4D-CT (1.40), CBCT (1.23) and MRI (1.16). Conclusions: Lesion sizes are exactly reproduced with 4D-CT but overestimated on 4D-MRI and CBCT with a larger variability due to limited temporal and spatial resolution. All 4D modalities underestimate lesion displacement.
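
    The reproducibility statistic reported here, the variation coefficient (VC), is simply the sample standard deviation expressed as a percentage of the mean. A minimal pure-Python sketch:

```python
def variation_coefficient(measurements):
    """Interobserver/interphase VC in percent: 100 * sample std / mean."""
    n = len(measurements)
    m = sum(measurements) / n
    var = sum((x - m) ** 2 for x in measurements) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / m
```

Applied to the three observers' diameter readings (interobserver) or to readings across respiratory phases (interphase), this yields the percentage values quoted in the Results.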

  11. 4D Electron Tomography

    NASA Astrophysics Data System (ADS)

    Kwon, Oh-Hoon; Zewail, Ahmed H.

    2010-06-01

    Electron tomography provides three-dimensional (3D) imaging of noncrystalline and crystalline equilibrium structures, as well as elemental volume composition, of materials and biological specimens, including those of viruses and cells. We report the development of 4D electron tomography by integrating the fourth dimension (time resolution) with the 3D spatial resolution obtained from a complete tilt series of 2D projections of an object. The different time frames of tomograms constitute a movie of the object in motion, thus enabling studies of nonequilibrium structures and transient processes. The method was demonstrated using carbon nanotubes of a bracelet-like ring structure for which 4D tomograms display different modes of motion, such as breathing and wiggling, with resonance frequencies up to 30 megahertz. Applications can now make use of the full space-time range with the nanometer-femtosecond resolution of ultrafast electron tomography.

  12. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
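
    The projection angular spacing statistics mentioned (maximum, mean, and RMSE) can be computed directly from the sorted per-bin projection angles. A minimal sketch, assuming RMSE is measured as the root-mean-square deviation of gaps from their mean (one plausible reading of the abstract; names are illustrative):

```python
import numpy as np

def angular_gap_stats(angles_deg):
    """Max, mean, and RMS deviation of gaps between consecutive sorted angles."""
    a = np.sort(np.asarray(angles_deg, dtype=float))
    gaps = np.diff(a)                      # spacing between neighboring projections
    mean = gaps.mean()
    rmse = np.sqrt(((gaps - mean) ** 2).mean())
    return gaps.max(), mean, rmse
```

Large maximum gaps flag poorly sampled arcs within a respiratory bin, which is exactly where streaking artifacts tend to appear.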

  13. TU-F-17A-01: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - An Automatic Toolkit for Efficient and Robust Analysis of 4D Respiratory Motion

    SciTech Connect

    Wei, J; Yuan, A; Li, G

    2014-06-15

    Purpose: To provide an automatic image analysis toolkit to process thoracic 4-dimensional computed tomography (4DCT) and extract patient-specific motion information to facilitate investigational or clinical use of 4DCT. Methods: We developed an automatic toolkit in MATLAB to overcome the extra workload from the time dimension in 4DCT. This toolkit employs image/signal processing, computer vision, and machine learning methods to visualize, segment, register, and characterize lung 4DCT automatically or interactively. A fully automated 3D lung segmentation algorithm was designed, and 4D lung segmentation was achieved in batch mode. Voxel counting was used to calculate volume variations of the torso, lung and its air component, and local volume changes at the diaphragm and chest wall to characterize the breathing pattern. Segmented lung volumes in 12 patients were compared with those from a treatment planning system (TPS). Voxel conversion was introduced from CT# to other physical parameters, such as gravity-induced pressure, to create a secondary 4D image. A demon algorithm was applied in deformable image registration, and motion trajectories were extracted automatically. Calculated motion parameters were plotted with various templates. Machine learning algorithms, such as Naive Bayes and random forests, were implemented to study respiratory motion. This toolkit is complementary to and will be integrated with the Computational Environment for Radiotherapy Research (CERR). Results: The automatic 4D image/data processing toolkit provides a platform for analysis of 4D images and datasets. It processes 4D data automatically in batch mode and provides interactive visual verification for manual adjustments. The discrepancy in lung volume calculation between this toolkit and the TPS is <±2%, and the time saving is 1–2 orders of magnitude. Conclusion: A framework of 4D toolkit has been developed to analyze thoracic 4DCT automatically or interactively, facilitating both investigational
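
    The voxel-counting volume measurement described above amounts to multiplying the number of segmented voxels by the voxel volume. A minimal sketch (the toolkit is MATLAB; this is an equivalent numpy illustration):

```python
import numpy as np

def segmented_volume_cc(mask, voxel_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary segmentation in cm^3: voxel count times voxel volume."""
    vox_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    return np.asarray(mask, dtype=bool).sum() * vox_mm3 / 1000.0
```

Tracking this quantity per respiratory phase for the lung and its air component yields the breathing-pattern curves the toolkit plots.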

  14. SU-E-J-200: A Dosimetric Analysis of 3D Versus 4D Image-Based Dose Calculation for Stereotactic Body Radiation Therapy in Lung Tumors

    SciTech Connect

    Ma, M; Rouabhi, O; Flynn, R; Xia, J; Bayouth, J

    2014-06-01

    Purpose: To evaluate the dosimetric difference between 3D and 4D-weighted dose calculation using patient-specific respiratory traces and deformable image registration for stereotactic body radiation therapy in lung tumors. Methods: Two dose calculation techniques, 3D and 4D-weighted dose calculation, were used for dosimetric comparison for 9 lung cancer patients. The magnitude of the tumor motion varied from 3 mm to 23 mm. Breath-hold exhale CT was used for 3D dose calculation, with the ITV generated from the motion observed from 4D-CT. For the 4D-weighted calculation, the dose of each binned CT image from the ten breathing amplitudes was first recomputed using the same planning parameters as those used in the 3D calculation. The dose distribution of each binned CT was mapped to the breath-hold CT using deformable image registration. The 4D-weighted dose was computed by summing the deformed doses with the temporal probabilities calculated from their corresponding respiratory traces. Dosimetric evaluation criteria include lung V20, mean lung dose, and mean tumor dose. Results: Compared with the 3D calculation, lung V20, mean lung dose, and mean tumor dose using 4D-weighted dose calculation changed by −0.67% ± 2.13%, −4.11% ± 6.94% (−0.36 Gy ± 0.87 Gy), and −1.16% ± 1.36% (−0.73 Gy ± 0.85 Gy), respectively. Conclusion: This work demonstrates that the conventional 3D dose calculation method may overestimate the lung V20, MLD, and MTD. The absolute difference between 3D and 4D-weighted dose calculation in lung tumors may not be clinically significant. This research is supported by Siemens Medical Solutions USA, Inc and the Iowa Center for Research By Undergraduates.
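
    Once the per-phase doses have been deformed onto the reference (breath-hold) CT, the 4D-weighted summation itself is a probability-weighted average. A minimal sketch (assumes the doses are already co-registered arrays; names are illustrative):

```python
import numpy as np

def weighted_4d_dose(phase_doses, phase_probs):
    """Sum phase doses weighted by the temporal probability of each phase."""
    phase_probs = np.asarray(phase_probs, dtype=float)
    phase_probs = phase_probs / phase_probs.sum()   # normalize to sum to 1
    return sum(p * d for p, d in zip(phase_probs, phase_doses))
```

The phase probabilities come from the patient's respiratory trace, i.e., the fraction of the breathing cycle spent in each amplitude bin.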

  15. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
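
    Given a denoising filter, the PRNU fingerprint is estimated from the noise residuals of several images from the same camera. A minimal sketch of the standard maximum-likelihood estimator (the denoiser itself is supplied externally; names are illustrative):

```python
import numpy as np

def estimate_prnu(images, denoised):
    """ML PRNU estimate from residuals W_i = I_i - D(I_i):
    K = sum_i(W_i * I_i) / sum_i(I_i ** 2)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, dn in zip(images, denoised):
        w = img - dn                # noise residual containing the PRNU term
        num += w * img
        den += img ** 2
    return num / den
```

Source identification then correlates a query image's residual against this fingerprint; the paper's point is that the quality of the denoiser (e.g., a 3D collaborative filter vs. simpler filters) directly affects how clean the residuals, and hence the fingerprint, are.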

  16. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET

    NASA Astrophysics Data System (ADS)

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution in cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM based on 4D cardiac gated PET data alone, and to compensate for the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled using attenuation maps transformed with the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images as compared with no motion correction. Specifically, the MCDR method yields the best performance at all noise levels compared with the MCAR and MCBR methods. While MCBR reduces computational time dramatically, the resultant 4D cardiac gated PET images have overall inferior image quality compared to that from the MCAR and MCDR approaches in the ‘almost’ noise-free case. The MCBR method nevertheless has better noise handling properties than MCAR and provides better quantitative results in high noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational time.

  17. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET.

    PubMed

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution in cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM based on 4D cardiac gated PET data alone, and to compensate for the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled using attenuation maps transformed with the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images as compared with no motion correction. Specifically, the MCDR method yields the best performance at all noise levels compared with the MCAR and MCBR methods. While MCBR reduces computational time dramatically, the resultant 4D cardiac gated PET images have overall inferior image quality compared to that from the MCAR and MCDR approaches in the 'almost' noise-free case. The MCBR method nevertheless has better noise handling properties than MCAR and provides better quantitative results in high noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational time.
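
    The MCAR idea, reconstruct each gate, undo its motion, then combine, can be illustrated with a deliberately simplified sketch in which a known rigid shift stands in for the estimated non-rigid motion field (the real methods warp with a dense deformation; everything here is illustrative):

```python
import numpy as np

def correct_and_combine(gated_frames, shifts):
    """MCAR-style sketch: undo each gate's (here rigid) motion, then average."""
    ref = np.zeros_like(gated_frames[0], dtype=float)
    for frame, s in zip(gated_frames, shifts):
        ref += np.roll(frame, -s, axis=0)   # np.roll stands in for a non-rigid warp
    return ref / len(gated_frames)
```

Combining motion-corrected gates recovers the counts of the full scan without the blur that summing uncorrected gates would introduce.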

  18. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    PubMed

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.
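
    The wavelet processing evaluated here can be illustrated with the simplest case: a single-level Haar transform with soft thresholding of the detail coefficients. This is a generic sketch, not the specific wavelet pipeline optimized in the paper:

```python
import numpy as np

def haar_denoise(x, thresh):
    """Single-level Haar wavelet soft-threshold denoising (even-length 1-D signal)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)                       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With the threshold at zero the transform reconstructs the signal exactly; increasing the threshold trades resolution for noise suppression, which is precisely the accuracy/precision trade-off the paper quantifies.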

  19. Task-based evaluation of a 4D MAP-RBI-EM image reconstruction method for gated myocardial perfusion SPECT using a human observer study

    NASA Astrophysics Data System (ADS)

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M.; Tsui, Benjamin M. W.

  20. Task-based evaluation of a 4D MAP-RBI-EM image reconstruction method for gated myocardial perfusion SPECT using a human observer study.

    PubMed

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M; Tsui, Benjamin M W

    2015-09-01

    We evaluated the performance of a new 4D image reconstruction method for improved 4D gated myocardial perfusion (MP) SPECT using a task-based human observer study. We used a realistic 4D NURBS-based Cardiac-Torso (NCAT) phantom that models cardiac beating motion. Half of the population was normal; the other half had a regional hypokinetic wall motion abnormality. Noise-free and noisy projection data with 16 gates/cardiac cycle were generated using an analytical projector that included the effects of attenuation, collimator-detector response, and scatter (ADS), and were reconstructed using 3D FBP without ADS corrections and 3D OS-EM with ADS corrections, followed by different cut-off frequencies of a 4D linear post-filter. A 4D iterative maximum a posteriori rescaled-block (MAP-RBI)-EM image reconstruction method with ADS corrections was also used to reconstruct the projection data using various values of the weighting factor for its prior. The trade-offs between bias and noise were represented by the normalized mean squared error (NMSE) and averaged normalized standard deviation (NSDav), respectively. They were used to select reasonable ranges of the reconstructed images for use in a human observer study. The observers were trained with the simulated cine images and were instructed to rate their confidence in the absence or presence of a motion defect on a continuous scale. We then applied receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) index. The results showed significant differences in detection performance among the different NMSE-NSDav combinations, and the optimal trade-off from optimized reconstruction parameters corresponded to the maximum AUC value. The 4D MAP-RBI-EM with ADS correction, which had the best trade-off among the tested reconstruction methods, also had the highest AUC value, resulting in significantly better human observer detection performance when detecting regional myocardial wall motion abnormalities.
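
The ROC analysis described above reduces to a convenient identity: the AUC equals the probability that a randomly chosen abnormal case receives a higher confidence rating than a randomly chosen normal case, with ties counted as one half. A minimal sketch, using hypothetical observer ratings rather than the study's data:

```python
def auc_from_ratings(normal, abnormal):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    P(rating_abnormal > rating_normal), counting ties as 1/2."""
    pairs = [(a, n) for a in abnormal for n in normal]
    wins = sum(1.0 if a > n else 0.5 if a == n else 0.0 for a, n in pairs)
    return wins / len(pairs)

# Hypothetical confidence ratings on a 0-100 continuous scale.
normal = [10, 25, 30, 45, 50]
abnormal = [40, 60, 70, 80, 95]
print(auc_from_ratings(normal, abnormal))  # -> 0.92
```

An AUC of 0.5 corresponds to chance-level detection, 1.0 to perfect separation of the two populations.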

  1. Task-Based Evaluation of a 4D MAP-RBI-EM Image Reconstruction Method for Gated Myocardial Perfusion SPECT using a Human Observer Study

    PubMed Central

    Lee, Taek-Soo; Higuchi, Takahiro; Lautamäki, Riikka; Bengel, Frank M.; Tsui, Benjamin M. W.

    2015-01-01

    We evaluated the performance of a new 4D image reconstruction method for improved 4D gated myocardial perfusion (MP) SPECT using a task-based human observer study. We used a realistic 4D NURBS-based Cardiac-Torso (NCAT) phantom that models cardiac beating motion. Half of the population was normal; the other half had a regional hypokinetic wall motion abnormality. Noise-free and noisy projection data with 16 gates/cardiac cycle were generated using an analytical projector that included the effects of attenuation, collimator-detector response, and scatter (ADS), and were reconstructed using 3D FBP without ADS corrections and 3D OS-EM with ADS corrections, followed by different cut-off frequencies of a 4D linear post-filter. A 4D iterative maximum a posteriori rescaled-block (MAP-RBI)-EM image reconstruction method with ADS corrections was also used to reconstruct the projection data using various values of the weighting factor for its prior. The trade-offs between bias and noise were represented by the normalized mean squared error (NMSE) and averaged normalized standard deviation (NSDav), respectively. They were used to select reasonable ranges of the reconstructed images for use in a human observer study. The observers were trained with the simulated cine images and were instructed to rate their confidence in the absence or presence of a motion defect on a continuous scale. We then applied receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) index. The results showed significant differences in detection performance among the different NMSE-NSDav combinations, and the optimal trade-off from optimized reconstruction parameters corresponded to the maximum AUC value. The 4D MAP-RBI-EM with ADS correction, which had the best trade-off among the tested reconstruction methods, also had the highest AUC value, resulting in significantly better human observer detection performance when detecting regional myocardial wall motion abnormalities.

  2. Abdominal 4D Flow MR Imaging in a Breath Hold: Combination of Spiral Sampling and Dynamic Compressed Sensing for Highly Accelerated Acquisition

    PubMed Central

    Knight-Greenfield, Ashley; Jajamovich, Guido; Besa, Cecilia; Cui, Yong; Stalder, Aurélien; Markl, Michael; Taouli, Bachir

    2015-01-01

    Purpose To develop a highly accelerated phase-contrast cardiac-gated volume flow measurement (four-dimensional [4D] flow) magnetic resonance (MR) imaging technique based on spiral sampling and dynamic compressed sensing and to compare this technique with established phase-contrast imaging techniques for the quantification of blood flow in abdominal vessels. Materials and Methods This single-center prospective study was compliant with HIPAA and approved by the institutional review board. Ten subjects (nine men, one woman; mean age, 51 years; age range, 30–70 years) were enrolled. Seven patients had liver disease. Written informed consent was obtained from all participants. Two 4D flow acquisitions were performed in each subject, one with use of Cartesian sampling with respiratory tracking and the other with use of spiral sampling and a breath hold. Cartesian two-dimensional (2D) cine phase-contrast images were also acquired in the portal vein. Two observers independently assessed vessel conspicuity on phase-contrast three-dimensional angiograms. Quantitative flow parameters were measured by two independent observers in major abdominal vessels. Intertechnique concordance was quantified by using Bland-Altman and logistic regression analyses. Results There was moderate to substantial agreement in vessel conspicuity between 4D flow acquisitions in arteries and veins (κ = 0.71 and 0.61, respectively, for observer 1; κ = 0.71 and 0.44 for observer 2), whereas more artifacts were observed with spiral 4D flow (κ = 0.30 and 0.20). Quantitative measurements in abdominal vessels showed good equivalence between spiral and Cartesian 4D flow techniques (lower bound of the 95% confidence interval: 63%, 77%, 60%, and 64% for flow, area, average velocity, and peak velocity, respectively). For portal venous flow, spiral 4D flow was in better agreement with 2D cine phase-contrast flow (95% limits of agreement: −8.8 and 9.3 mL/sec, respectively) than was Cartesian 4D flow (95
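
The Bland-Altman limits of agreement quoted above are the mean of the paired differences plus or minus 1.96 standard deviations of those differences. A minimal sketch with hypothetical portal-vein flow values (not the study's measurements):

```python
import statistics

def bland_altman_limits(x, y):
    """95% limits of agreement between two measurement methods:
    mean difference +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical portal-vein flow (mL/sec) from the two 4D flow sequences.
spiral = [10.2, 12.5, 9.8, 14.1, 11.0]
cartesian = [9.9, 13.0, 10.4, 13.5, 11.8]
lo, hi = bland_altman_limits(spiral, cartesian)
print(round(lo, 2), round(hi, 2))
```

Narrower limits that bracket zero indicate better agreement between the two acquisition techniques.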

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
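
The accuracy metric reported above, percent volumetric overlap of the ground-truth volume, follows directly from binary segmentation masks; a sketch on toy masks (the Dice coefficient is added for comparison and is not taken from the abstract):

```python
import numpy as np

def volumetric_overlap(auto_mask, gt_mask):
    """Percent of the ground-truth volume covered by the auto-segmentation,
    plus the symmetric Dice coefficient."""
    inter = np.logical_and(auto_mask, gt_mask).sum()
    overlap = 100.0 * inter / gt_mask.sum()
    dice = 2.0 * inter / (auto_mask.sum() + gt_mask.sum())
    return overlap, dice

# Toy 3D masks: GT is an 8x8x8 cube; the auto-segmentation misses one slab.
gt = np.zeros((16, 16, 16), bool)
gt[4:12, 4:12, 4:12] = True
auto = np.zeros_like(gt)
auto[4:12, 4:12, 5:12] = True
ov, dice = volumetric_overlap(auto, gt)
print(round(ov, 1), round(dice, 3))  # -> 87.5 0.933
```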

  4. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1997-09-01

    The interpretation of the 'inverted' retina of primates as an 'optoretina' (a light-cone-transforming diffractive cellular 3D-phase grating) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as the adaptive levels of human vision. It is shown that the functional performances all become possible: trichromatism in photopic vision, monocular spatiotemporal 3D- and 4D-motion detection, and Fourier optical image transformation with extraction of invariances. For the transformation of light cones into reciprocal gratings, the spectral phase conditions become relevant first in the eikonal of the geometrical optical imaging before the retinal 3D-grating, then in the von Laue (and reciprocal von Laue) equation for 3D-grating optics inside the grating, and finally in the periodicity of Talbot-2/Fresnel planes in the near field behind the grating. It is becoming possible to technically realize, at least in some specific aspects, such a cortical optoretina sensor element with its typical hexagonal-concentric structure, which leads to these visual functions.

  5. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image

  6. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image

  7. IMRT treatment plans and functional planning with functional lung imaging from 4D-CT for thoracic cancer patients

    PubMed Central

    2013-01-01

    Background and purpose Currently, the inhomogeneity of the pulmonary function is not considered when treatment plans are generated in thoracic cancer radiotherapy. This study evaluates the dose of treatment plans on highly-functional volumes and performs functional treatment planning by incorporation of ventilation data from 4D-CT. Materials and methods Eleven patients were included in this retrospective study. Ventilation was calculated using 4D-CT. Two treatment plans were generated for each case, the first one without the incorporation of the ventilation and the second with it. The dose of the first plans was overlapped with the ventilation and analyzed. Highly-functional regions were avoided in the second treatment plans. Results For small targets in the first plans (PTV < 400 cc, 6 cases), all V5, V20 and the mean lung dose values for the highly-functional regions were lower than that of the total lung. For large targets, two out of five cases had higher V5 and V20 values for the highly-functional regions. All the second plans were within constraints. Conclusion Radiation treatments affect functional lung more seriously in large tumor cases. With compromise of dose to other critical organs, functional treatment planning to reduce dose in highly-functional lung volumes can be achieved. PMID:23281734
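
The V5, V20, and mean-lung-dose values compared above are standard dose-volume metrics; a sketch of how they fall out of a dose grid and a (functional) lung mask, with all arrays hypothetical:

```python
import numpy as np

def lung_dose_metrics(dose, mask):
    """Vx (% of the masked volume receiving >= x Gy) and mean dose in the mask."""
    d = dose[mask]
    v5 = 100.0 * (d >= 5.0).mean()
    v20 = 100.0 * (d >= 20.0).mean()
    return v5, v20, d.mean()

# Hypothetical dose grid (Gy) and a highly-functional-lung mask.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 40.0, size=(32, 32, 32))
mask = np.zeros(dose.shape, bool)
mask[8:24, 8:24, 8:24] = True
v5, v20, mld = lung_dose_metrics(dose, mask)
print(round(v5, 1), round(v20, 1), round(mld, 1))
```

Restricting the mask to highly-functional voxels, as in the study, yields the functional counterparts of these metrics.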

  8. A novel CT-FFR method for the coronary artery based on 4D-CT image analysis and structural and fluid analysis

    NASA Astrophysics Data System (ADS)

    Hirohata, K.; Kano, A.; Goryu, A.; Ooga, J.; Hongo, T.; Higashi, S.; Fujisawa, Y.; Wakai, S.; Arakita, K.; Ikeda, Y.; Kaminaga, S.; Ko, B. S.; Seneviratne, S. K.

    2015-03-01

    Noninvasive fractional flow reserve derived from CT coronary angiography (CT-FFR) has to date been typically performed using the principles of fluid analysis, in which a lumped parameter coronary vascular bed model is assigned to represent the impedance of the downstream coronary vascular networks absent in the computational domain for each coronary outlet. This approach may have a number of limitations. It may not account for the impact of myocardial contraction and relaxation during the cardiac cycle, patient-specific boundary conditions for coronary artery outlets, or vessel stiffness. We have developed a novel approach based on 4D-CT image tracking (registration) and structural and fluid analysis to address these issues. In our approach, we analyzed the deformation variation of vessels and the volume variation of vessels, primarily from 70% to 100% of cardiac phase, to better define boundary conditions and stiffness of vessels. We used a statistical estimation method based on a hierarchical Bayes model to integrate 4D-CT measurements and structural and fluid analysis data. Under these analysis conditions, we performed structural and fluid analysis to determine pressure, flow rate and CT-FFR. The consistency of this method has been verified by a comparison of 4D-CT-FFR analysis results derived from five clinical 4D-CT datasets with invasive measurements of FFR. Additionally, phantom experiments of flexible tubes with/without stenosis using pulsating pumps, flow sensors and pressure sensors were performed. Our results show that the proposed 4D-CT-FFR analysis method has the potential to accurately estimate the effect of coronary artery stenosis on blood flow.

  9. 4-D OCT in Developmental Cardiology

    NASA Astrophysics Data System (ADS)

    Jenkins, Michael W.; Rollins, Andrew M.

    Although strong evidence exists to suggest that altered cardiac function can lead to CHDs, few studies have investigated the influential role of cardiac function and biophysical forces on the development of the cardiovascular system due to a lack of proper in vivo imaging tools. 4-D imaging is needed to decipher the complex spatial and temporal patterns of biomechanical forces acting upon the heart. Numerous solutions over the past several years have demonstrated 4-D OCT imaging of the developing cardiovascular system. This chapter will focus on these solutions and explain their context in the evolution of 4-D OCT imaging. The first sections describe the relevant techniques (prospective gating, direct 4-D imaging, retrospective gating), while later sections focus on 4-D Doppler imaging and measurements of force implementing 4-D OCT Doppler. Finally, the techniques are summarized, and some possible future directions are discussed.
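
Of the gating techniques listed, retrospective gating amounts to sorting frames into cardiac phase bins after acquisition using a recorded gating signal. A simplified sketch assuming known, roughly uniform beat onset times (the function and all values are hypothetical, not from a specific OCT system):

```python
import numpy as np

def retrospective_gate(frame_times, beat_times, n_bins=8):
    """Assign each acquired frame to a cardiac phase bin (0..n_bins-1)
    using recorded beat onset times -- the retrospective gating idea."""
    beat_times = np.asarray(beat_times)
    idx = np.searchsorted(beat_times, frame_times, side="right") - 1
    onset = beat_times[np.clip(idx, 0, None)]   # onset of the current beat
    period = np.diff(beat_times).mean()         # mean cardiac period
    phase = ((frame_times - onset) / period) % 1.0
    return (phase * n_bins).astype(int) % n_bins

# Hypothetical: frames acquired at 100 Hz, one heartbeat every 0.5 s.
frames = np.arange(0.0, 2.0, 0.01)
beats = np.arange(0.0, 2.5, 0.5)
bins = retrospective_gate(frames, beats)
print(bins[:8])
```

All frames sharing a bin are then combined into one volume per phase, producing the 4-D (3-D + cardiac phase) image sequence.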

  10. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach

    PubMed Central

    Jung, Caroline; Kaul, Michael Gerhard; Werner, Franziska; Them, Kolja; Reimer, Rudolph; Nielsen, Peter; vom Scheidt, Annika; Adam, Gerhard; Knopp, Tobias; Ittrich, Harald

    2016-01-01

    Purpose In-vitro evaluation of the feasibility of 4D real time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. Materials and Methods A guide wire and angioplasty-catheter were labeled with a thin layer of magnetic lacquer. For real time MPI a custom made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Results Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. Magnetic marks consist of micrometer sized ferromagnetic plates mainly composed of iron and iron oxide. Conclusions 4D real time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI based roadmapping might emerge as a promising tool for radiation free 4D MPI-guided interventions. PMID:27249022

  11. MO-C-17A-02: A Novel Method for Evaluating Hepatic Stiffness Based On 4D-MRI and Deformable Image Registration

    SciTech Connect

    Cui, T; Liang, X; Czito, B; Palta, M; Bashir, M; Yin, F; Cai, J

    2014-06-15

    Purpose: Quantitative imaging of hepatic stiffness has significant potential in radiation therapy, ranging from treatment planning to response assessment. This study aims to develop a novel, noninvasive method to quantify liver stiffness with 3D strain maps of the liver using 4D-MRI and deformable image registration (DIR). Methods: Five patients with liver cancer were imaged with an institutionally developed 4D-MRI technique under an IRB-approved protocol. Displacement vector fields (DVFs) across the liver were generated via DIR of different phases of 4D-MRI. The strain tensor at each voxel of interest (VOI) was computed from the relative displacements between the VOI and each of the six adjacent voxels. Three principal strains (E{sub 1}, E{sub 2} and E{sub 3}) of the VOI were derived as the eigenvalues of the strain tensor, which represent the magnitudes of the maximum and minimum stretches. Strain tensors for two regions of interest (ROIs) were calculated and compared for each patient, one within the tumor (ROI{sub 1}) and the other in normal liver distant from the heart (ROI{sub 2}). Results: 3D strain maps were successfully generated for each respiratory phase of 4D-MRI for all patients. Liver deformations induced by both respiration and cardiac motion were observed. Differences in strain values between regions adjacent to and distant from the heart indicate significant deformation caused by cardiac expansion during diastole. The large E{sub 1}/E{sub 2} (∼2) and E{sub 1}/E{sub 3} (∼10) ratios reflect the predominance of liver deformation in the superior-inferior direction. The mean E{sub 1} in ROI{sub 1} (0.12±0.10) was smaller than in ROI{sub 2} (0.15±0.12), reflecting a higher degree of stiffness of the cirrhotic tumor. Conclusion: We have successfully developed a novel method for quantitatively evaluating regional hepatic stiffness based on DIR of 4D-MRI. Our initial findings indicate that liver strain is heterogeneous, and liver tumors may have lower principal strain values.
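
The strain computation described, a strain tensor per voxel from the relative displacements of neighboring voxels, with the principal strains as its eigenvalues, can be sketched with the Green-Lagrange tensor E = 0.5*(F^T F - I), where F = I + grad(u) is the deformation gradient of the DVF. The uniform-stretch test field below is hypothetical:

```python
import numpy as np

def principal_strains(dvf, spacing=(1.0, 1.0, 1.0)):
    """Principal (Green-Lagrange) strains at each voxel of a displacement
    vector field `dvf` with shape (3, nx, ny, nz)."""
    grads = np.stack([np.stack(np.gradient(dvf[i], *spacing), 0)
                      for i in range(3)], 0)    # grads[i, j] = du_i/dx_j
    G = np.moveaxis(grads, (0, 1), (-2, -1))    # one 3x3 matrix per voxel
    F = G + np.eye(3)                           # deformation gradient
    E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))
    return np.linalg.eigvalsh(E)                # ascending order per voxel

# Toy DVF: uniform 10% stretch along x -> largest principal strain = 0.105.
nx = ny = nz = 8
x = np.arange(nx, dtype=float)
dvf = np.zeros((3, nx, ny, nz))
dvf[0] = 0.1 * x[:, None, None]                 # u_x = 0.1 * x
e = principal_strains(dvf)
print(round(float(e[..., -1].mean()), 3))  # -> 0.105
```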

  12. Imaging 4-D hydrogeologic processes with geophysics: an example using crosswell electrical measurements to characterize a tracer plume

    NASA Astrophysics Data System (ADS)

    Singha, K.; Gorelick, S. M.

    2005-05-01

    Geophysical methods provide an inexpensive way to collect spatially exhaustive data about hydrogeologic, mechanical or geochemical parameters. In the presence of heterogeneity over multiple scales of these parameters at most field sites, geophysical data can contribute greatly to our understanding about the subsurface by providing important data we would otherwise lack without extensive, and often expensive, direct sampling. Recent work has highlighted the use of time-lapse geophysical data to help characterize hydrogeologic processes. We investigate the potential for making quantitative assessments of sodium-chloride tracer transport using 4-D crosswell electrical resistivity tomography (ERT) in a sand and gravel aquifer at the Massachusetts Military Reservation on Cape Cod. Given information about the relation between electrical conductivity and tracer concentration, we can estimate spatial moments from the 3-D ERT inversions, which give us information about tracer mass, center of mass, and dispersivity through time. The accuracy of these integrated measurements of tracer plume behavior is dependent on spatially variable resolution. The ERT inversions display greater apparent dispersion than tracer plumes estimated by 3D advective-dispersive simulation. This behavior is attributed to reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and differential smoothing from tomographic inversion. The latter is a problem common to overparameterized inverse problems, which often occur when real-world budget limitations preclude extensive well-drilling or additional data collection. These results prompt future work on intelligent methods for reparameterizing the inverse problem and coupling additional disparate data sets.
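
The spatial moments mentioned above (mass, center of mass, dispersivity) are the zeroth, first, and second moments of the concentration field. A sketch on a synthetic Gaussian plume (coordinates and parameter values hypothetical):

```python
import numpy as np

def plume_moments(conc, coords):
    """Zeroth through second spatial moments of a 3D concentration field:
    total mass proxy, center of mass, and per-axis variance (spread)."""
    total = conc.sum()
    com = np.array([(conc * coords[i]).sum() / total for i in range(3)])
    var = np.array([(conc * (coords[i] - com[i]) ** 2).sum() / total
                    for i in range(3)])
    return total, com, var

# Toy plume: Gaussian blob centered at (10, 5, 5) m with sigma = 2 m.
ax = np.arange(20.0)
coords = np.meshgrid(ax, ax, ax, indexing="ij")
r2 = ((coords[0] - 10) ** 2 + (coords[1] - 5) ** 2 + (coords[2] - 5) ** 2)
conc = np.exp(-r2 / (2 * 2.0 ** 2))
total, com, var = plume_moments(conc, coords)
print(np.round(com, 2), np.round(np.sqrt(var), 2))
```

Tracking these moments through time-lapse ERT images is what yields the mass, center-of-mass, and dispersivity histories discussed in the abstract; smoothing from the tomographic inversion inflates the second moment, which is the over-dispersion effect noted there.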

  13. Digital in-line holography: 4-D imaging and tracking of micro-structures and organisms in microfluidics and biology

    NASA Astrophysics Data System (ADS)

    Garcia-Sucerquia, J.; Xu, W.; Jericho, S. K.; Jericho, M. H.; Tamblyn, I.; Kreuzer, H. J.

    2006-01-01

    In recent years, in-line holography as originally proposed by Gabor, supplemented with numerical reconstruction, has been perfected to the point at which wavelength resolution both laterally and in depth is routinely achieved with light by using digital in-line holographic microscopy (DIHM). The advantages of DIHM are: (1) simplicity of the hardware (laser, pinhole, CCD camera), (2) magnification is obtained in the numerical reconstruction, (3) maximum information of the 3-D structure with a depth of field of millimeters, (4) changes in the specimen and the simultaneous motion of many species can be followed in 4-D at the camera frame rate. We present results obtained with DIHM in biological and microfluidic applications. By taking advantage of the large depth of field and the plane-to-plane reconstruction capability of DIHM, we can produce 3D representations of the paths followed by micron-sized objects such as suspensions of microspheres and biological samples (cells, algae, protozoa, bacteria). Examples from biology include a study of the motion of bacteria in a diatom and the track of algae and paramecium. In microfluidic applications we observe micro-channel flow, motion of bubbles in water and evolution in electrolysis. The paper finishes with new results from an underwater version of DIHM.
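
The plane-to-plane numerical reconstruction underlying DIHM can be sketched with the angular spectrum propagator, one common choice for refocusing a recorded hologram to a stack of depths; the point-aperture field and all parameter values below are hypothetical:

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex optical field by a distance dz via the angular
    spectrum method -- the plane-to-plane reconstruction step."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Toy hologram plane: a point-like aperture refocused over a depth stack.
field = np.zeros((128, 128), complex)
field[64, 64] = 1.0
stack = [np.abs(angular_spectrum(field, z, 0.5e-6, 2e-6))
         for z in (10e-6, 50e-6, 100e-6)]
print([s.shape for s in stack])
```

Repeating this for a range of dz values produces the in-depth image stack from which 3-D particle positions are read off, and doing so frame by frame gives the 4-D tracks.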

  14. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    NASA Astrophysics Data System (ADS)

    Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.

    2015-05-01

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT
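
Target volumes "estimated by standardized uptake value (SUV) thresholds" are commonly defined as all voxels above a fixed fraction of the maximum SUV. A sketch on a synthetic hot sphere; the 40% fraction and the voxel size are illustrative assumptions, not values from the abstract:

```python
import numpy as np

def suv_target_volume(suv, threshold_frac=0.40, voxel_ml=0.064):
    """Target mask and volume (mL) from a fixed-fraction-of-max SUV threshold."""
    mask = suv >= threshold_frac * suv.max()
    return mask, mask.sum() * voxel_ml

# Toy PET volume: warm background with a hot spherical lesion (T/B = 8).
ax = np.arange(32.0)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2)
suv = 1.0 + 7.0 * (r <= 5)
mask, vol_ml = suv_target_volume(suv)
print(round(vol_ml, 2))
```

Comparing such threshold-derived volumes between the motion-blurred (3D) and phase-correlated (4D) images against the motionless ground truth is how the imaging errors above are quantified.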

  15. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    PubMed Central

    Bowen, S R; Nyflot, M J; Hermann, C; Groh, C; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-01-01

    Effective positron emission tomography/computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by 6 different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy (VMAT) were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses (EUD), and 2%-2mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, < 5% in treatment planning, and < 2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT

  16. High-Resolution 4D Imaging of Technetium Transport in Porous Media using Preclinical SPECT-CT

    NASA Astrophysics Data System (ADS)

    Dogan, M.; DeVol, T. A.; Groen, H.; Moysey, S. M.; Ramakers, R.; Powell, B. A.

    2015-12-01

    Preclinical SPECT-CT (single-photon emission computed tomography with integrated X-ray computed tomography) offers the potential to quantitatively image the dynamic three-dimensional distribution of radioisotopes with sub-millimeter resolution, overlaid with structural CT images (20-200 micron resolution), making this an attractive method for studying transport in porous media. A preclinical SPECT-CT system (U-SPECT4CT, MILabs BV, Utrecht, The Netherlands) was evaluated for imaging flow and transport of 99mTc (t1/2 = 6 h) using a 46.5 mm by 156.4 mm column packed with individual layers consisting of <0.2 mm diameter silica gel; 0.2-0.25, 0.5, 1.0, 2.0, 3.0, and 4.0 mm diameter glass beads; and a natural soil sample obtained from the Savannah River Site. The column was saturated with water prior to injecting the 99mTc solution. During the injection, the flow was interrupted intermittently for 10 minute periods to allow for the acquisition of a SPECT image of the transport front. Non-uniformity of the front was clearly observed in the images, as was the retarded movement of 99mTc in the soil layer. The latter suggests good potential for monitoring transport processes occurring on the timescale of hours. After breakthrough of 99mTc was achieved, the flow was stopped and SPECT data were collected in one hour increments to evaluate the sensitivity of the instrument as the isotope decayed. Fused SPECT-CT images allowed for improved interpretation of 99mTc distributions within individual pore spaces. With ~3 MBq remaining in the column, the lowest activity imaged, it was not possible to clearly discriminate any of the pore spaces.
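
    The decay-sensitivity experiment rests on the ~6 h half-life of 99mTc quoted above. A minimal sketch of the underlying decay law, with an assumed illustrative starting activity (the 100 MBq value is not from the study):

```python
def tc99m_activity(a0_mbq, hours, half_life_h=6.0):
    """Remaining 99mTc activity (MBq) after `hours`, half-life ~6 h."""
    return a0_mbq * 2.0 ** (-hours / half_life_h)

# assumed starting activity; count down in the one-hour increments used
# for the post-breakthrough SPECT acquisitions
a0 = 100.0
t = 0.0
while tc99m_activity(a0, t) > 3.0:  # ~3 MBq was the lowest activity imaged
    t += 1.0
print(t)  # prints: 31.0
```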

  17. Assessing Cardiac Injury in Mice With Dual Energy-MicroCT, 4D-MicroCT, and MicroSPECT Imaging After Partial Heart Irradiation

    SciTech Connect

    Lee, Chang-Lung; Min, Hooney; Befera, Nicholas; Clark, Darin; Qi, Yi; Das, Shiva; Johnson, G. Allan; Badea, Cristian T.; Kirsch, David G.

    2014-03-01

    Purpose: To develop a mouse model of cardiac injury after partial heart irradiation (PHI) and to test whether dual energy (DE)-microCT and 4-dimensional (4D)-microCT can be used to assess cardiac injury after PHI to complement myocardial perfusion imaging using micro-single photon emission computed tomography (SPECT). Methods and Materials: To study cardiac injury from tangent field irradiation in mice, we used a small-field biological irradiator to deliver a single dose of 12 Gy x-rays to approximately one-third of the left ventricle (LV) of Tie2Cre; p53^FL/+ and Tie2Cre; p53^FL/− mice, where 1 or both alleles of p53 are deleted in endothelial cells. Four and 8 weeks after irradiation, mice were injected with gold and iodinated nanoparticle-based contrast agents, and imaged with DE-microCT and 4D-microCT to evaluate myocardial vascular permeability and cardiac function, respectively. Additionally, the same mice were imaged with microSPECT to assess myocardial perfusion. Results: After PHI with tangent fields, DE-microCT scans showed a time-dependent increase in accumulation of gold nanoparticles (AuNp) in the myocardium of Tie2Cre; p53^FL/− mice. In Tie2Cre; p53^FL/− mice, extravasation of AuNp was observed within the irradiated LV, whereas in the myocardium of Tie2Cre; p53^FL/+ mice, AuNp were restricted to blood vessels. In addition, data from DE-microCT and microSPECT showed a linear correlation (R² = 0.97) between the fraction of the LV that accumulated AuNp and the fraction of LV with a perfusion defect. Furthermore, 4D-microCT scans demonstrated that PHI caused a markedly decreased ejection fraction, and higher end-diastolic and end-systolic volumes, to develop in Tie2Cre; p53^FL/− mice, which were associated with compensatory hypertrophy of the unirradiated myocardium. Conclusions: Our results show that DE-microCT and 4D-microCT with nanoparticle-based contrast agents are novel imaging approaches
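
    The 4D-microCT function metrics mentioned above (ejection fraction, end-diastolic and end-systolic volumes) are related by a standard formula. A minimal sketch with illustrative mouse LV volumes; the numbers are assumptions, not data from the study:

```python
def stroke_volume(edv, esv):
    """Stroke volume from end-diastolic and end-systolic volumes."""
    return edv - esv

def ejection_fraction(edv, esv):
    """EF = stroke volume / end-diastolic volume."""
    return (edv - esv) / edv

# assumed per-beat mouse LV volumes in microliters
edv, esv = 50.0, 20.0
print(stroke_volume(edv, esv), ejection_fraction(edv, esv))  # prints: 30.0 0.6
```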

  18. An automated landmark-based elastic registration technique for large deformation recovery from 4-D CT lung images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Zacarias, Albert; Milam, Rebecca A.; Dunlap, Neal; Woo, Shiao Y.; Amini, Amir A.

    2012-03-01

    The treatment plan evaluation for lung cancer patients involves pre-treatment and post-treatment volume CT imaging of the lung. However, irradiation of the tumor volume results in structural changes to the lung during the course of treatment. In order to register the pre-treatment volume to the post-treatment volume, there is a need for robust, homologous features that are not affected by the radiation treatment, along with a smooth deformation field. Since airways are well-distributed throughout the entire lung, in this paper we propose the use of airway tree bifurcations for registration of the pre-treatment volume to the post-treatment volume. A dedicated and automated algorithm has been developed that finds corresponding airway bifurcations in both images. To derive the 3-D deformation field, a B-spline transformation model guided by a mutual information similarity metric was used to guarantee the smoothness of the transformation while incorporating local information from bifurcation points. Therefore, the approach combines global statistical intensity information with local image feature information. Since the lung undergoes large nonlinear deformations during normal breathing, the proposed method is also expected to be applicable to large-deformation registration between maximum inhale and maximum exhale images in the same subject. The method has been evaluated by registering the 3-D CT volume at maximum exhale to all the other temporal volumes in the POPI-model data.
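
    The mutual information similarity metric that guides the B-spline registration can be illustrated with a toy histogram-based estimator. This is a generic sketch, not the authors' implementation; the bin count and test signals are assumptions:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Histogram-based mutual information between two equal-length
    intensity lists (in nats). Generic sketch, not the paper's code."""
    def quantize(img):
        lo, hi = min(img), max(img)
        scale = (bins - 1) / (hi - lo) if hi > lo else 0.0
        return [int((v - lo) * scale) for v in img]
    qa, qb = quantize(img_a), quantize(img_b)
    n = len(qa)
    pa, pb = Counter(qa), Counter(qb)   # marginal bin counts
    pab = Counter(zip(qa, qb))          # joint bin counts
    mi = 0.0
    for (a, b), c in pab.items():
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi

x = [1, 2, 3, 4, 5, 6, 7, 8]
print(round(mutual_information(x, x), 3))  # identical images: maximal MI
print(mutual_information([1, 1, 2, 2], [1, 2, 1, 2]))  # independent: 0.0
```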

  20. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require. PMID:26972806
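
    As one example of the model fitting described above, T2 relaxometry fits a mono-exponential decay to multi-echo data. The log-linear least-squares sketch below is a generic illustration, not NiftyFit code; the echo times and T2 value are assumed:

```python
import math

def fit_t2(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2).
    Returns (S0, T2_ms). Generic sketch, not NiftyFit code."""
    xs = echo_times_ms
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -1.0 / slope

# assumed noiseless multi-echo data with S0 = 1000 and T2 = 80 ms
tes = [10.0, 30.0, 50.0, 70.0, 90.0]
sig = [1000.0 * math.exp(-te / 80.0) for te in tes]
s0, t2 = fit_t2(tes, sig)
print(round(s0), round(t2, 1))  # prints: 1000 80.0
```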

  1. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising of functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional de-noising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels, in addition to a significant reduction in false
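
    The core shrinkage step of wavelet de-noising can be illustrated with a one-level 1-D Haar transform and soft thresholding. This is a generic sketch, not the paper's multi-directional 3-D scheme; the signal and threshold are assumed:

```python
import math

def soft(x, t):
    """Soft-thresholding operator applied to detail coefficients."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage of an even-length 1-D signal."""
    s2 = math.sqrt(2.0)
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) / s2 for a, b in pairs]
    detail = [soft((a - b) / s2, threshold) for a, b in pairs]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out

# a flat signal with one small bump; the bump's detail coefficient falls
# below the (assumed) threshold and is removed
print([round(v, 3) for v in haar_denoise([1, 1, 1, 1.2, 1, 1], 0.2)])
# prints: [1.0, 1.0, 1.1, 1.1, 1.0, 1.0]
```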

  2. SU-E-J-151: Dosimetric Evaluation of DIR Mapped Contours for Image Guided Adaptive Radiotherapy with 4D Cone-Beam CT

    SciTech Connect

    Balik, S; Weiss, E; Williamson, J; Hugo, G; Jan, N; Zhang, L; Roman, N; Christensen, G

    2014-06-01

    Purpose: To estimate dosimetric errors resulting from using contours deformably mapped from planning CT to 4D cone beam CT (CBCT) images for image-guided adaptive radiotherapy of locally advanced non-small cell lung cancer (NSCLC). Methods: Ten locally advanced NSCLC patients underwent one planning 4D fan-beam CT (4DFBCT) and weekly 4DCBCT scans. Multiple physicians delineated the gross tumor volume (GTV) and normal structures in planning CT images and only the GTV in CBCT images. Manual contours were mapped from planning CT to CBCTs using the small deformation, inverse consistent linear elastic (SICLE) algorithm for two scans in each patient. Two physicians reviewed and rated the DIR-mapped (auto) and manual GTV contours as clinically acceptable (CA), clinically acceptable after minor modification (CAMM) or clinically unacceptable (CU). Mapped normal structures were visually inspected, corrected if necessary, and used to override tissue density for dose calculation. A CTV (6 mm expansion of GTV) and PTV (5 mm expansion of CTV) were created. VMAT plans were generated using the DIR-mapped contours to deliver 66 Gy in 33 fractions with 95% and 100% coverage (V66) to PTV and CTV, respectively. Plan evaluation for V66 was based on manual PTV and CTV contours. Results: Mean PTV V66 was 84% (range 75%–95%) and mean CTV V66 was 97% (range 93%–100%) for CAMM-scored plans (12 plans); and 90% (range 80%–95%) and 99% (range 95%–100%) for CA-scored plans (7 plans). The difference in V66 between CAMM and CA was significant for PTV (p = 0.03) and approached significance for CTV (p = 0.07). Conclusion: The quality of DIR-mapped contours directly impacted the plan quality for 4DCBCT-based adaptation. Larger safety margins may be needed when planning with auto contours for IGART with 4DCBCT images. Research was supported by NIH P01CA116602.
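
    The V66 coverage metric used for plan evaluation above is simply the fraction of a structure's voxels receiving at least the prescription dose. A minimal sketch with assumed toy voxel doses:

```python
def coverage(dose_per_voxel, prescription_gy=66.0):
    """V_Rx: fraction of a structure's voxels receiving at least the
    prescription dose (here V66, for 66 Gy in 33 fractions)."""
    covered = sum(1 for d in dose_per_voxel if d >= prescription_gy)
    return covered / len(dose_per_voxel)

ctv_doses = [67.1, 66.5, 65.9, 68.0, 66.0]  # assumed toy voxel doses in Gy
print(coverage(ctv_doses))  # 4 of 5 voxels at or above 66 Gy -> 0.8
```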

  3. 4D megahertz optical coherence tomography (OCT): imaging and live display beyond 1 gigavoxel/sec (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huber, Robert A.; Draxinger, Wolfgang; Wieser, Wolfgang; Kolb, Jan Philip; Pfeiffer, Tom; Karpf, Sebastian N.; Eibl, Matthias; Klein, Thomas

    2016-03-01

    Over the last 20 years, optical coherence tomography (OCT) has become a valuable diagnostic tool in ophthalmology, with tens of thousands of devices sold to date. Other applications, like intravascular OCT in cardiology and gastro-intestinal imaging, will follow. OCT provides 3-dimensional image data with microscopic resolution of biological tissue in vivo. In most applications, off-line processing of the acquired OCT data is sufficient. However, for applications like OCT-aided surgical microscopes, functional OCT imaging of tissue after a stimulus, or interactive endoscopy, an OCT engine capable of acquiring, processing and displaying large, high-quality 3D OCT data sets at video rate is highly desired. We developed such a prototype OCT engine and demonstrate live OCT with 25 volumes per second at a size of 320x320x320 pixels. The computer processing load of more than 1.5 TFLOPS was handled by a GTX 690 graphics processing unit with more than 3000 stream processors operating in parallel. In the talk, we will describe the optics and electronics hardware as well as the software of the system in detail and analyze current limitations. The talk also focuses on new OCT applications where such a system improves diagnosis and monitoring of medical procedures. The additional acquisition of hyperspectral stimulated Raman signals with the system will be discussed.

  4. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. PMID:23218511
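
    The accuracy figures above are mean 3-D distances between manually digitised and automatically tracked landmarks. A minimal sketch of that comparison; the coordinates are illustrative assumptions:

```python
import math

def mean_landmark_error(manual, tracked):
    """Mean 3-D Euclidean distance between paired landmark coordinates."""
    dists = [math.dist(m, t) for m, t in zip(manual, tracked)]
    return sum(dists) / len(dists)

# assumed toy coordinates (mm): two landmarks, manual vs tracked
manual = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
tracked = [(0.3, 0.0, 0.0), (1.0, 1.0, 1.4)]
print(round(mean_landmark_error(manual, tracked), 3))  # prints: 0.35
```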

  6. Usefulness of four dimensional (4D) PET/CT imaging in the evaluation of thoracic lesions and in radiotherapy planning: Review of the literature.

    PubMed

    Sindoni, Alessandro; Minutoli, Fabio; Pontoriero, Antonio; Iatì, Giuseppe; Baldari, Sergio; Pergolizzi, Stefano

    2016-06-01

    In the past decade, Positron Emission Tomography (PET) has become a routinely used methodology for the assessment of solid tumors, which can detect functional abnormalities even before they become morphologically evident on conventional imaging. PET imaging has been reported to be useful in characterizing solitary pulmonary nodules, guiding biopsy, improving lung cancer staging, guiding therapy, monitoring treatment response and predicting outcome. This review focuses on the most relevant and recent literature findings, highlighting the current role of PET/CT and the evaluation of the 4D-PET/CT modality for radiation therapy planning applications. Current evidence suggests that gross tumor volume delineation based on 4D-PET/CT information may be the best approach currently available for thoracic cancers (lung and non-lung lesions). In our opinion, its use in this clinical setting is strongly encouraged, as it may improve patient treatment outcome in the setting of radiation therapy for cancers of the thoracic region, involving not only lung, but also lymph nodes and esophageal tissue. Literature results warrant further investigation in future prospective studies, especially in the setting of dose escalation. PMID:27133755

  7. MAME Models for 4D Live-cell Imaging of Tumor: Microenvironment Interactions that Impact Malignant Progression

    PubMed Central

    Sameni, Mansoureh; Anbalagan, Arulselvi; Olive, Mary B.; Moin, Kamiar; Mattingly, Raymond R.; Sloane, Bonnie F.

    2012-01-01

    We have developed 3D coculture models, which we term MAME (mammary architecture and microenvironment engineering), and used them for real-time live-cell imaging of cell:cell interactions. Our overall goal was to develop models that recapitulate the architecture of preinvasive breast lesions to study their progression to an invasive phenotype. Specifically, we developed models to analyze interactions among pre-malignant breast epithelial cell variants and other cell types of the tumor microenvironment that have been implicated in enhancing or reducing the progression of preinvasive breast epithelial cells to invasive ductal carcinomas. Other cell types studied to date are myoepithelial cells, fibroblasts, macrophages, and blood and lymphatic microvascular endothelial cells. In addition to the MAME models, which are designed to recapitulate the cellular interactions within the breast during cancer progression, we have developed comparable models for the progression of prostate cancers. Here we illustrate the procedures for establishing the 3D cocultures along with the use of live-cell imaging and a functional proteolysis assay to follow the transition of cocultures of breast ductal carcinoma in situ (DCIS) cells and fibroblasts to an invasive phenotype over time, in this case over twenty-three days in culture. The MAME cocultures consist of multiple layers. Fibroblasts are embedded in the bottom layer of type I collagen. On that is placed a layer of reconstituted basement membrane (rBM), on which DCIS cells are seeded. A final top layer of 2% rBM is included and replenished with every change of media. To image proteolysis associated with the progression to an invasive phenotype, we use dye-quenched (DQ) fluorescent matrix proteins (DQ-collagen I mixed with the bottom layer of collagen I and DQ-collagen IV mixed with the middle layer of rBM) and observe live cultures using confocal microscopy. Optical sections are captured, processed and reconstructed in 3D with Volocity

  8. 4D Imaging of Salt Precipitation during Evaporation from Saline Porous Media Influenced by the Particle Size Distribution

    NASA Astrophysics Data System (ADS)

    Norouzi Rad, M.; Shokri, N.

    2014-12-01

    Understanding the physics of water evaporation from saline porous media is important for many processes, including vegetation and plant growth, biodiversity in soil, and the durability of building materials. To investigate the effect of particle size distribution on the dynamics of salt precipitation in saline porous media during evaporation, we applied an X-ray micro-tomography technique. Six samples of quartz sand with different grain size distributions were used in the present study, enabling us to constrain the effects of particle and pore sizes on salt precipitation patterns and dynamics. The pore size distributions were computed using the pore-scale X-ray images. The packed beds were saturated with a 3 molal NaCl solution, and X-ray imaging was continued for one day with a temporal resolution of 30 min, resulting in pore-scale information about the evaporation and precipitation dynamics. Our results show more precipitation at the early stage of evaporation in the case of sand with the larger particle size, due to the presence of fewer evaporation sites at the surface. The presence of more preferential evaporation sites at the surface of finer sands significantly modified the patterns and thickness of the salt crust deposited on the surface, such that a thinner salt crust covering a larger area formed in the sand with the smaller particle size, as opposed to the thicker, patchy crusts in samples with larger particle sizes. Our results provide new insights regarding the physics of salt precipitation in porous media during evaporation.

  9. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and cannot be done truly in real-time and 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing for acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently-developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high, 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution was subsequently imaged in the heart. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which has been previously unachieved by any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential for optoacoustic cardiac imaging and is expected to have a major contribution toward future preclinical studies of animal models of cardiovascular health and disease.

  10. Validating and improving CT ventilation imaging by correlating with ventilation 4D-PET/CT using {sup 68}Ga-labeled nanoparticles

    SciTech Connect

    Kipritidis, John Keall, Paul J.; Siva, Shankar; Hofman, Michael S.; Callahan, Jason; Hicks, Rodney J.

    2014-01-15

    Purpose: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with 68Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. Methods: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (V_HU) or the Jacobian determinant of deformation (V_Jac). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρV_HU and ρV_Jac) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σ_m = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d_20 for the (0-20)th functional percentile volumes. Results: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρV_HU) with σ_m = 3 mm. This leads to correlation values in the ranges 0.22 ≤ r ≤ 0.76 and 0.38 ≤ d_20 ≤ 0.68, with mean r = 0.42 ± 0.16 and mean d_20 = 0.52 ± 0.09 averaged over the 12 patients. Compared to Jacobian-based metrics, HU-based metrics lead to statistically significant
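
    The two similarity metrics used above, the Spearman correlation r and the Dice coefficient, can be sketched generically as follows (toy inputs; simplified rank step with no tie handling):

```python
def dice(vol_a, vol_b):
    """Dice overlap between two voxel-index sets (e.g. functional
    percentile volumes from CT and PET ventilation)."""
    a, b = set(vol_a), set(vol_b)
    return 2 * len(a & b) / (len(a) + len(b))

def spearman(xs, ys):
    """Spearman rank correlation; simplified, no tie handling."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(dice(range(1, 11), range(6, 16)))          # 5 shared of 10+10 voxels -> 0.5
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # monotonic -> 1.0
```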

  11. 4D seismic to image a thin carbonate reservoir during a miscible CO2 flood: Hall-Gurney Field, Kansas, USA

    USGS Publications Warehouse

    Raef, A.E.; Miller, R.D.; Franseen, E.K.; Byrnes, A.P.; Watney, W.L.; Harrison, W.E.

    2005-01-01

    The movement of miscible CO2 injected into a shallow (900 m), thin (3.6-6 m) carbonate reservoir was monitored using the high-resolution parallel progressive blanking (PPB) approach. The approach concentrated on repeatability during acquisition and processing, and on the use of amplitude envelope 4D horizon attributes. Comparison of production data and reservoir simulations to seismic images provided a measure of the effectiveness of time-lapse (TL) seismic monitoring in detecting weak anomalies associated with changes in fluid concentration. Specifically, the method aided in the analysis of high-resolution data to distinguish subtle seismic characteristics and associated trends related to depositional lithofacies and geometries and structural elements of this carbonate reservoir that impact fluid character and enhanced oil recovery (EOR) efforts.

  12. 4-D imaging of sub-second dynamics in pore-scale processes using real-time synchrotron X-ray tomography

    NASA Astrophysics Data System (ADS)

    Dobson, Katherine J.; Coban, Sophia B.; McDonald, Samuel A.; Walsh, Joanna N.; Atwood, Robert C.; Withers, Philip J.

    2016-07-01

    A variable volume flow cell has been integrated with state-of-the-art ultra-high-speed synchrotron X-ray tomography imaging. The combination allows the first real-time (sub-second) capture of dynamic pore (micron)-scale fluid transport processes in 4-D (3-D + time). With 3-D data volumes acquired at up to 20 Hz, we perform in situ experiments that capture high-frequency pore-scale dynamics in 5-25 mm diameter samples with voxel (3-D equivalent of a pixel) resolutions of 2.5 to 3.8 µm. The data are free from motion artefacts and can be spatially registered or collected in the same orientation, making them suitable for detailed quantitative analysis of the dynamic fluid distribution pathways and processes. The methods presented here are capable of capturing a wide range of high-frequency nonequilibrium pore-scale processes including wetting, dilution, mixing, and reaction phenomena, without sacrificing significant spatial resolution. As well as fast streaming (continuous acquisition) at 20 Hz, they also allow larger-scale and longer-term experimental runs to be sampled intermittently at lower frequency (time-lapse imaging), benefiting from fast image acquisition rates to prevent motion blur in highly dynamic systems. This marks a major technical breakthrough for quantification of high-frequency pore-scale processes: processes that are critical for developing and validating more accurate multiscale flow models through spatially and temporally heterogeneous pore networks.

  13. Integration of image/video understanding engine into 4D/RCS architecture for intelligent perception-based behavior of robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-10-01

    To be completely successful, robots need reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for the processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to establish the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture to better interpret images/video for situation awareness, target recognition, navigation and actions.

  14. Direct 4D PET MLEM reconstruction of parametric images using the simplified reference tissue model with the basis function method for [¹¹C]raclopride.

    PubMed

    Gravel, Paul; Reader, Andrew J

    2015-06-01

    This work assesses the one-step-late maximum likelihood expectation maximization (OSL-MLEM) 4D PET reconstruction algorithm for direct estimation of parametric images from raw PET data when using the simplified reference tissue model with the basis function method (SRTM-BFM) for the kinetic analysis. To date, the OSL-MLEM method has been evaluated using kinetic models based on two-tissue compartments with an irreversible component. We extend the evaluation of this method to two-tissue compartments with a reversible component, using SRTM-BFM on simulated 3D + time data sets (using [¹¹C]raclopride time-activity curves from real data) and on real data sets acquired with the high resolution research tomograph. The performance of the proposed method is evaluated by comparing voxel-level binding potential (BPND) estimates with those obtained from conventional post-reconstruction kinetic parameter estimation. For the number of iterations commonly used in practice, our results show that for the 3D + time simulation, the direct method delivers results with lower (%)RMSE at the normal count level (decreases of 9-10 percentage points, corresponding to a 38-44% reduction) and at low count levels (decreases of 17-21 percentage points, corresponding to a 26-36% reduction). For the real 3D data set, the results follow a similar trend, with the direct reconstruction method offering a 21% decrease in (%)CV compared to the post-reconstruction method at low count levels. Thus, based on the results presented herein, using the SRTM-BFM kinetic model in conjunction with direct OSL-MLEM 4D PET reconstruction offers an improvement in performance over conventional post-reconstruction methods. PMID:25992999
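    The basis function method linearises the SRTM fit: for each candidate efflux rate k2a on a grid, the reference time-activity curve is convolved with exp(-k2a·t) to form a basis, the remaining two parameters follow from linear least squares, and BPND is read off the best-fitting basis. A minimal post-reconstruction sketch of that kinetic step (function name, grid, and the synthetic curves below are illustrative, not from the paper):

```python
import numpy as np

def srtm_bfm(ct, cr, t, k2a_grid):
    """Fit SRTM by the basis function method (illustrative helper).

    ct: target-tissue TAC, cr: reference-region TAC, both sampled at
    uniformly spaced times t. Returns (R1, k2, BPND) of the best basis."""
    dt = t[1] - t[0]                       # assume uniform sampling
    best = None
    for k2a in k2a_grid:
        # basis function: reference TAC convolved with exp(-k2a * t)
        basis = np.convolve(cr, np.exp(-k2a * t))[:len(t)] * dt
        A = np.stack([cr, basis], axis=1)  # ct ~ R1*cr + (k2 - R1*k2a)*basis
        coef = np.linalg.lstsq(A, ct, rcond=None)[0]
        r = ct - A @ coef
        sse = float(r @ r)
        if best is None or sse < best[0]:
            best = (sse, k2a, coef)
    _, k2a, (r1, a2) = best
    k2 = a2 + r1 * k2a
    return r1, k2, k2 / k2a - 1.0          # BPND = k2/k2a - 1
```

On noise-free synthetic curves the grid search recovers the generating parameters exactly when the true k2a lies on the grid.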

  15. An improved non-local means filter for denoising in brain magnetic resonance imaging based on fuzzy cluster

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Sang, Xinzhu; Xing, Shujun; Wang, Bo

    2014-11-01

    A non-local means (NLM) filter is combined with an appropriate fuzzy cluster criterion and evaluated both objectively and subjectively on synthetic brain Magnetic Resonance Imaging (MRI) data. Experimental results show that, compared with the traditional NLM method, noise is effectively suppressed while image details are well preserved. Quantitative and qualitative results also indicate that the proposed method greatly reduces artifacts and enhances brain MR images.
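    For reference, the baseline NLM filter that the fuzzy-cluster variant builds on replaces each pixel by a weighted average of pixels whose surrounding patches look similar. A plain, unaccelerated 2-D sketch (parameter names and values are illustrative; the paper's fuzzy-cluster criterion is not reproduced here):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Plain non-local means for a 2-D float image (didactic sketch).

    patch: odd patch width, search: odd search-window width,
    h: filtering parameter controlling how fast weights decay."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode='edge')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s          # centre in padded image
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch distance
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out
```

Because weights collapse across strong edges, NLM averages noise within homogeneous regions while keeping boundaries sharp.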

  16. Feasibility of quantitative lung perfusion by 4D CT imaging by a new dynamic-scanning protocol in an animal model

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Goldin, Jonathan G.; Abtin, Fereidoun G.; Brown, Matt; McNitt-Gray, Mike

    2008-03-01

    The purpose of this study is to test a new dynamic perfusion-CT imaging protocol in an animal model and to investigate the feasibility of quantifying lung parenchymal perfusion for functional analysis from 4D CT image data. A novel perfusion-CT protocol was designed with 25 scanning time points: the first at baseline and 24 scans after a bolus injection of contrast material. Post-contrast CT images were acquired with a high sampling rate before the first blood recirculation and then at a relatively low sampling rate until 10 minutes after administering the contrast agent. Lower-radiation techniques were used to keep the radiation dose at an acceptable level. Two Yorkshire swine with pulmonary emboli underwent this perfusion-CT protocol at suspended end inspiration. Software tools were designed to measure the quantitative perfusion parameters (perfusion, permeability, relative blood volume, blood flow, wash-in and wash-out enhancement) of a voxel or region of interest in the lung. The perfusion values were calculated for further lung functional analysis and presented visually as contrast-enhancement maps for the volume being examined. The results show that the increased CT temporal sampling rate makes it feasible to quantify lung function and evaluate pulmonary emboli. Differences between areas with known perfusion defects and those without were observed. In conclusion, these techniques for calculating lung perfusion in an animal model have potential application in human lung functional analysis, such as evaluating the functional effects of pulmonary embolism. With further study, they might be applicable to human lung parenchyma characterization and possibly lung nodule characterization.
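    One common way to turn such time-enhancement curves into a perfusion number is the maximum-slope model: perfusion ≈ peak rate of tissue enhancement divided by peak arterial enhancement. The abstract does not state which model its software used; the sketch below is that standard simplification, with hypothetical argument names:

```python
import numpy as np

def max_slope_perfusion(tissue_hu, aorta_hu, t):
    """Maximum-slope perfusion estimate from time-enhancement curves.

    tissue_hu, aorta_hu: enhancement (HU above baseline) sampled at times
    t in seconds. Returns perfusion in mL/min per mL of tissue:
    peak tissue slope divided by peak arterial enhancement."""
    slope = np.gradient(tissue_hu, t)      # HU per second
    return slope.max() / aorta_hu.max() * 60.0
```

For a tissue ramp of 5 HU/s against a 200 HU arterial peak this yields 5/200 × 60 = 1.5 mL/min/mL.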

  17. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λh1h2h3 = λ111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ111, the phase-velocity factor νλ = λh1h2h3/λ111 becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In the reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  18. Enhanced Optoelectronic Performance of a Passivated Nanowire-Based Device: Key Information from Real-Space Imaging Using 4D Electron Microscopy.

    PubMed

    Khan, Jafar I; Adhikari, Aniruddha; Sun, Jingya; Priante, Davide; Bose, Riya; Shaheen, Basamat S; Ng, Tien Khee; Zhao, Chao; Bakr, Osman M; Ooi, Boon S; Mohammed, Omar F

    2016-05-01

    Managing trap states and understanding their role in ultrafast charge-carrier dynamics, particularly at surfaces and interfaces, remains a major bottleneck preventing further advancement and commercial exploitation of nanowire (NW)-based devices. A key challenge is to selectively map such ultrafast dynamical processes on the surfaces of NWs, a capability so far out of reach of time-resolved laser techniques. Selective mapping of surface dynamics in real space and time can only be achieved by applying four-dimensional scanning ultrafast electron microscopy (4D S-UEM). Charge-carrier dynamics are spatially and temporally visualized on the surface of InGaN NW arrays before and after surface passivation with octadecylthiol (ODT). The time-resolved secondary electron images clearly demonstrate that carrier recombination on the NW surface is significantly slowed after ODT treatment. This observation is fully supported by the enhanced performance of the light-emitting device. Direct observation of surface dynamics provides a profound understanding of the photophysical mechanisms on material surfaces and enables the formulation of effective surface trap state management strategies for the next generation of high-performance NW-based optoelectronic devices. PMID:26938476

  19. 4-D imaging of seepage in earthen embankments with time-lapse inversion of self-potential data constrained by acoustic emissions localization

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.

    2015-02-01

    New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We present a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earthen dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-day full-scale levee (earthen embankment) failure test conducted in Booneschans, Netherlands, in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for localizing acoustic emissions in both time and space, and apply the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. The seismic localization results are used to build a set of time-invariant yet spatially varying model weights for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent, in terms of focused groundwater flow, with visual observations on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means of early detection and imaging of developing embankment defects associated with concentrated seepage and internal erosion. The same approach can be used to detect various types of hydromechanical disturbances at larger scales.
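    A 4-D grid search of this kind scans candidate source locations and origin times and keeps the combination that best explains the picked arrival times. A toy homogeneous-velocity version (the field algorithm must also cope with heterogeneous media and noisy picks; names and the L2 misfit are illustrative):

```python
import numpy as np

def locate_event(sensors, t_obs, v, grid_pts, t0_grid):
    """4-D grid search (x, y, z, t0) for an acoustic-emission source.

    sensors: (N,3) sensor coordinates, t_obs: (N,) picked arrival times,
    v: wave speed, grid_pts: (M,3) candidate locations, t0_grid: candidate
    origin times. Returns (best_xyz, best_t0) minimising the L2 misfit."""
    best = (np.inf, None, None)
    for p in grid_pts:
        dist = np.linalg.norm(sensors - p, axis=1)   # straight-ray distances
        for t0 in t0_grid:
            misfit = float(np.sum((t_obs - (t0 + dist / v)) ** 2))
            if misfit < best[0]:
                best = (misfit, p, t0)
    return best[1], best[2]
```

With noise-free picks and the true source on the grid, the misfit is exactly zero at the true (location, origin-time) pair.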

  20. Helical 4D CT and Comparison with Cine 4D CT

    NASA Astrophysics Data System (ADS)

    Pan, Tinsu

    4D CT was one of the most important developments in radiation oncology in the last decade. Its early development in single-slice CT and commercialization in multi-slice CT have radically changed our practice in radiation treatment of lung cancer, and have enabled stereotactic radiosurgery of early-stage lung cancer. In this chapter, we document the history of 4D CT development; detail the data sufficiency condition governing 4D CT data collection; present the design of the commercial helical 4D CTs from Philips and Siemens; compare the differences between helical 4D CT and the GE cine 4D CT in data acquisition, slice thickness, acquisition time and workflow; review the respiratory monitoring devices; and examine the causes of image artifacts in 4D CT.

  1. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    SciTech Connect

    Bildhauer, Michael Fuchs, Martin

    2012-12-15

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  2. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
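    The data handling at the core of the method — stacking projections into a 3-D image, extracting small overlapping 3-D blocks, and grouping similar blocks — can be sketched as follows. The paper clusters with a purpose-built fast algorithm and learns a joint-sparse dictionary per cluster; the illustration below buckets blocks by quantised mean intensity only to show the shape of the pipeline, not the actual clustering criterion:

```python
import numpy as np

def extract_blocks(vol, b=4, step=2):
    """Extract overlapping b*b*b blocks from a stacked-projection volume."""
    Z, Y, X = vol.shape
    blocks, coords = [], []
    for z in range(0, Z - b + 1, step):
        for y in range(0, Y - b + 1, step):
            for x in range(0, X - b + 1, step):
                blocks.append(vol[z:z + b, y:y + b, x:x + b].ravel())
                coords.append((z, y, x))
    return np.array(blocks), coords

def cluster_blocks(blocks, n_bins=8):
    """Fast clustering proxy: bucket blocks by quantised mean intensity.

    (Stand-in for the paper's similarity-based clustering step.)"""
    means = blocks.mean(axis=1)
    lo, hi = means.min(), means.max()
    labels = ((means - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    return np.minimum(labels, n_bins - 1)
```

Each cluster would then be denoised jointly, e.g. by finding a shared sparse support in the learned dictionary.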

  3. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

    INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions which is pivotal to Numerical Weather Prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Of the 18 sounder channels, radiances from 9 channels are selected for assimilation, depending on the relevance of the information in each channel. The three high-peaking CO2 absorption channels and the three water vapor channels (channels 10, 11, and 12) are assimilated both over land and ocean, whereas the window channels (channels 6, 7, and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality. This is based on the assumption that the observed satellite radiances are free from calibration errors and that the short-range forecast provided by the NWP model is free from systematic errors. Innovations (Observation - Forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time, scan angle and instrument degradation, an accurate bias correction algorithm is important for the assimilation of INSAT-3D sounder radiances. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels, and the impact of bias-corrected radiances on the data assimilation system, particularly over India and the surrounding oceanic regions.
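    At its simplest, radiance bias correction removes the mean innovation per channel; operational variational schemes extend this with air-mass, scan-angle, and time-dependent predictors, as the abstract notes. A minimal constant-predictor sketch (array layout and names are illustrative, not the NCMRWF implementation):

```python
import numpy as np

def static_bias_correction(obs, fg):
    """Per-channel static bias correction for radiances (minimal sketch).

    obs, fg: (n_obs, n_channels) observed and model-equivalent brightness
    temperatures. Real VarBC uses several predictors per channel; here the
    only predictor is a constant offset."""
    bias = (obs - fg).mean(axis=0)     # mean innovation per channel
    return obs - bias, bias
```

After correction, the mean innovation of each channel is zero by construction, which is exactly what the before/after innovation statistics are meant to verify.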

  4. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
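    The parameter-sweep idea is easy to reproduce on the CPU: run each candidate setting, score it against the noiseless reference by mean squared error, and keep the argmin. A sketch with a separable Gaussian filter standing in for the GPU kernels (function names and parameters are illustrative):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter (CPU stand-in for the GPU denoisers)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, tmp)

def sweep_parameters(noisy, reference, sigmas):
    """Pick the parameter minimising MSE against a noiseless reference."""
    return min(sigmas,
               key=lambda s: float(np.mean((gaussian_blur(noisy, s) - reference) ** 2)))
```

The same loop generalises to multi-parameter grids; only the scoring call changes.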

  5. Improving 4D plan quality for PBS-based liver tumour treatments by combining online image guided beam gating with rescanning

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Knopf, Antje-Christin; Weber, Damien Charles; Lomax, Antony John

    2015-10-01

    Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-established method in conventional radiotherapy for mitigating tumour motion, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs with motion information extracted from 4DMRI. The value of 4DCT(MRI) is its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing patterns between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beam's eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm) with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own for such cases would result in much longer treatment times. In addition, when rescanning is applied on its own, large differences between volumetric
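    The effective duty cycle quoted above is simply the fraction of the breathing trace that falls inside the amplitude gating window. A toy illustration (the study derives its gating signal from simulated BEV x-ray images, not directly from the trace, and names here are hypothetical):

```python
import numpy as np

def amplitude_gating_duty_cycle(trace, window_mm):
    """Fraction of time an exhale-referenced motion trace lies within an
    amplitude gating window, i.e. the effective duty cycle."""
    baseline = trace.min()                 # end-exhale position
    return float(np.mean(trace - baseline <= window_mm))
```

For a 10 mm peak-to-peak sinusoidal trace and a 3 mm window, the beam-on fraction is arccos(0.4)/pi, roughly 37%; irregular breathing pushes it far lower, as the abstract reports.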

  6. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

    This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes, and in some cases an upper bound on the convergence rate is given.
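    For concreteness, the TV (ROF) denoising model analysed here minimises E(u) = ½‖u − f‖² + λ·TV(u). A didactic gradient-descent sketch on a smoothed TV term follows; production solvers use Chambolle's dual or primal-dual algorithms, and the parameter values below are illustrative only:

```python
import numpy as np

def tv_denoise(f, lam=0.2, eps=0.1, step=0.05, iters=300):
    """Gradient descent on the smoothed ROF energy
       E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2).
    Didactic sketch; eps smooths the non-differentiable TV term."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag                 # normalised gradient field
        # divergence of (px, py) via backward differences
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        u -= step * ((u - f) - lam * div)           # gradient step on E
    return u
```

On a noisy piecewise-constant image the output is near-constant away from the edge, the behaviour the paper makes precise via the "extended support".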

  7. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple volumes of interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures was used to assess the quality of supervised clusters in the original and filtered spaces. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that gives the best cluster quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.
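    Two of the ingredients above are easy to sketch: a 3×3 median filter (the simple denoiser that came out on top) and a histogram-based pair-wise similarity between VOIs. Histogram intersection is used below as a stand-in for the paper's plurality of measures; names are illustrative:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter over a 2-D image (edge-padded)."""
    pad = np.pad(img, 1, mode='edge')
    H, W = img.shape
    stack = [pad[i:i + H, j:j + W] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def histogram_intersection(a, b, bins=32, value_range=(0.0, 1.0)):
    """Histogram intersection in [0, 1]: 1 means identical distributions."""
    ha, _ = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=bins, range=value_range)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

Such similarity scores, computed pair-wise between labeled VOIs before and after filtering, feed the cluster indices the study rank-orders.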

  8. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is the low signal-to-noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction on natural noisy bird recordings. PMID:26812391
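    The wavelet side of such a denoiser is easy to sketch. The paper uses wavelet packet decomposition combined with band-pass/low-pass filtering; the plain Haar soft-thresholding below is the simplest relative of that family (threshold value and level count are illustrative):

```python
import numpy as np

def haar_denoise(x, thresh, levels=3):
    """Haar wavelet soft-threshold denoising of a 1-D signal (sketch).

    len(x) must be divisible by 2**levels. Detail coefficients are
    soft-thresholded; the coarsest approximation is kept untouched."""
    coeffs, approx = [], x.astype(float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail
        d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
        coeffs.append(d)
        approx = a
    for d in reversed(coeffs):                           # inverse transform
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx
```

With the threshold set to zero the transform is lossless; with a threshold of a few noise standard deviations, broadband noise collapses while the smooth song envelope survives.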

  9. Is a Clinical Target Volume (CTV) Necessary in the Treatment of Lung Cancer in the Modern Era Combining 4-D Imaging and Image-guided Radiotherapy (IGRT)?

    PubMed Central

    Kilburn, Jeremy M; Lucas, John T; Soike, Michael H; Ayala-Peacock, Diandra N; Blackstock, Arthur W; Hinson, William H; Munley, Michael T; Petty, William J

    2016-01-01

    Objective: We hypothesized that omission of clinical target volumes (CTV) in lung cancer radiotherapy would not compromise control, by determining retrospectively whether the addition of a CTV would have encompassed the site of failure. Methods: Stage II-III patients were treated from 2009-2012 with daily cone-beam imaging and a 5 mm planning target volume (PTV) without a CTV. PTVs were expanded 1 cm and termed CTVretro. Recurrences were scored as 1) within the PTV, 2) within CTVretro, or 3) outside the PTV. Locoregional control (LRC), distant control (DC), progression-free survival (PFS), and overall survival (OS) were estimated. Results: Among 110 patients, 57% were Stage IIIA, 32% IIIB, 4% IIA, and 7% IIB. Eighty-six percent of Stage III patients received chemotherapy. Median dose was 70 Gy (45-74 Gy) and fraction size ranged from 1.5-2.7 Gy. Median follow-up was 12 months, median OS was 22 months (95% CI 19-30 months), and LRC at two years was 69%. Fourteen local and eight regional events were scored, with two CTVretro failures equating to a two-year CTV failure-free survival of 98%. Conclusion: Omission of a 1 cm CTV expansion appears feasible based on only two events among 110 patients and should be considered in radiation planning. PMID:26929893

  10. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
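    The "grown and interconnected" step is essentially connected-component labelling of cells whose seismic attribute exceeds a High Amplitude Event threshold. A minimal 2-D flood-fill stand-in (the patent operates on 3-D attribute volumes with more elaborate interconnection rules; names are illustrative):

```python
from collections import deque

import numpy as np

def grow_high_amplitude_regions(amp, thresh):
    """Label 4-connected regions where amp >= thresh.

    amp: 2-D amplitude array. Returns an int label map, 0 = background."""
    H, W = amp.shape
    labels = np.zeros((H, W), dtype=int)
    mask = amp >= thresh
    cur = 0
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] and labels[si, sj] == 0:
                cur += 1                       # start a new region
                q = deque([(si, sj)])
                labels[si, sj] = cur
                while q:                       # breadth-first flood fill
                    i, j = q.popleft()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < H and 0 <= nj < W and mask[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = cur
                            q.append((ni, nj))
    return labels
```

Each labelled region is a candidate plumbing element; comparing label maps between time-separated surveys is the 4-D part of the analysis.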

  11. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.

  12. Constrained reconstructions for 4D intervention guidance.

    PubMed

    Kuntz, J; Flach, B; Kueres, R; Semmler, W; Kachelriess, M; Bartling, S

    2013-05-21

    Image-guided interventions are an increasingly important part of clinical minimally invasive procedures. However, up to now they cannot be performed under 4D (3D + time) guidance due to the exceedingly high x-ray dose. In this work we investigate the applicability of compressed sensing reconstructions for highly undersampled CT datasets combined with the incorporation of prior images in order to yield low dose 4D intervention guidance. We present a new reconstruction scheme prior image dynamic interventional CT (PrIDICT) that accounts for specific image features in intervention guidance and compare it to PICCS and ASD-POCS. The optimal parameters for the dose per projection and the numbers of projections per reconstruction are determined in phantom simulations and measurements. In vivo experiments in six pigs are performed in a cone-beam CT; measured doses are compared to current gold-standard intervention guidance represented by a clinical fluoroscopy system. Phantom studies show maximum image quality for identical overall doses in the range of 14 to 21 projections per reconstruction. In vivo studies reveal that interventional materials can be followed in 4D visualization and that PrIDICT, compared to PICCS and ASD-POCS, shows superior reconstruction results and fewer artifacts in the periphery with dose in the order of biplane fluoroscopy. These results suggest that 4D intervention guidance can be realized with today's flat detector and gantry systems using the herein presented reconstruction scheme.
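    The mathematical core shared by such prior-image methods is a data-fidelity term plus a penalty tying the reconstruction to the prior image. The quadratic toy problem below shows only that core; PICCS, ASD-POCS and PrIDICT additionally use total-variation-type sparsity terms and real CT system models, and all names here are illustrative:

```python
import numpy as np

def prior_image_recon(A, b, x_prior, lam=0.5, iters=500, step=None):
    """Gradient descent on  min_x ||A x - b||^2/... + lam ||x - x_prior||^2
    (quadratic stand-in for compressed-sensing prior-image reconstruction).

    A: (m, n) system matrix with m << n (undersampled projections),
    b: measurements, x_prior: prior image as a vector."""
    x = x_prior.copy()
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # safe step size
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * (x - x_prior)
        x -= step * grad
    return x
```

Even with far fewer measurements than unknowns, the prior term makes the problem well-posed, which is the mechanism that lets 4D guidance survive heavy undersampling.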

  13. Constrained reconstructions for 4D intervention guidance

    NASA Astrophysics Data System (ADS)

    Kuntz, J.; Flach, B.; Kueres, R.; Semmler, W.; Kachelrieß, M.; Bartling, S.

    2013-05-01

    Image-guided interventions are an increasingly important part of clinical minimally invasive procedures. However, up to now they cannot be performed under 4D (3D + time) guidance due to the exceedingly high x-ray dose. In this work we investigate the applicability of compressed sensing reconstructions for highly undersampled CT datasets combined with the incorporation of prior images in order to yield low dose 4D intervention guidance. We present a new reconstruction scheme prior image dynamic interventional CT (PrIDICT) that accounts for specific image features in intervention guidance and compare it to PICCS and ASD-POCS. The optimal parameters for the dose per projection and the numbers of projections per reconstruction are determined in phantom simulations and measurements. In vivo experiments in six pigs are performed in a cone-beam CT; measured doses are compared to current gold-standard intervention guidance represented by a clinical fluoroscopy system. Phantom studies show maximum image quality for identical overall doses in the range of 14 to 21 projections per reconstruction. In vivo studies reveal that interventional materials can be followed in 4D visualization and that PrIDICT, compared to PICCS and ASD-POCS, shows superior reconstruction results and fewer artifacts in the periphery with dose in the order of biplane fluoroscopy. These results suggest that 4D intervention guidance can be realized with today’s flat detector and gantry systems using the herein presented reconstruction scheme.

  14. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131
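    The efficiency argument, processing after projection in 2-D rather than per voxel in 3-D, fits in a few lines: render a maximum-intensity projection, then tone-map the resulting 2-D image. Gamma mapping is used here as the simplest tone curve; FluoRender's actual operators are richer, and the function name is illustrative:

```python
import numpy as np

def mip_then_tonemap(vol, gamma=0.5):
    """Maximum-intensity projection followed by a 2-D gamma tone map.

    Operating on the projected H*W image touches far fewer samples than
    remapping every voxel of the D*H*W volume."""
    img = vol.max(axis=0)                  # MIP along the viewing axis
    img = img / (img.max() + 1e-12)        # normalise to [0, 1]
    return img ** gamma                    # gamma < 1 brightens dim detail
```

The same post-projection slot hosts FluoRender's 2-D compositing and colour mapping as well.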

  15. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research.

    PubMed

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.

  16. Photo-consistency registration of a 4D cardiac motion model to endoscopic video for image guidance of robotic coronary artery bypass

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Rueckert, Daniel; Edwards, Eddie

    2009-02-01

    The aim of the work described in this paper is registration of a 4D preoperative motion model of the heart to the video view of the patient through the intraoperative endoscope. The heart motion is cyclical and can be modelled using multiple reconstructions of cardiac-gated coronary CT. We propose the use of photoconsistency between the two views through the da Vinci endoscope to align the preoperative heart surface model from CT. The temporal alignment from the video to the CT model could in principle be obtained from the ECG signal. We propose averaging the photoconsistency over the cardiac cycle to improve the registration compared to a single view. Though there is considerable motion of the heart, after correct temporal alignment we suggest that the remaining motion should be close to rigid. Results are presented for simulated renderings and for real video of a beating heart phantom. We found much smoother sections at the minimum when using multiple phases for the registration; furthermore, convergence was found to be better when more phases were used.
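    A minimal sketch of the photoconsistency idea, using hypothetical per-phase intensity samples rather than real endoscope frames: the cost compares the intensities the two views assign to the same surface points, and averaging over cardiac phases uses the whole cycle rather than one frame.

```python
import numpy as np

def photoconsistency(view_a, view_b):
    """Photo-consistency cost for the intensities two camera views assign
    to the same model surface points: low when the views agree."""
    return np.mean((np.asarray(view_a) - np.asarray(view_b)) ** 2)

def phase_averaged_cost(views_a, views_b):
    """Average the cost over all cardiac phases, which smooths the cost
    surface relative to a single view pair."""
    return np.mean([photoconsistency(a, b) for a, b in zip(views_a, views_b)])

# Toy data: intensities sampled at 50 surface points over 8 phases.
rng = np.random.default_rng(0)
surface = [rng.random(50) for _ in range(8)]
aligned = [s + 0.01 * rng.standard_normal(50) for s in surface]      # good pose
misaligned = [rng.permutation(s) for s in surface]                   # bad pose
```

At the correct registration the two views sample the same surface intensities (up to noise), so the phase-averaged cost is near zero; a wrong pose scrambles the correspondence and inflates it.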

  17. 4D (x-y-z-t) imaging of thick biological samples by means of Two-Photon inverted Selective Plane Illumination Microscopy (2PE-iSPIM)

    NASA Astrophysics Data System (ADS)

    Lavagnino, Zeno; Sancataldo, Giuseppe; D’Amora, Marta; Follert, Philipp; de Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca

    2016-04-01

    In the last decade, light sheet fluorescence microscopy techniques, such as selective plane illumination microscopy (SPIM), have become well-established methods in developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues since the common sample mounting procedure, based on gel embedding, could interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in sample mounting, eliminating the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The proposed system represents an optimal platform for tissue imaging and paves the way for applying light sheet microscopy to a wider range of samples, including those that must be mounted on non-transparent surfaces.

  18. 4D (x-y-z-t) imaging of thick biological samples by means of Two-Photon inverted Selective Plane Illumination Microscopy (2PE-iSPIM)

    PubMed Central

    Lavagnino, Zeno; Sancataldo, Giuseppe; d’Amora, Marta; Follert, Philipp; De Pietri Tonelli, Davide; Diaspro, Alberto; Cella Zanacchi, Francesca

    2016-01-01

    In the last decade, light sheet fluorescence microscopy techniques, such as selective plane illumination microscopy (SPIM), have become well-established methods in developmental biology. However, conventional SPIM architectures hardly permit imaging of certain tissues since the common sample mounting procedure, based on gel embedding, could interfere with the sample morphology. In this work we propose an inverted selective plane illumination microscopy system (iSPIM), based on non-linear excitation, suitable for 3D tissue imaging. First, the iSPIM architecture provides flexibility in sample mounting, eliminating the gel-based mounting typical of conventional SPIM and permitting 3D imaging of hippocampal slices from mouse brain. Moreover, all the advantages brought by two-photon excitation (2PE) in terms of reduced scattering and improved contrast are exploited, demonstrating improved image quality and contrast compared to single-photon excitation. The proposed system represents an optimal platform for tissue imaging and paves the way for applying light sheet microscopy to a wider range of samples, including those that must be mounted on non-transparent surfaces. PMID:27033347

  19. Evaluation of the cone beam CT for internal target volume localization in lung stereotactic radiotherapy in comparison with 4D MIP images

    SciTech Connect

    Wang, Lu; Chen, Xiaoming; Lin, Mu-Han; Lin, Teh; Fan, Jiajin; Jin, Lihui; Ma, Charlie M.; Xue, Jun

    2013-11-15

    Purpose: To investigate whether the three-dimensional cone-beam CT (CBCT) is clinically equivalent to the four-dimensional computed tomography (4DCT) maximum intensity projection (MIP) reconstructed images for internal target volume (ITV) localization in image-guided lung stereotactic radiotherapy. Methods: A ball-shaped polystyrene phantom with built-in cube, sphere, and cone of known volumes was attached to a motor-driven platform, which simulates a sinusoidal movement with changeable motion amplitude and frequency. Target motion was simulated in the patient in a superior-inferior (S-I) direction with three motion periods and 2 cm peak-to-peak amplitudes. The Varian onboard Exact-Arms kV CBCT system and the GE LightSpeed four-slice CT integrated with the respiratory-position-management 4DCT scanner were used to scan the moving phantom. MIP images were generated from the 4DCT images. The clinical equivalence of the two sets of images was evaluated by comparing the extreme locations of the moving objects along the motion direction, the centroid position of the ITV, and the ITV volumes that were contoured automatically by Velocity or calculated with an imaging gradient method. The authors compared the ITV volumes determined by the above methods with those theoretically predicted by taking into account the physical object dimensions and the motion amplitudes. The extreme locations were determined by the gradient method along the S-I axis through the center of the object. The centroid positions were determined by autocenter functions. The effect of motion period on the volume sizes was also studied. Results: It was found that the extreme locations of the objects determined from the two image modalities agreed with each other satisfactorily. They were not affected by the motion period. The average difference between the two modalities in the extreme locations was 0.68% for the cube, 1.35% for the sphere, and 0.5% for the cone, respectively. The maximum difference in the

  20. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke; Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and a saliency map, postprocessed MVCT images show remarkable improvements in image contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion
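    The contrast-to-noise ratio quoted throughout this record follows the standard definition. A minimal sketch, where frame averaging stands in for the paper's BM3D/saliency pipeline (any denoiser that lowers background noise raises the CNR the same way):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio of a region of interest against background:
    |mean(ROI) - mean(background)| / std(background)."""
    roi, bg = image[roi_mask], image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

# Synthetic phantom: a low-contrast insert on a flat background.
rng = np.random.default_rng(1)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
roi_mask = truth > 0.5
bg_mask = ~roi_mask
noisy = truth + 0.8 * rng.standard_normal(truth.shape)
# Stand-in "denoising": average 16 independent noisy realizations,
# which cuts the background noise standard deviation by a factor of 4.
denoised = truth + 0.8 * rng.standard_normal((16,) + truth.shape).mean(axis=0)
```

Since the contrast (numerator) is preserved while the background noise (denominator) shrinks, the CNR of the denoised image is several times that of the noisy one, mirroring the improvements reported in the abstract.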

  1. Abdominal and pancreatic motion correlation using 4D CT, 4D transponders, and a gating belt.

    PubMed

    Betancourt, Ricardo; Zou, Wei; Plastaras, John P; Metz, James M; Teo, Boon-Keng; Kassaee, Alireza

    2013-01-01

    The correlation between pancreatic and external abdominal motion due to respiration was investigated in two patients. These studies utilized four-dimensional computed tomography (4D CT), a four-dimensional (4D) electromagnetic transponder system, and a gating belt system. One 4D CT study was performed during simulation to quantify the pancreatic motion using computed tomography images at eight breathing phases. The motion under free breathing and breath-hold was analyzed for the 4D electromagnetic transponder system and the gating belt system during treatment. A linear curve was fitted for all data sets, and correlation factors were evaluated between the 4D electromagnetic transponder system and the gating belt system data. The 4D CT study demonstrated a modest correlation between the external marker and the pancreatic motion, with R-square values larger than 0.8 for the inferior-superior (inf-sup) direction. The relative pressure from the belt gating system correlated well with the 4D electromagnetic transponder system's motion in the anterior-posterior (ant-post) and inf-sup directions, with correlation coefficients of -0.93 and 0.76, while the lateral direction only had a 0.03 correlation coefficient. Based on our limited study, external surrogates can be used as predictors of the pancreatic motion in the inf-sup and the ant-post directions. Although the correlation is low in the lateral direction, the motion there is significantly smaller. In conclusion, appropriate treatment delivery can be achieved for pancreatic cancer when an internal tracking system, such as the 4D electromagnetic transponder system, is unavailable. PMID:23652242
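    The linear fit and correlation coefficients reported here follow the standard recipe. The traces below are hypothetical sinusoids standing in for the study's belt-pressure surrogate and transponder positions:

```python
import numpy as np

# Hypothetical surrogate and target motion traces (arbitrary units).
t = np.linspace(0.0, 4.0 * np.pi, 400)
external = np.sin(t)                                # abdominal surrogate
internal = 0.8 * np.sin(t) + 0.05 * np.cos(3 * t)   # inf-sup target motion

# Linear curve fit and Pearson correlation, as in the study.
slope, intercept = np.polyfit(external, internal, 1)
r = np.corrcoef(external, internal)[0, 1]
r_square = r ** 2
```

A surrogate that tracks the target up to a gain factor and a small uncorrelated component yields a slope near that gain and an R-square near 1, which is the pattern the abstract reports for the inf-sup and ant-post directions.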

  2. Abdominal and pancreatic motion correlation using 4D CT, 4D transponders, and a gating belt.

    PubMed

    Betancourt, Ricardo; Zou, Wei; Plastaras, John P; Metz, James M; Teo, Boon-Keng; Kassaee, Alireza

    2013-05-06

    The correlation between pancreatic and external abdominal motion due to respiration was investigated in two patients. These studies utilized four-dimensional computed tomography (4D CT), a four-dimensional (4D) electromagnetic transponder system, and a gating belt system. One 4D CT study was performed during simulation to quantify the pancreatic motion using computed tomography images at eight breathing phases. The motion under free breathing and breath-hold was analyzed for the 4D electromagnetic transponder system and the gating belt system during treatment. A linear curve was fitted for all data sets, and correlation factors were evaluated between the 4D electromagnetic transponder system and the gating belt system data. The 4D CT study demonstrated a modest correlation between the external marker and the pancreatic motion, with R-square values larger than 0.8 for the inferior-superior (inf-sup) direction. The relative pressure from the belt gating system correlated well with the 4D electromagnetic transponder system's motion in the anterior-posterior (ant-post) and inf-sup directions, with correlation coefficients of -0.93 and 0.76, while the lateral direction only had a 0.03 correlation coefficient. Based on our limited study, external surrogates can be used as predictors of the pancreatic motion in the inf-sup and the ant-post directions. Although the correlation is low in the lateral direction, the motion there is significantly smaller. In conclusion, appropriate treatment delivery can be achieved for pancreatic cancer when an internal tracking system, such as the 4D electromagnetic transponder system, is unavailable.

  3. Accuracy and Utility of Deformable Image Registration in {sup 68}Ga 4D PET/CT Assessment of Pulmonary Perfusion Changes During and After Lung Radiation Therapy

    SciTech Connect

    Hardcastle, Nicholas; Hofman, Michael S.; Hicks, Rodney J.; Callahan, Jason; Kron, Tomas; MacManus, Michael P.; Ball, David L.; Jackson, Price; Siva, Shankar

    2015-09-01

    Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: {sup 68}Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms was performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, median distance to agreement between lung contours reduced modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). Distance between anatomic features reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss in lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration
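    The Dice score used above to grade contour propagation is a standard overlap measure. A toy sketch, with shifted masks standing in for rigid vs deformably registered lung contours (the shifts are illustrative, not the study's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|), 1.0 for perfect overlap."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Toy contour: a square mask; residual misalignment of 3 voxels after
# rigid registration vs 1 voxel after a hypothetical DIR.
ref = np.zeros((64, 64), dtype=bool)
ref[20:44, 20:44] = True
rigid = np.roll(ref, 3, axis=0)
deformed = np.roll(ref, 1, axis=0)
```

Smaller residual misalignment directly raises the Dice score, which is why the modest DIR improvements in the abstract appear as small but significant Dice gains.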

  4. Nanowires: Enhanced Optoelectronic Performance of a Passivated Nanowire-Based Device: Key Information from Real-Space Imaging Using 4D Electron Microscopy (Small 17/2016).

    PubMed

    Khan, Jafar I; Adhikari, Aniruddha; Sun, Jingya; Priante, Davide; Bose, Riya; Shaheen, Basamat S; Ng, Tien Khee; Zhao, Chao; Bakr, Osman M; Ooi, Boon S; Mohammed, Omar F

    2016-05-01

    Selective mapping of surface charge carrier dynamics of InGaN nanowires before and after surface passivation with octadecylthiol (ODT) is reported by O. F. Mohammed and co-workers on page 2313, using scanning ultrafast electron microscopy. In a typical experiment, the 343 nm output of the laser beam is used to excite the microscope tip to generate pulsed electrons for probing, and the 515 nm output is used as a clocking excitation pulse to initiate dynamics. Time-resolved images demonstrate clearly that carrier recombination is significantly slowed after ODT treatment, which supports the efficient removal of surface trap states. PMID:27124006

  5. ASTER and USGS EROS emergency imaging for hurricane disasters: Chapter 4D in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Duda, Kenneth A.; Abrams, Michael

    2007-01-01

    Satellite images have been extremely useful in a variety of emergency response activities, including hurricane disasters. This article discusses the collaborative efforts of the U.S. Geological Survey (USGS), the Joint United States-Japan Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Science Team, and the National Aeronautics and Space Administration (NASA) in responding to crisis situations by tasking the ASTER instrument and rapidly providing information to initial responders. Insight is provided on the characteristics of the ASTER systems, and specific details are presented regarding Hurricane Katrina support.

  6. 4D analysis of the microstructural evolution of Si-based electrodes during lithiation: Time-lapse X-ray imaging and digital volume correlation

    NASA Astrophysics Data System (ADS)

    Paz-Garcia, J. M.; Taiwo, O. O.; Tudisco, E.; Finegan, D. P.; Shearing, P. R.; Brett, D. J. L.; Hall, S. A.

    2016-07-01

    Silicon is a promising candidate to substitute or complement graphite as an anode material in Li-ion batteries, due mainly to its high energy density. However, the lithiation/delithiation processes of silicon particles are inherently related to drastic volume changes which, within a battery's physically constrained case, can induce significant deformation of the fundamental components of the battery that can eventually cause it to fail. In this work, we use non-destructive time-lapse X-ray imaging techniques to study the coupled electrochemo-mechanical phenomena in Li-ion batteries. We present X-ray computed tomography data acquired at different times during the first lithiation of custom-built silicon-lithium battery cells. Microstructural volume changes have been quantified using full 3D strain field measurements from digital volume correlation analysis. Furthermore, the extent of lithiation of silicon particles has been quantified in 3D from the grey-scale of the tomography images. Correlation of the volume expansion and grey-scale changes over the silicon-based electrode volume indicates that the process of lithiation is kinetically affected by the reaction at the Si/LixSi interface.

  7. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights.

    PubMed

    Deledalle, Charles-Alban; Denis, Loïc; Tupin, Florence

    2009-12-01

    Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on the two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.
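    For the additive-Gaussian case, the underlying NL-means average that the paper generalizes looks as follows. This is a minimal 1D sketch with a Gaussian kernel on the Euclidean patch distance; the paper replaces that distance with a likelihood-based criterion, and the patch, search, and bandwidth values here are illustrative choices:

```python
import numpy as np

def nl_means_1d(signal, patch=5, search=10, h=0.3):
    """Minimal 1D NL-means: each sample becomes a weighted average of
    samples in a search window, weighted by the similarity of their
    surrounding patches."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode="reflect")
    patches = np.stack([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                                  # similarity weights
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out

# Piecewise-constant signal with additive Gaussian noise.
rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0], 64)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = nl_means_1d(noisy)
```

Patches on opposite sides of the step are dissimilar and receive near-zero weight, so the edge is preserved while flat regions are averaged heavily.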

  8. Advances in 4D radiation therapy for managing respiration: part II - 4D treatment planning.

    PubMed

    Rosu, Mihaela; Hugo, Geoffrey D

    2012-12-01

    The development of 4D CT imaging technology made possible the creation of patient models that are reflective of respiration-induced anatomical changes by adding a temporal dimension to the conventional 3D, spatial-only, patient description. This has opened a new avenue for treatment planning and radiation delivery, aimed at creating a comprehensive 4D radiation therapy process for moving targets. Unlike other breathing motion compensation strategies (e.g. breath-hold and gating techniques), 4D radiotherapy assumes treatment delivery over the entire respiratory cycle - an added bonus for both patient comfort and treatment time efficiency. The time-dependent positional and volumetric information holds the promise for optimal, highly conformal, radiotherapy for targets experiencing movements caused by respiration, with potentially elevated dose prescriptions and therefore higher cure rates, while avoiding the uninvolved nearby structures. In this paper, the current state of the 4D treatment planning is reviewed, from theory to the established practical routine. While the fundamental principles of 4D radiotherapy are well defined, the development of a complete, robust and clinically feasible process still remains a challenge, imposed by limitations in the available treatment planning and radiation delivery systems.

  9. Advances in 4D Radiation Therapy for Managing Respiration: Part II – 4D Treatment Planning

    PubMed Central

    Rosu, Mihaela; Hugo, Geoffrey D.

    2014-01-01

    The development of 4D CT imaging technology made possible the creation of patient models that are reflective of respiration-induced anatomical changes by adding a temporal dimension to the conventional 3D, spatial-only, patient description. This has opened a new avenue for treatment planning and radiation delivery, aimed at creating a comprehensive 4D radiation therapy process for moving targets. Unlike other breathing motion compensation strategies (e.g. breath-hold and gating techniques), 4D radiotherapy assumes treatment delivery over the entire respiratory cycle – an added bonus for both patient comfort and treatment time efficiency. The time-dependent positional and volumetric information holds the promise for optimal, highly conformal, radiotherapy for targets experiencing movements caused by respiration, with potentially elevated dose prescriptions and therefore higher cure rates, while avoiding the uninvolved nearby structures. In this paper, the current state of the 4D treatment planning is reviewed, from theory to the established practical routine. While the fundamental principles of 4D radiotherapy are well defined, the development of a complete, robust and clinically feasible process still remains a challenge, imposed by limitations in the available treatment planning and radiation delivery systems. PMID:22796324

  10. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    PubMed

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with a low-kVp image can improve CNR, sensitivity, and accuracy in LIE-CT.

  11. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    PubMed

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with a low-kVp image can improve CNR, sensitivity, and accuracy in LIE-CT. PMID:26202159

  12. Fractional Diffusion, Low Exponent Lévy Stable Laws, and ‘Slow Motion’ Denoising of Helium Ion Microscope Nanoscale Imagery

    PubMed Central

    Carasso, Alfred S.; Vladár, András E.

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising. PMID:26900518
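    The fractional diffusion smoothing can be sketched directly from its Fourier-multiplier form: evolving u_t = -(-Δ)^{β/2} u for time t multiplies each Fourier coefficient by exp(-t |ω|^β). A minimal 2D version (grid frequencies from np.fft.fftfreq; the t and β values are illustrative, not the paper's settings):

```python
import numpy as np

def fractional_diffusion_smooth(image, t=5.0, beta=1.0):
    """'Slow motion' smoothing by solving a linear fractional diffusion
    equation forward in time via FFT: each Fourier coefficient is
    multiplied by exp(-t * |omega|**beta). beta=2 is ordinary heat-equation
    blur; beta<2 (low-exponent Levy stable laws) damps fine scales more
    gently, which is what preserves delicate surface texture."""
    ky = np.fft.fftfreq(image.shape[0])[:, None]
    kx = np.fft.fftfreq(image.shape[1])[None, :]
    omega = np.sqrt(kx ** 2 + ky ** 2)
    kernel = np.exp(-t * omega ** beta)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * kernel))

rng = np.random.default_rng(6)
noisy = rng.standard_normal((64, 64))
smoothed = fractional_diffusion_smooth(noisy)
```

The multiplier equals 1 at zero frequency, so the image mean is preserved exactly, while high-frequency noise is attenuated; increasing t slows the "motion" of the evolution further.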

  13. Nonlocal transform-domain filter for volumetric data denoising and reconstruction.

    PubMed

    Maggioni, Matteo; Katkovnik, Vladimir; Egiazarian, Karen; Foi, Alessandro

    2013-01-01

    We present an extension of the BM3D filter to volumetric data. The proposed algorithm, BM4D, implements the grouping and collaborative filtering paradigm, where mutually similar d-dimensional patches are stacked together in a (d+1)-dimensional array and jointly filtered in transform domain. While in BM3D the basic data patches are blocks of pixels, in BM4D we utilize cubes of voxels, which are stacked into a 4-D "group." The 4-D transform applied on the group simultaneously exploits the local correlation present among voxels in each cube and the nonlocal correlation between the corresponding voxels of different cubes. Thus, the spectrum of the group is highly sparse, leading to very effective separation of signal and noise through coefficient shrinkage. After inverse transformation, we obtain estimates of each grouped cube, which are then adaptively aggregated at their original locations. We evaluate the algorithm on denoising of volumetric data corrupted by Gaussian and Rician noise, as well as on reconstruction of volumetric phantom data with non-zero phase from noisy and incomplete Fourier-domain (k-space) measurements. Experimental results demonstrate the state-of-the-art denoising performance of BM4D, and its effectiveness when exploited as a regularizer in volumetric data reconstruction. PMID:22868570
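    The collaborative-filtering step can be sketched as a joint transform plus hard thresholding of a group of mutually similar cubes. The sketch below substitutes an FFT for the separable transforms BM4D actually uses, and a toy constant cube for real matched blocks; the threshold constant is an illustrative choice:

```python
import numpy as np

def collaborative_filter(group, sigma, k=2.7):
    """Jointly transform a stack of similar cubes and hard-threshold the
    spectrum, exploiting both local (within-cube) and nonlocal
    (between-cube) correlation: the signal concentrates in few large
    coefficients, while noise spreads over all of them."""
    spectrum = np.fft.fftn(group)
    # Unnormalized FFT coefficients of white noise with std sigma have
    # magnitude on the order of sigma * sqrt(N).
    thresh = k * sigma * np.sqrt(group.size)
    spectrum[np.abs(spectrum) < thresh] = 0.0   # shrink noise-only coefficients
    return np.real(np.fft.ifftn(spectrum))

# Toy group: 8 matched 4x4x4 cubes of a flat signal plus Gaussian noise.
rng = np.random.default_rng(3)
sigma = 0.1
cube = np.ones((4, 4, 4))
group = np.stack([cube] * 8) + sigma * rng.standard_normal((8, 4, 4, 4))
filtered = collaborative_filter(group, sigma)
```

Because the clean group is highly correlated, its spectrum is sparse (here a single DC coefficient), so thresholding removes nearly all the noise energy while leaving the signal intact.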

  14. MRI noise estimation and denoising using non-local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Buades, Antonio

    2015-05-01

    This paper proposes a novel method for MRI denoising that exploits both the sparseness and self-similarity properties of MR images. The proposed method is a two-stage approach that first filters the noisy image using a non-local PCA thresholding strategy, automatically estimating the local noise level present in the image, and second uses this filtered image as a guide image within a rotationally invariant non-local means filter. The proposed method internally estimates the amount of local noise present in the images, which enables applying it automatically to images with spatially varying noise levels, and also locally corrects the bias induced by Rician noise. The proposed approach has been compared with related state-of-the-art methods, showing competitive results in all the studied cases. PMID:25725303
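    The first-stage PCA thresholding can be sketched on a matrix of similar patches. The noise-floor test below, (2σ)², is an illustrative choice standing in for the paper's automatic local noise estimate:

```python
import numpy as np

def pca_threshold(patches, sigma):
    """Decompose a (num_patches x patch_dim) matrix of similar patches
    with PCA and zero out components whose variance does not clear the
    noise floor, keeping only the low-rank signal subspace."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > (2.0 * sigma) ** 2        # simple noise-floor test
    coeffs = centered @ evecs
    coeffs[:, ~keep] = 0.0                   # discard noise-level components
    return coeffs @ evecs.T + mean

# Toy data: 100 noisy copies of one underlying patch shape (rank-1 signal).
rng = np.random.default_rng(4)
sigma = 0.1
basis = rng.random(16)
weights = rng.random(100)[:, None]
clean = weights * basis
patches = clean + sigma * rng.standard_normal((100, 16))
denoised = pca_threshold(patches, sigma)
```

Self-similar patches span a low-dimensional subspace, so only a few eigenvalues rise above the noise floor; projecting onto them removes most of the noise energy.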

  15. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

    The scattering noise artifact resulting from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases image quality and lessens the accuracy of the diagnosis. To improve the image quality of low-dose CT imaging, statistical filtering is effective in noise reduction. However, performing image filtering and enhancement throughout the entire reconstruction process is challenging because of its high computational cost. The standard reconstruction algorithm for CBCT data is filtered back-projection, which for a volume of 512×512×512 takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a platform well suited to accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU: we utilize the compute unified device architecture (CUDA) platform and implement CBCT reconstruction based on CUDA. Finally, we evaluate our implementation on clinical tooth data sets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.

  16. Observer Performance in the Detection and Classification of Malignant Hepatic Nodules and Masses with CT Image-Space Denoising and Iterative Reconstruction

    PubMed Central

    Yu, Lifeng; Li, Zhoubo; Manduca, Armando; Blezek, Daniel J.; Hough, David M.; Venkatesh, Sudhakar K.; Brickner, Gregory C.; Cernigliaro, Joseph C.; Hara, Amy K.; Fidler, Jeff L.; Lake, David S.; Shiung, Maria; Lewis, David; Leng, Shuai; Augustine, Kurt E.; Carter, Rickey E.; Holmes, David R.; McCollough, Cynthia H.

    2015-01-01

    Purpose To determine if lower-dose computed tomographic (CT) scans obtained with adaptive image-based noise reduction (adaptive nonlocal means [ANLM]) or iterative reconstruction (sinogram-affirmed iterative reconstruction [SAFIRE]) result in reduced observer performance in the detection of malignant hepatic nodules and masses compared with routine-dose scans obtained with filtered back projection (FBP). Materials and Methods This study was approved by the institutional review board and was compliant with HIPAA. Informed consent was obtained from patients for the retrospective use of medical records for research purposes. CT projection data from 33 abdominal and 27 liver or pancreas CT examinations were collected (median volume CT dose index, 13.8 and 24.0 mGy, respectively). Hepatic malignancy was defined by progression or regression or with histopathologic findings. Lower-dose data were created by using a validated noise insertion method (10.4 mGy for abdominal CT and 14.6 mGy for liver or pancreas CT) and images reconstructed with FBP, ANLM, and SAFIRE. Four readers evaluated routine-dose FBP images and all lower-dose images, circumscribing liver lesions and selecting diagnosis. The jackknife free-response receiver operating characteristic figure of merit (FOM) was calculated on a per–malignant nodule or per-mass basis. Noninferiority was defined by the lower limit of the 95% confidence interval (CI) of the difference between lower-dose and routine-dose FOMs being less than −0.10. Results Twenty-nine patients had 62 malignant hepatic nodules and masses. Estimated FOM differences between lower-dose FBP and lower-dose ANLM versus routine-dose FBP were noninferior (difference: −0.041 [95% CI: −0.090, 0.009] and −0.003 [95% CI: −0.052, 0.047], respectively). In patients with dedicated liver scans, lower-dose ANLM images were noninferior (difference: +0.015 [95% CI: −0.077, 0.106]), whereas lower-dose FBP images were not (difference −0.049 [95% CI:

  17. 4-D reconstruction for dynamic fluorescence diffuse optical tomography.

    PubMed

    Liu, Xin; Zhang, Bin; Luo, Jianwen; Bai, Jing

    2012-11-01

    Dynamic fluorescence diffuse optical tomography (FDOT) is important for the research of drug delivery, medical diagnosis and treatment. Conventionally, dynamic tomographic images are reconstructed frame by frame, independently. This approach fails to account for the temporal correlations in measurement data. Ideally, the entire image sequence should be considered as a whole and a four-dimensional (4-D) reconstruction should be performed. However, the fully 4-D reconstruction is computationally intensive. In this paper, we propose a new 4-D reconstruction approach for dynamic FDOT, which is achieved by applying a temporal Karhunen-Loève (KL) transformation to the imaging equation. By taking advantage of the decorrelation and compression properties of the KL transformation, the complex 4-D optical reconstruction problem is greatly simplified. To evaluate the performance of the method, simulation, phantom, and in vivo experiments (N=7) are performed on a hybrid FDOT/x-ray computed tomography imaging system. The experimental results indicate that the images reconstructed by the KL method are of good quality. Additionally, by discarding high-order KL components, the computation time of the fully 4-D reconstruction can be greatly reduced relative to the conventional frame-by-frame reconstruction.
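The decorrelation-and-compression step at the heart of the approach can be sketched with a temporal KL (PCA) transform. This is a minimal illustration under the assumption that frames are available as flattened vectors; the paper applies the transform to the imaging equation itself, which is not reproduced here.

```python
import numpy as np

def kl_compress(frames, n_keep):
    """Temporal Karhunen-Loeve transform of an image sequence.

    frames : (T, N) array, one flattened frame per row
    n_keep : number of temporal KL components to retain
    """
    cov = frames @ frames.T / frames.shape[1]   # temporal covariance (T, T)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    basis = evecs[:, ::-1][:, :n_keep]          # top n_keep temporal modes
    coeffs = basis.T @ frames                   # (n_keep, N) KL coefficients
    return basis, coeffs

def kl_reconstruct(basis, coeffs):
    """Expand KL coefficients back into an image sequence."""
    return basis @ coeffs
```

Because the KL modes decorrelate the time axis, each retained component can be reconstructed independently, which is what collapses the 4-D problem into a handful of smaller ones.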

  18. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist so as to convert the Poisson noise into an additive-independent identically distributed. Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al took this route, proposing a patch-based exponential image representation model based on Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling to the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with boot-strapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieving state-of-the-art results in cases of low SNR. PMID:25312930
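One of the classical variance-stabilizing transformations alluded to in the high-SNR case is the Anscombe transform, which maps Poisson counts to data with approximately unit-variance Gaussian noise when counts are large. A minimal sketch (the simple algebraic inverse shown is biased at low counts; exact unbiased inverses exist but are more involved):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson counts: the output noise
    is approximately unit-variance Gaussian at high count rates."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # plain algebraic inverse (biased for small counts)
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After stabilization, any Gaussian denoiser can be applied in the transformed domain; the breakdown of this approximation at low counts is exactly the low-SNR regime the abstract targets.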

  19. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    PubMed Central

    Zhang, Chu; Ye, Hui; Liu, Fei; He, Yong; Kong, Wenwen; Sheng, Kuichuan

    2016-01-01

    Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noises of the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance, with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results of this study should help in developing an on-line hyperspectral imaging system for monitoring the biomass energy production process. PMID:26901202

  20. Texture preservation in de-noising UAV surveillance video through multi-frame sampling

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.

    2009-02-01

    Image de-noising is a widely-used technology in modern real-world surveillance systems. Methods can seldom do both de-noising and texture preservation very well without a direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among the different pixels. This technique results in good preservation of the textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find the corresponding region between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.

  1. Motion artifacts occurring at the lung/diaphragm interface using 4D CT attenuation correction of 4D PET scans.

    PubMed

    Killoran, Joseph H; Gerbaudo, Victor H; Mamede, Marcelo; Ionascu, Dan; Park, Sang-June; Berbeco, Ross

    2011-11-15

    For PET/CT, fast CT acquisition time can lead to errors in attenuation correction, particularly at the lung/diaphragm interface. Gated 4D PET can reduce motion artifacts, though residual artifacts may persist depending on the CT dataset used for attenuation correction. We performed phantom studies to evaluate 4D PET images of targets near a density interface using three different methods for attenuation correction: a single 3D CT (3D CTAC), an averaged 4D CT (CINE CTAC), and a fully phase matched 4D CT (4D CTAC). A phantom was designed with two density regions corresponding to diaphragm and lung. An 8 mL sphere phantom loaded with 18F-FDG was used to represent a lung tumor and background FDG included at an 8:1 ratio. Motion patterns of sin(x) and sin⁴(x) were used for dynamic studies. Image data was acquired using a GE Discovery DVCT-PET/CT scanner. Attenuation correction methods were compared based on normalized recovery coefficient (NRC), as well as a novel quantity "fixed activity volume" (FAV) introduced in our report. Image metrics were compared to those determined from a 3D PET scan with no motion present (3D STATIC). Values of FAV and NRC showed significant variation over the motion cycle when corrected by 3D CTAC images. 4D CTAC- and CINE CTAC-corrected PET images reduced these motion artifacts. The amount of artifact reduction is greater when the target is surrounded by lower density material and when motion was based on sin⁴(x). 4D CTAC reduced artifacts more than CINE CTAC for most scenarios. For a target surrounded by water equivalent material, there was no advantage to 4D CTAC over CINE CTAC when using the sin(x) motion pattern. Attenuation correction using both 4D CTAC or CINE CTAC can reduce motion artifacts in regions that include a tissue interface such as the lung/diaphragm border. 4D CTAC is more effective than CINE CTAC at reducing artifacts in some, but not all, scenarios.

  2. Application of wavelet analysis in laser Doppler vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    Numerous experiments show that, owing to external disturbances, excessive roughness of the measured surface, and other factors, the vibration signal detected by the laser Doppler technique contains complex information and has a low SNR; as a result, the Doppler frequency shift cannot be measured and the Doppler phase cannot be demodulated. This paper first analyzes the laser Doppler signal model and its features in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the modulus maxima wavelet denoising method, the spatial correlation denoising method, and the wavelet threshold denoising method. We apply the three methods to experimental vibration signals in MATLAB simulations. The processing results show that the wavelet modulus maxima method has an advantage at low SNR for laser Doppler vibration signals mixed with white noise and containing many singularities; the spatial correlation method is better suited to laser Doppler vibration signals whose noise level is not very high, and has better edge reconstruction capacity; and the wavelet threshold method has a wide range of adaptability, good computational efficiency, and a good denoising effect. Specifically, in the wavelet threshold denoising method, we estimate the original noise variance by the spatial correlation method, use an adaptive threshold, and make certain amendments in practice. Tests show that, compared with conventional threshold denoising, this method is more effective at extracting the features of the laser Doppler vibration signal.
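The wavelet threshold approach discussed above can be illustrated with a one-level Haar decomposition and the universal threshold sigma·sqrt(2 ln N). This is a deliberately minimal sketch: the paper uses multi-level decompositions and estimates the noise variance by the spatial-correlation method, neither of which is reproduced here.

```python
import numpy as np

def haar_denoise(signal, sigma):
    """One-level Haar transform + soft thresholding with the universal
    threshold sigma * sqrt(2 ln N)."""
    x = np.asarray(signal, dtype=float)
    n = len(x) // 2 * 2                       # even length for pairing
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)    # detail coefficients
    thr = sigma * np.sqrt(2 * np.log(n))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)   # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)            # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

For a slowly varying signal the detail coefficients carry almost pure noise, so thresholding them removes the noise in that band while leaving the signal largely intact.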

  3. 4D-DSA and 4D fluoroscopy: preliminary implementation

    NASA Astrophysics Data System (ADS)

    Mistretta, C. A.; Oberstar, E.; Davis, B.; Brodsky, E.; Strother, C. M.

    2010-04-01

    We have described methods that allow highly accelerated MRI using under-sampled acquisitions and constrained reconstruction. One is a hybrid acquisition involving the constrained reconstruction of time-dependent information obtained from a separate scan of longer duration. We have developed reconstruction algorithms for DSA that allow use of a single injection to provide the temporal data required for flow visualization and the steady-state data required for construction of a 3D-DSA vascular volume. The result is time-resolved 3D volumes with a typical resolution of 512³ at frame rates of 20-30 fps. Full manipulation of these images is possible during each stage of vascular filling, thereby allowing simplified interpretation of vascular dynamics. For intravenous angiography, this time-resolved 3D capability overcomes the vessel overlap problem that greatly limited the use of conventional intravenous 2D-DSA. Following further hardware development, it will also be possible to rotate fluoroscopic volumes for use as roadmaps that can be viewed at arbitrary angles without a need for gantry rotation. The most precise implementation of this capability requires the availability of biplane fluoroscopy data. Since the reconstruction of 3D volumes presently suppresses the contrast in the soft tissue, the possibility of using these techniques to derive complete indications of perfusion deficits based on cerebral blood volume (CBV), mean transit time (MTT) and time to peak (TTP) parameters requires further investigation. Using MATLAB post-processing, successful studies in animals and humans done in conjunction with both intravenous and intra-arterial injections have been completed. Real-time implementation is in progress.

  4. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive white Gaussian noise. Simulation results show that the proposed method performs better in denoising and in QRS detection than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  5. HARDI denoising using nonlocal means on S²

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the denoising of HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, along with experimental results on synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.
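The abstract's filter operates on the sphere of gradient directions, which is beyond a short sketch, but the underlying non-local means principle, weights driven by patch similarity rather than spatial proximity, looks like this in its plainest 1-D form. This is illustrative only; the patch size and smoothing parameter `h` are arbitrary choices, not the paper's.

```python
import numpy as np

def nlm_denoise_1d(x, patch=3, h=0.5):
    """Plain non-local means on a 1-D signal: each sample is replaced by a
    weighted average of all samples, weighted by patch similarity."""
    n = len(x)
    pad = patch // 2
    xp = np.pad(x, pad, mode='reflect')
    patches = np.stack([xp[i:i + patch] for i in range(n)])   # (n, patch)
    # squared patch distances between every pair of samples
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / (h * h))
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per sample
    return w @ x
```

Samples sitting on the same structure receive large mutual weights regardless of distance, which is why NLM averages across repeated structure instead of blurring across edges.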

  6. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable fuze operation. Based on the good denoising characteristics of the wavelet packet transform (WPT), this paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in MATLAB. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with high precision and little distortion, thereby advancing the reliability of torpedo fuze operation.

  7. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  8. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. In contrast to existing approaches, the algorithm does not rely on simple averaging of multiple image frames or on denoising the final averaged image. Instead, it uses wavelet decompositions of the single frames for local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged, and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  9. Fuzzy logic recursive change detection for tracking and denoising of video sequences

    NASA Astrophysics Data System (ADS)

    Zlokolica, Vladimir; De Geyter, Matthias; Schulte, Stefan; Pizurica, Aleksandra; Philips, Wilfried; Kerre, Etienne

    2005-03-01

    In this paper we propose a fuzzy logic recursive scheme for motion detection and temporal filtering that can deal with Gaussian noise and unsteady illumination conditions in both the temporal and spatial directions. Our focus is on applications concerning tracking and denoising of image sequences. We process an input noisy sequence with fuzzy logic motion detection in order to determine the degree of motion confidence. The proposed motion detector combines the membership degrees appropriately using defined fuzzy rules, where the membership degree of motion for each pixel in a 2D sliding window is determined by the proposed membership function. Both the fuzzy membership function and the fuzzy rules are defined in such a way that the performance of the motion detector is optimized in terms of its robustness to noise and unsteady lighting conditions. We simultaneously perform tracking and recursive adaptive temporal filtering, where the amount of filtering is inversely proportional to the confidence in the existence of motion. Finally, temporally filtered frames are further processed by the proposed spatial filter in order to obtain the denoised image sequence. The main contribution of this paper is the novel, robust fuzzy recursive scheme for motion detection and temporal filtering. We evaluate the proposed motion detection algorithm using two criteria: robustness to noise and to changing illumination conditions, and motion blur in temporal recursive denoising. Additionally, we make comparisons in terms of noise reduction with other state-of-the-art video denoising techniques.
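The motion-adaptive recursive filtering principle, smooth heavily where motion confidence is low and pass the new frame through where it is high, can be sketched as follows. The soft motion-confidence function and the 0.2 floor on the update weight are crude stand-ins of our own for the paper's fuzzy motion detector.

```python
import numpy as np

def recursive_temporal_filter(frames, sigma=0.1, k=4.0):
    """Motion-adaptive recursive temporal filtering: the amount of temporal
    smoothing is inversely proportional to a per-pixel motion confidence
    derived from the frame difference (an assumed, non-fuzzy detector)."""
    out = [frames[0].astype(float)]
    for f in frames[1:]:
        diff = np.abs(f - out[-1])
        # motion confidence: ~0 for static pixels, ~1 for moving ones
        motion = 1.0 - np.exp(-(diff / (k * sigma)) ** 2)
        alpha = 0.2 + 0.8 * motion   # strong smoothing only where static
        out.append(alpha * f + (1 - alpha) * out[-1])
    return np.stack(out)
```

Where the scene is static the filter behaves like a long temporal average and suppresses noise; where motion is detected it follows the incoming frame, avoiding motion blur.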

  10. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    SciTech Connect

    Kostou, T; Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-06-15

    Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine the dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, of 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models differed by 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target-organs were calculated for each isotope. The source-organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0–30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in the total mass), and thus accurate definition of the organ mass is a crucial parameter for self-absorbed S-value calculation. Our goal is to extend the study for accurate estimations in small animal imaging, whereas it is known

  11. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noises, which degrade many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, the noise is signal-dependent, and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in wavelet-domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm shows the following improvements compared with Wirestam's method: (1) we introduce TI property into the Wiener-like filtering in wavelet-domain to suppress artifacts caused by translations of the signal; (2) we integrate one Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filtering is used to filter the original noisy image in which the noise obeys Gaussian distribution and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms others in terms of both efficiency and robustness. PMID:27238055
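The translation-invariant (TI) ingredient of improvement (1) is commonly realized by cycle spinning: shift the signal, denoise, shift back, and average over shifts, which suppresses the shift-dependent artifacts of a decimated wavelet transform. A minimal sketch with a one-level Haar hard-threshold as the base denoiser, which is our choice for illustration; the paper's filter is Wiener-like with SURE-adaptive thresholds.

```python
import numpy as np

def haar_hard(x, thr):
    """One-level Haar decomposition with hard thresholding of the details
    (expects an even-length signal)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d = np.where(np.abs(d) < thr, 0.0, d)   # hard threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin(x, denoise, shifts):
    """Translation-invariant denoising: average the denoiser's output over
    circular shifts of the input."""
    acc = np.zeros(len(x))
    for s in shifts:
        acc += np.roll(denoise(np.roll(x, s)), -s)
    return acc / len(shifts)
```

For a one-level Haar transform, shifts of 0 and 1 already cover all distinct even/odd pairings; deeper decompositions benefit from more shifts.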

  13. The 4-D approach to visual control of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dickmanns, Ernst D.

    1994-01-01

    Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and the laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models as invariants for object recognition. Situation assessment and long-term predictions were enabled through maintenance of a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.

  14. Non-local mean denoising in diffusion tensor space

    PubMed Central

    SU, BAIHAI; LIU, QIANG; CHEN, JIE; WU, XI

    2014-01-01

    The aim of the present study was to introduce a novel non-local mean (NLM) method to denoise diffusion tensor imaging (DTI) data in the tensor space. Compared with the original NLM method, which uses intensity similarity to weigh the voxels, the proposed method weighs the voxels using tensor similarity measures in the diffusion tensor space. Euclidean distance with rotational invariance, and Riemannian distance and Log-Euclidean distance with affine invariance, were implemented to comprehensively compare the geometric and orientation features of the diffusion tensors. The accuracy and efficacy of the proposed novel NLM method using these three similarity measures in DTI space, along with an unbiased novel NLM in diffusion-weighted image space, were compared quantitatively and qualitatively in the present study. PMID:25009599
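Of the three tensor similarity measures compared, the Log-Euclidean distance is the easiest to sketch: map each symmetric positive-definite tensor through the matrix logarithm and take the Frobenius distance there. A minimal sketch; the full Riemannian affine-invariant distance instead uses the matrix log of A^(-1/2) B A^(-1/2) and is costlier to evaluate.

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite (SPD) tensor,
    computed via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T   # v @ diag(log w) @ v.T

def log_euclidean_distance(t1, t2):
    """Log-Euclidean distance between two diffusion tensors: the Frobenius
    norm of the difference of their matrix logarithms."""
    d = spd_log(t1) - spd_log(t2)
    return float(np.sqrt(np.sum(d * d)))
```

Because the metric lives in the log domain, it never leaves the space of valid tensors under averaging, which is what makes it attractive for weighting and interpolating diffusion tensors.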

  15. 4D embryonic cardiography using gated optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Jenkins, M. W.; Rothenberg, F.; Roy, D.; Nikolski, V. P.; Hu, Z.; Watanabe, M.; Wilson, D. L.; Efimov, I. R.; Rollins, A. M.

    2006-01-01

    Simultaneous imaging of very early embryonic heart structure and function has technical limitations of spatial and temporal resolution. We have developed a gated technique using optical coherence tomography (OCT) that can rapidly image beating embryonic hearts in four-dimensions (4D), at high spatial resolution (10-15 μm), and with a depth penetration of 1.5 - 2.0 mm that is suitable for the study of early embryonic hearts. We acquired data from paced, excised, embryonic chicken and mouse hearts using gated sampling and employed image processing techniques to visualize the hearts in 4D and measure physiologic parameters such as cardiac volume, ejection fraction, and wall thickness. This technique is being developed to longitudinally investigate the physiology of intact embryonic hearts and events that lead to congenital heart defects.
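The physiologic parameters mentioned above are simple functions of the segmented chamber volumes; for instance, stroke volume and ejection fraction follow directly from the end-diastolic and end-systolic volumes. These are textbook definitions only; the paper's actual volumes come from the segmented 4D OCT data.

```python
def stroke_volume(edv, esv):
    """Stroke volume: blood ejected per beat, from end-diastolic (EDV)
    and end-systolic (ESV) volumes (same units as the inputs)."""
    return edv - esv

def ejection_fraction(edv, esv):
    """Ejection fraction: fraction of the end-diastolic volume ejected
    per beat."""
    return (edv - esv) / edv
```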

  16. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient, making many repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps, the position and the water equivalent range of the dose grid points are adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input, were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on

  17. Adaptive non-local means filtering based on local noise level for CT denoising

    NASA Astrophysics Data System (ADS)

    Li, Zhoubo; Yu, Lifeng; Trzasko, Joshua D.; Fletcher, Joel G.; McCollough, Cynthia H.; Manduca, Armando

    2012-03-01

    Radiation dose from CT scans is an increasing health concern in the practice of radiology. Higher dose scans can produce clearer images with high diagnostic quality, but may increase the potential risk of radiation-induced cancer or other side effects. Lowering radiation dose alone generally produces a noisier image and may degrade diagnostic performance. Recently, CT dose reduction based on non-local means (NLM) filtering for noise reduction has yielded promising results. However, traditional NLM denoising operates under the assumption that image noise is spatially uniform, while in CT images the noise level varies significantly within and across slices. Therefore, applying NLM filtering to CT data using a global filtering strength cannot achieve optimal denoising performance. In this work, we have developed a technique for efficiently estimating the local noise level for CT images, and have modified the NLM algorithm to adapt to local variations in noise level. The local noise level estimation technique matches the true noise distribution determined from multiple repetitive scans of a phantom object very well. The modified NLM algorithm provides more effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with the clinical workflow.
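    The adaptive idea, a filtering strength that follows a local noise estimate rather than a single global value, can be sketched in one dimension. This is a toy illustration under assumed parameter names (`sigma_map`, `search`, `patch`, `k`), not the authors' CT implementation:

```python
import numpy as np

def adaptive_nlm_1d(signal, sigma_map, search=10, patch=3, k=1.0):
    """Toy 1D non-local means where the filtering strength h at each sample
    scales with a local noise estimate sigma_map[i] (the adaptive idea from
    the paper; all parameter names here are illustrative)."""
    n = len(signal)
    padded = np.pad(signal, patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        h = k * sigma_map[i] + 1e-12          # locally adapted strength
        p_i = padded[i:i + 2 * patch + 1]     # patch centered on sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights = np.empty(hi - lo)
        for idx, j in enumerate(range(lo, hi)):
            p_j = padded[j:j + 2 * patch + 1]
            d2 = np.mean((p_i - p_j) ** 2)    # patch dissimilarity
            weights[idx] = np.exp(-d2 / (h ** 2))
        weights /= weights.sum()
        out[i] = np.dot(weights, signal[lo:hi])
    return out
```

In low-noise regions h is small, so only near-identical patches contribute and detail is preserved; in high-noise regions h grows and the averaging becomes more aggressive.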

  18. Los Alamos National Laboratory 4D Database

    SciTech Connect

    Atencio, Julian J.

    2014-05-02

    4D is an integrated development platform - a single product comprised of the components you need to create and distribute professional applications. You get a graphical design environment, SQL database, a programming language, integrated PHP execution, HTTP server, application server, executable generator, and much more. 4D offers multi-platform development and deployment, meaning whatever you create on a Mac can be used on Windows, and vice-versa. Beyond productive development, 4D is renowned for its great flexibility in maintenance and modification of existing applications, and its extreme ease of implementation in its numerous deployment options. Your professional application can be put into production more quickly, at a lower cost, and will always be instantly scalable. 4D makes it easy, whether you're looking to create a classic desktop application, a client-server system, a distributed solution for Web or mobile clients - or all of the above!

  19. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level, and related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two "state of the art" denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters where the denoising algorithms are capable of recovering the quality from the DBT images acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that BM3D algorithm achieved better noise adjustment (mean difference in peak signal to noise ratio < 0.1dB) and less blurring (mean difference in image sharpness ~ 6%) than the NLM for the projections acquired with lower radiation doses.
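    One of the objective metrics quoted above, peak signal-to-noise ratio, is simple to compute; the sketch below is a standard formulation, not code from the study:

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB between a reference image (e.g. a
    standard-dose projection) and a test image (e.g. a denoised low-dose
    projection). If peak is not given, the reference maximum is used."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float('inf')
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```

A "mean difference in PSNR < 0.1 dB", as reported for BM3D, means the denoised low-dose projections and the standard-dose projections score nearly identically under this metric.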

  20. Phosphodiesterase4D (PDE4D)--A risk factor for atrial fibrillation and stroke?

    PubMed

    Jørgensen, Carina; Yasmeen, Saiqa; Iversen, Helle K; Kruuse, Christina

    2015-12-15

    Mutations in the gene encoding the phosphodiesterase 4D (PDE4D) enzyme are associated with ischemic stroke; however, the functional implications of such mutations are not well understood. PDE4D is part of a complex protein family modulating intracellular signalling by cyclic nucleotides. The PDE4 family includes subtypes A-D, all of which show unique intracellular, cellular and tissue distribution. PDE4D is the major subtype expressed in human atrial myocytes and is involved in the pathophysiology of arrhythmias, such as atrial fibrillation. The PDE4D enzyme hydrolyses cyclic adenosine monophosphate (cAMP). Though diverging results are reported, several population based studies describe association of various PDE4D single nucleotide polymorphisms (SNP) with cardio-embolic stroke in particular. Functionally, a down regulation of PDE4D variants has been reported in stroke patients. The anti-inflammatory and vasodilator properties of PDE4 inhibitors make them suitable for treatment of stroke and cardiovascular disease. PDE4D has recently been suggested as a factor in atrial fibrillation. This review summarizes the possible function of PDE4D in the brain, heart, and vasculature. Further, association of the described SNPs, in particular with cardioembolic stroke, is reviewed. Current findings on PDE4D mutations suggest that their functional effects involve increased cardiac risk as well as augmented risk of atrial fibrillation. PMID:26671126

  1. 4D electron microscopy: principles and applications.

    PubMed

    Flannigan, David J; Zewail, Ahmed H

    2012-10-16

    achievable with short intense pulses containing a large number of electrons, however, are limited to tens of nanometers and nanoseconds, respectively. This is because Coulomb repulsion is significant in such a pulse, and the electrons spread in space and time, thus limiting the beam coherence. It is therefore not possible to image the ultrafast elementary dynamics of complex transformations. The challenge was to retain the high spatial resolution of a conventional TEM while simultaneously enabling the temporal resolution required to visualize atomic-scale motions. In this Account, we discuss the development of four-dimensional ultrafast electron microscopy (4D UEM) and summarize techniques and applications that illustrate the power of the approach. In UEM, images are obtained either stroboscopically with coherent single-electron packets or with a single electron bunch. Coulomb repulsion is absent under the single-electron condition, thus permitting imaging, diffraction, and spectroscopy, all with high spatiotemporal resolution, the atomic scale (sub-nanometer and femtosecond). The time resolution is limited only by the laser pulse duration and energy carried by the electron packets; the CCD camera has no bearing on the temporal resolution. In the regime of single pulses of electrons, the temporal resolution of picoseconds can be attained when hundreds of electrons are in the bunch. The applications given here are selected to highlight phenomena of different length and time scales, from atomic motions during structural dynamics to phase transitions and nanomechanical oscillations. We conclude with a brief discussion of emerging methods, which include scanning ultrafast electron microscopy (S-UEM), scanning transmission ultrafast electron microscopy (ST-UEM) with convergent beams, and time-resolved imaging of biological structures at ambient conditions with environmental cells.

  2. Impact of incorporating visual biofeedback in 4D MRI.

    PubMed

    To, David T; Kim, Joshua P; Price, Ryan G; Chetty, Indrin J; Glide-Hurst, Carri K

    2016-05-08

    Precise radiation therapy (RT) for abdominal lesions is complicated by respiratory motion and suboptimal soft tissue contrast in 4D CT. 4D MRI offers improved contrast, although long scan times and irregular breathing patterns can be limiting. To address this, visual biofeedback (VBF) was introduced into 4D MRI. Ten volunteers were consented to an IRB-approved protocol. Prospective respiratory-triggered, T2-weighted, coronal 4D MRIs were acquired on an open 1.0T MR-SIM. VBF was integrated using an MR-compatible interactive breath-hold control system. Subjects visually monitored their breathing patterns to stay within predetermined tolerances. 4D MRIs were acquired with and without VBF for 2- and 8-phase acquisitions. Normalized respiratory waveforms were evaluated for scan time, duty cycle (programmed/acquisition time), breathing period, and breathing regularity (end-inhale coefficient of variation, EI-COV). Three reviewers performed image quality assessment to compare artifacts with and without VBF. Respiration-induced liver motion was calculated via centroid difference analysis of end-exhale (EE) and EI liver contours. Incorporating VBF reduced 2-phase acquisition time (4.7 ± 1.0 and 5.4 ± 1.5 min with and without VBF, respectively) while reducing EI-COV by 43.8% ± 16.6%. For 8-phase acquisitions, VBF reduced acquisition time by 1.9 ± 1.6 min and EI-COVs by 38.8% ± 25.7% despite breathing rate remaining similar (11.1 ± 3.8 breaths/min with vs. 10.5 ± 2.9 without). Using VBF yielded higher duty cycles than unguided free breathing (34.4% ± 5.8% vs. 28.1% ± 6.6%, respectively). Image grading showed that out of 40 paired evaluations, 20 cases had equivalent and 17 had improved image quality scores with VBF, particularly for mid-exhale and EI. Increased liver excursion was observed with VBF, where superior-inferior, anterior-posterior, and left-right EE-EI displacements were 14.1± 5.8, 4.9 ± 2.1, and 1.5 ± 1.0 mm, respectively, with VBF compared to 11.9

  3. Shadow-driven 4D haptic visualization.

    PubMed

    Zhang, Hui; Hanson, Andrew

    2007-01-01

    Just as we can work with two-dimensional floor plans to communicate 3D architectural design, we can exploit reduced-dimension shadows to manipulate the higher-dimensional objects generating the shadows. In particular, by taking advantage of physically reactive 3D shadow-space controllers, we can transform the task of interacting with 4D objects to a new level of physical reality. We begin with a teaching tool that uses 2D knot diagrams to manipulate the geometry of 3D mathematical knots via their projections; our unique 2D haptic interface allows the user to become familiar with sketching, editing, exploration, and manipulation of 3D knots rendered as projected images on a 2D shadow space. By combining graphics and collision-sensing haptics, we can enhance the 2D shadow-driven editing protocol to successfully leverage 2D pen-and-paper or blackboard skills. Building on the reduced-dimension 2D editing tool for manipulating 3D shapes, we develop the natural analogy to produce a reduced-dimension 3D tool for manipulating 4D shapes. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the experience accessible to human beings. As far as we are aware, this paper reports the first interactive system with force-feedback that provides "4D haptic visualization" permitting the user to model and interact with 4D cloth-like objects.

  4. The scientific value of 4D visualizations

    NASA Astrophysics Data System (ADS)

    Minster, J.; Olsen, K.; Day, S.; Moore, R.; Jordan, T. H.; Maechling, P.; Chourasia, A.

    2006-12-01

    Significant scientific insights derive from viewing measured or calculated three-dimensional, time-dependent (that is, four-dimensional) fields. This issue cuts across all disciplines of Earth Sciences. Addressing it calls for close collaborations between "domain" scientists and "IT" visualization specialists. Techniques to display such 4D fields in an intuitive way are a major challenge, especially when the relevant variables to be displayed are not scalars but tensors. This talk will illustrate some attempts to deal with this challenge, using seismic wave fields as specific objects to display. We will highlight how 4D displays can help address very difficult issues of significant scientific import.

  5. A sinogram warping strategy for pre-reconstruction 4D PET optimization.

    PubMed

    Gianoli, Chiara; Riboldi, Marco; Fontana, Giulia; Kurz, Christopher; Parodi, Katia; Baroni, Guido

    2016-03-01

    A novel strategy for 4D PET optimization in the sinogram domain is proposed, aiming at motion model application before image reconstruction ("sinogram warping" strategy). Compared to state-of-the-art 4D-MLEM reconstruction, the proposed strategy is able to optimize the image SNR, avoiding the iterative direct and inverse warping procedures that are typical of the 4D-MLEM algorithm. A full-count statistics sinogram of the motion-compensated 4D PET reference phase is generated by warping the sinograms corresponding to the different PET phases. This is achieved relying on a motion model expressed in the sinogram domain. The strategy was tested on the anthropomorphic 4D PET-CT NCAT phantom in comparison with the 4D-MLEM algorithm, with particular reference to robustness to PET-CT co-registration artefacts. The MLEM reconstruction of the warped sinogram according to the proposed strategy exhibited better accuracy (up to +40.90% with respect to the ideal value), whereas images reconstructed with the 4D-MLEM algorithm were less noisy (down to -26.90% with respect to the ideal value) but more blurred. The sinogram warping strategy demonstrates advantages with respect to the 4D-MLEM algorithm. These advantages come at the cost of introducing an approximation of the deformation field, and further efforts are required to mitigate the impact of such an approximation in clinical 4D PET reconstruction.
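    For context, the core MLEM update that the strategy applies once to the warped sinogram (and that 4D-MLEM iterates together with warping operators) can be sketched with a toy system matrix. The matrix `A`, data `y`, and iteration count below are illustrative, not the authors' setup:

```python
import numpy as np

def mlem(A, y, n_iter=60):
    """Basic MLEM reconstruction:
        x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)),
    where A is the system (projection) matrix and y the measured sinogram."""
    x = np.ones(A.shape[1])                    # nonnegative initial estimate
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
    return x
```

The sinogram-warping strategy's appeal is that this update runs once on a single full-count sinogram, rather than being interleaved with direct and inverse warps at every iteration.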

  6. 4D-Var Development at GMAO

    NASA Technical Reports Server (NTRS)

    Pelc, Joanna S.; Todling, Ricardo; Akkraoui, Amal El

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is currently using an IAU-based 3D-Var data assimilation system. GMAO has been experimenting with a 3D-Var-hybrid version of its data assimilation system (DAS) for over a year now, which will soon become operational and will rapidly progress toward a 4D-EnVar. Concurrently, the machinery to exercise traditional 4D-Var is in place, and it is desirable to compare the traditional 4D approach with the other available options and evaluate their performance in the Goddard Earth Observing System (GEOS) DAS. This work will also explore the possibility of constructing a reduced order model (ROM) to make traditional 4D-Var computationally attractive for increasing model resolutions. Part of the research on ROM will be to search for a suitable subspace in which to carry out the corresponding reduction. This poster illustrates how the IAU-based 4D-Var assimilation compares with our currently used IAU-based 3D-Var.

  7. Multicolor 4D Fluorescence Microscopy using Ultrathin Bessel Light Sheets.

    PubMed

    Zhao, Teng; Lau, Sze Cheung; Wang, Ying; Su, Yumian; Wang, Hao; Cheng, Aifang; Herrup, Karl; Ip, Nancy Y; Du, Shengwang; Loy, M M T

    2016-01-01

    We demonstrate a simple and efficient method for producing ultrathin Bessel ('non-diffracting') light sheets of any color using a line-shaped beam and an annulus filter. With this robust and cost-effective technology, we obtained two-color, 3D images of biological samples with lateral/axial resolution of 250 nm/400 nm, and high-speed, 4D volume imaging of 20 μm sized live sample at 1 Hz temporal resolution. PMID:27189786

  8. Multicolor 4D Fluorescence Microscopy using Ultrathin Bessel Light Sheets

    PubMed Central

    Zhao, Teng; Lau, Sze Cheung; Wang, Ying; Su, Yumian; Wang, Hao; Cheng, Aifang; Herrup, Karl; Ip, Nancy Y.; Du, Shengwang; Loy, M. M. T.

    2016-01-01

    We demonstrate a simple and efficient method for producing ultrathin Bessel (‘non-diffracting’) light sheets of any color using a line-shaped beam and an annulus filter. With this robust and cost-effective technology, we obtained two-color, 3D images of biological samples with lateral/axial resolution of 250 nm/400 nm, and high-speed, 4D volume imaging of 20 μm sized live sample at 1 Hz temporal resolution. PMID:27189786

  9. 4D micro-CT using fast prospective gating

    NASA Astrophysics Data System (ADS)

    Guo, Xiaolian; Johnston, Samuel M.; Qi, Yi; Johnson, G. Allan; Badea, Cristian T.

    2012-01-01

    Micro-CT is currently used in preclinical studies to provide anatomical information. However, there is also significant interest in using this technology to obtain functional information. We report here a new sampling strategy for 4D micro-CT for functional cardiac and pulmonary imaging. Rapid scanning of free-breathing mice is achieved with fast prospective gating (FPG) implemented on a field programmable gate array. The method entails on-the-fly computation of the trigger-pulse delays from the R peaks of the ECG signal or the peaks of the respiratory signal. Projection images are acquired for all cardiac or respiratory phases at each angle before rotating to the next angle. FPG can deliver the faster scan time of retrospective gating (RG) with the regular angular distribution of conventional prospective gating for cardiac or respiratory gating. Simultaneous cardio-respiratory gating is also possible with FPG in a hybrid retrospective/prospective approach. We have performed phantom experiments to validate the new sampling protocol and compared the results from FPG and RG in cardiac imaging of a mouse. Additionally, we have evaluated the utility of incorporating respiratory information in 4D cardiac micro-CT studies with FPG. A dual-source micro-CT system was used for image acquisition with pulsed x-ray exposures (80 kVp, 100 mA, 10 ms). The cardiac micro-CT protocol involves the use of a liposomal blood pool contrast agent containing 123 mg I ml-1 delivered via a tail vein catheter in a dose of 0.01 ml g-1 body weight. The phantom experiment demonstrates that FPG can distinguish the successive phases of phantom motion with minimal motion blur, and the animal study demonstrates that respiratory FPG can distinguish inspiration and expiration. 4D cardiac micro-CT imaging with FPG provides image quality superior to RG at an isotropic voxel size of 88 µm and 10 ms temporal resolution. The acquisition time for either sampling approach is less than 5 min. The
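    The on-the-fly delay computation can be illustrated with a simplified sketch: from the two most recent R peaks, estimate the current R-R interval and emit one trigger delay per cardiac phase. This is a hypothetical, much-simplified version of the on-FPGA logic; the function names are ours:

```python
def phase_trigger_delays(rr_interval_ms, n_phases):
    """Evenly spaced trigger delays (ms) after an R peak, one per cardiac
    phase, so every phase is sampled once per heartbeat."""
    return [p * rr_interval_ms / n_phases for p in range(n_phases)]

def delays_from_r_peaks(r_peak_times_ms, n_phases):
    """Estimate the current R-R interval from the two most recent R peaks
    and return the per-phase trigger delays for the next cycle."""
    rr = r_peak_times_ms[-1] - r_peak_times_ms[-2]
    return phase_trigger_delays(rr, n_phases)
```

Recomputing the delays from each new R-R interval is what lets the scheme keep the phase sampling regular even when the heart rate drifts during the scan.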

  10. Lidar signal de-noising by singular value decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Huanxue; Liu, Jianguo; Zhang, Tianshu

    2014-11-01

    Signal de-noising remains an important problem in lidar signal processing. This paper presents a de-noising method based on singular value decomposition. Experimental results on lidar simulated signal and real signal show that the proposed algorithm not only improves the signal-to-noise ratio effectively, but also preserves more detail information.
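    A common way to apply SVD de-noising to a 1D signal is to embed it in a Hankel (trajectory) matrix, truncate the singular value spectrum, and average the anti-diagonals back into a signal. The sketch below follows that standard SSA-style construction; the paper's exact formulation may differ, and the `window`/`rank` parameters are illustrative:

```python
import numpy as np

def svd_denoise(signal, window=20, rank=2):
    """De-noise a 1D signal by Hankel embedding + truncated SVD: keep only
    the `rank` largest singular components, then average the anti-diagonals
    of the low-rank matrix to recover a signal."""
    n = len(signal)
    k = n - window + 1
    H = np.array([signal[i:i + window] for i in range(k)])   # k x window Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # low-rank approximation
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(k):                                       # anti-diagonal averaging
        out[i:i + window] += Hr[i]
        counts[i:i + window] += 1
    return out / counts
```

A single sinusoid has an exactly rank-2 Hankel matrix, so `rank=2` preserves it while broadband noise, whose energy is spread across all singular components, is largely discarded.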

  11. 4D-Flow validation, numerical and experimental framework

    NASA Astrophysics Data System (ADS)

    Sansom, Kurt; Liu, Haining; Canton, Gador; Aliseda, Alberto; Yuan, Chun

    2015-11-01

    This work presents a group of assessment metrics for new 4D MRI flow sequences, an imaging modality that allows for visualization of three-dimensional pulsatile flow in the cardiovascular anatomy through time-resolved three-dimensional blood velocity measurements from cardiac-cycle synchronized MRI acquisition. This is a promising tool for clinical assessment but lacks a robust validation framework. First, 4D-MRI flow in a subject's stenotic carotid bifurcation is compared with a patient-specific CFD model using two different boundary condition methods. Second, Particle Image Velocimetry in a patient-specific phantom is used as a benchmark to compare the 4D-MRI in vivo measurements and CFD simulations under the same conditions. Comparison of estimated and measurable flow parameters such as wall shear stress, fluctuating velocity rms, and Lagrangian particle residence time will be discussed, with justification for their biomechanical relevance and the insights they can provide on the pathophysiology of arterial disease: atherosclerosis and intimal hyperplasia. Lastly, the framework is applied to a new sequence to provide a quantitative assessment. A parametric analysis on the carotid bifurcation pulsatile flow conditions will be presented and an accuracy assessment provided.

  12. Raman spectral data denoising based on wavelet analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    As one kind of molecular scattering spectroscopy, Raman spectroscopy (RS) is characterized by frequency excursions that reveal molecular information. RS has broad applications in biological, chemical, environmental and industrial fields. However, signals in Raman spectral analysis are often noisy, which greatly hinders accurate analytical results. The de-noising of RS signals is an important part of spectral analysis. The wavelet transform has been established, alongside the Fourier transform, as a data-processing method in analytical fields. The main fields of application are related to de-noising, compression, variable reduction, and signal suppression. In Raman spectral de-noising, a wavelet basis is chosen to construct the de-noising function because of its excellent properties. In this paper, a biorthogonal (bior) wavelet is adopted to remove the noise in the Raman spectra. It eliminates noise effectively, and the results are satisfactory. This method can provide a basis for practical de-noising of Raman spectra.
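    The wavelet shrinkage procedure behind such de-noising, transform, threshold the detail coefficients, invert, can be sketched with a single-level Haar transform. Haar stands in here for the biorthogonal wavelet used in the paper, purely so the sketch stays self-contained:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT (even-length input)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t; small (noise-like) ones vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(signal, threshold):
    """Single-level wavelet shrinkage: threshold only the detail band, where
    noise dominates, then reconstruct."""
    a, d = haar_dwt(signal)
    return haar_idwt(a, soft_threshold(d, threshold))
```

With `threshold=0` the round trip is exact; raising the threshold progressively removes the small detail coefficients that mostly encode noise.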

  13. Brain tissue segmentation in 4D CT using voxel classification

    NASA Astrophysics Data System (ADS)

    van den Boom, R.; Oei, M. T. H.; Lafebre, S.; Oostveen, L. J.; Meijer, F. J. A.; Steens, S. C. A.; Prokop, M.; van Ginneken, B.; Manniesing, R.

    2012-02-01

    A method is proposed to segment anatomical regions of the brain from 4D computed tomography (CT) patient data. The method consists of a three step voxel classification scheme, each step focusing on structures that are increasingly difficult to segment. The first step classifies air and bone, the second step classifies vessels and the third step classifies white matter, gray matter and cerebrospinal fluid. As features, the time averaged intensity value and the temporal intensity change value were used. In each step, a k-Nearest-Neighbor classifier was used to classify the voxels. Training data was obtained by placing regions of interest in reconstructed 3D image data. The method has been applied to ten cerebral 4D CT patient data sets. A leave-one-out experiment showed consistent and accurate segmentation results.
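    The per-voxel classification step can be sketched as a plain k-nearest-neighbour vote over the two features the method uses (time-averaged intensity, temporal intensity change). This is a minimal, from-scratch illustration, not the authors' pipeline:

```python
import numpy as np

def knn_classify(train_features, train_labels, test_features, k=3):
    """Minimal k-NN classifier: each test voxel gets the majority label of
    its k nearest training voxels in feature space (Euclidean distance)."""
    train_features = np.asarray(train_features, dtype=float)
    test_features = np.asarray(test_features, dtype=float)
    labels = np.asarray(train_labels)
    preds = []
    for f in test_features:
        d = np.sum((train_features - f) ** 2, axis=1)  # squared distances
        nearest = labels[np.argsort(d)[:k]]            # k closest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)
```

In the paper's scheme, this classifier is applied three times in sequence, with each step restricted to the voxels not yet assigned by the previous step.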

  14. SU-E-J-241: Creation of Ventilation CT From Daily 4D CTs Or 4D Conebeam CTs Acquired During IGRT for Thoracic Cancers

    SciTech Connect

    Tai, A; Ahunbay, E; Li, X

    2014-06-01

    Purpose: To develop a method to create ventilation CTs from daily 4D CTs or 4D KV conebeam CTs (4DCBCT) acquired during image-guided radiation therapy (IGRT) for thoracic tumors, and to explore the potential for using the ventilation CTs as a means for early detection of lung injury during radiation treatment. Methods: 4DCT acquired using an in-room CT (CTVision, Siemens) and 4DCBCT acquired using the X-ray Volume Imaging (XVI) system (Infinity, Elekta) for representative lung cancer patients were analyzed. These 4D data sets were sorted into 10 phase images. A newly-available deformable image registration tool (ADMIRE, Elekta) is used to deform the phase images at the end of exhale (EE) to the phase images at the end of inhale (EI). The lung volumes at EI and EE were carefully contoured using an intensity-based auto-contour tool and then manually edited. The ventilation images were calculated from the variations of CT numbers of those voxels masked by the lung contour at EI between the registered phase images. The deformable image registration is also performed between the daily 4D images and the planning 4DCT, and the resulting deformation vector field (DVF) is used to deform the planning doses to the daily images by an in-house Matlab program. Results: The ventilation images were successfully created. The tidal volumes calculated using the ventilation images agree with those measured through the volume difference of the contours at EE and EI, supporting the accuracy of the ventilation images. The association between the delivered doses and the change of lung ventilation from the daily ventilation CTs is identified. Conclusions: A method to create the ventilation CT using daily 4DCTs or 4D KV conebeam CTs was developed and demonstrated.

  15. Actively triggered 4d cone-beam CT acquisition

    SciTech Connect

    Fast, Martin F.; Wisotzky, Eric; Oelfke, Uwe; Nill, Simeon

    2013-09-15

    Purpose: 4d cone-beam computed tomography (CBCT) scans are usually reconstructed by extracting the motion information from the 2d projections or an external surrogate signal, and binning the individual projections into multiple respiratory phases. In this “after-the-fact” binning approach, however, projections are unevenly distributed over respiratory phases resulting in inefficient utilization of imaging dose. To avoid excess dose in certain respiratory phases, and poor image quality due to a lack of projections in others, the authors have developed a novel 4d CBCT acquisition framework which actively triggers 2d projections based on the forward-predicted position of the tumor.Methods: The forward-prediction of the tumor position was independently established using either (i) an electromagnetic (EM) tracking system based on implanted EM-transponders which act as a surrogate for the tumor position, or (ii) an external motion sensor measuring the chest-wall displacement and correlating this external motion to the phase-shifted diaphragm motion derived from the acquired images. In order to avoid EM-induced artifacts in the imaging detector, the authors devised a simple but effective “Faraday” shielding cage. The authors demonstrated the feasibility of their acquisition strategy by scanning an anthropomorphic lung phantom moving on 1d or 2d sinusoidal trajectories.Results: With both tumor position devices, the authors were able to acquire 4d CBCTs free of motion blurring. For scans based on the EM tracking system, reconstruction artifacts stemming from the presence of the EM-array and the EM-transponders were greatly reduced using newly developed correction algorithms. By tuning the imaging frequency independently for each respiratory phase prior to acquisition, it was possible to harmonize the number of projections over respiratory phases. Depending on the breathing period (3.5 or 5 s) and the gantry rotation time (4 or 5 min), between ∼90 and 145

  16. Interactive animation of 4D performance capture.

    PubMed

    Casas, Dan; Tejera, Margara; Guillemaut, Jean-Yves; Hilton, Adrian

    2013-05-01

    A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced, which combines the realistic deformation of previous nonlinear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity. Four-dimensional parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.

  17. Nondipole Effects in Xe 4d Photoemission

    SciTech Connect

    Hemmers, O; Guillemin, R; Wolska, A; Lindle, D W; Rolles, D; Cheng, K T; Johnson, W R; Zhou, H L; Manson, S T

    2004-07-14

    We measured the nondipole parameters for the spin-orbit doublets Xe 4d{sub 5/2} and Xe 4d{sub 3/2} over a photon-energy range from 100 eV to 250 eV at beamline 8.0.1.3 of the Advanced Light Source at the Lawrence Berkeley National Laboratory. Significant nondipole effects are found at relatively low energies as a result of Cooper minima in dipole channels and interchannel coupling in quadrupole channels. Most importantly, sharp disagreement between experiment and theory, when otherwise excellent agreement was expected, has provided the first evidence of satellite two-electron quadrupole photoionization transitions, along with their crucial importance for a quantitatively accurate theory.

  19. IMRT treatment planning on 4D geometries for the era of dynamic MLC tracking.

    PubMed

    Suh, Yelin; Murray, Walter; Keall, Paul J

    2014-12-01

    The problem addressed here was to obtain optimal and deliverable dynamic multileaf collimator (MLC) leaf sequences from four-dimensional (4D) geometries for dynamic MLC tracking delivery. The envisaged scenario was where respiratory phase and position information of the target was available during treatment, from which the optimal treatment plan could be further adapted in real time. A tool for 4D treatment plan optimization was developed that integrates a commercially available treatment planning system and a general-purpose optimization system. The 4D planning method was applied to the 4D computed tomography planning scans of three lung cancer patients. The optimization variables were MLC leaf positions as a function of monitor units and respiratory phase. The objective function was the deformable dose-summed 4D treatment plan score. MLC leaf motion was constrained by the maximum leaf velocity between control points in terms of monitor units for tumor motion parallel to the leaf travel direction and between phases for tumor motion perpendicular to the leaf travel direction. For comparison and a starting point for the 4D optimization, three-dimensional (3D) optimization was performed on each of the phases. The output of the 4D IMRT planning process is a leaf sequence which is a function of both monitor unit and phase, which can be delivered to a patient whose breathing may vary between the imaging and treatment sessions. The 4D treatment plan score improved during 4D optimization by 34%, 4%, and 50% for Patients A, B, and C, respectively, indicating 4D optimization generated a better 4D treatment plan than the deformable sum of individually optimized phase plans. The dose-volume histograms for each phase remained similar, indicating robustness of the 4D treatment plan to respiratory variations expected during treatment delivery. In summary, 4D optimization for respiratory phase-dependent treatment planning with dynamic MLC motion tracking improved the 4D treatment plan

  20. Atlas construction for dynamic (4D) PET using diffeomorphic transformations.

    PubMed

    Bieth, Marie; Lombaert, Hervé; Reader, Andrew J; Siddiqi, Kaleem

    2013-01-01

    A novel dynamic (4D) PET to PET image registration procedure is proposed and applied to multiple PET scans acquired with the high resolution research tomograph (HRRT), the highest resolution human brain PET scanner available in the world. By extending the recent diffeomorphic log-demons (DLD) method and applying it to multiple dynamic [11C]raclopride scans from the HRRT, an important step towards construction of a PET atlas of unprecedented quality for [11C]raclopride imaging of the human brain has been achieved. Accounting for the temporal dimension in PET data improves registration accuracy when compared to registration of 3D to 3D time-averaged PET images. The DLD approach was chosen for its ease in providing both an intensity and shape template, through iterative sequential pair-wise registrations with fast convergence. The proposed method is applicable to any PET radiotracer, providing 4D atlases with useful applications in high accuracy PET data simulations and automated PET image analysis. PMID:24579121

  1. Unsupervised dealiasing and denoising of color-Doppler data.

    PubMed

    Muth, Stéphan; Dort, Sarah; Sebag, Igal A; Blais, Marie-Josée; Garcia, Damien

    2011-08-01

    Color Doppler imaging (CDI) is the premier modality for analyzing blood flow in clinical practice. In the prospect of producing new CDI-based tools, we developed a fast unsupervised denoiser and dealiaser (DeAN) algorithm for color Doppler raw data. The proposed technique uses robust and automated image post-processing techniques that make the DeAN clinically compliant. The DeAN includes three consecutive advanced and hands-off numerical tools: (1) statistical region merging segmentation, (2) a recursive dealiasing process, and (3) regularized robust smoothing. The performance of the DeAN was evaluated using Monte-Carlo simulations on mock Doppler data corrupted by aliasing and inhomogeneous noise. Fifty aliased Doppler images of the left ventricle acquired with a clinical ultrasound scanner were also analyzed. The analytical study demonstrated that color Doppler data can be reconstructed with high accuracy despite the presence of strong corruption. The normalized RMS error on the numerical data was less than 8% even with a signal-to-noise ratio as low as 10 dB. The algorithm also allowed us to recover highly reliable Doppler flows in clinical data. The DeAN is fast, accurate and not observer-dependent. Preliminary results showed that it is also directly applicable to 3-D data. This will offer the possibility of developing new tools to better decipher blood flow dynamics in cardiovascular diseases.
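
    Dealiasing means undoing the wrap-around of velocities that exceed the Nyquist limit. The paper's recursive dealiasing works on segmented 2D/3D data; the following is only a minimal 1D sketch of the underlying idea, unwrapping any neighbour-to-neighbour jump larger than the Nyquist velocity (the `dealias_1d` helper and the synthetic profile are illustrative, not from the paper, and assume a smooth profile with at most a single wrap):

```python
import numpy as np

def dealias_1d(v_meas, v_nyq):
    """Unwrap aliased Doppler velocities along one scan line: any jump
    between neighbours larger than the Nyquist velocity is assumed to be
    a wrap of 2 * v_nyq (valid only for smooth, singly-wrapped profiles)."""
    v = v_meas.astype(float).copy()
    for i in range(1, v.size):
        diff = v[i] - v[i - 1]      # compare against already-corrected neighbour
        if diff > v_nyq:
            v[i] -= 2 * v_nyq
        elif diff < -v_nyq:
            v[i] += 2 * v_nyq
    return v

# Synthetic smooth profile exceeding a Nyquist limit of 1.0 m/s
v_nyq = 1.0
v_true = np.linspace(0.0, 1.8, 50)
v_alias = (v_true + v_nyq) % (2 * v_nyq) - v_nyq   # wrapped into [-1, 1)
v_fixed = dealias_1d(v_alias, v_nyq)
```

    Because each sample is compared against the already-corrected previous sample, the correction propagates through the aliased segment.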

  2. A 4D Hyperspherical Interpretation of q-Space

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Bendlin, Barbara B.; Alexander, Andrew L.

    2015-01-01

    3D q-space can be viewed as the surface of a 4D hypersphere. In this paper, we seek to develop a 4D hyperspherical interpretation of q-space by projecting it onto a hypersphere and subsequently modeling the q-space signal via 4D hyperspherical harmonics (HSH). Using this orthonormal basis, we derive several well-established q-space indices and numerically estimate the diffusion orientation distribution function (dODF). We also derive the integral transform describing the relationship between the diffusion signal and propagator on a hypersphere. Most importantly, we will demonstrate that for hybrid diffusion imaging (HYDI) acquisitions low order linear expansion of the HSH basis is sufficient to characterize diffusion in neural tissue. In fact, the HSH basis achieves comparable signal and better dODF reconstructions than other well-established methods, such as Bessel Fourier orientation reconstruction (BFOR), using fewer fitting parameters. All in all, this work provides a new way of looking at q-space. PMID:25624043

  3. Evaluation of a 4D cone-beam CT reconstruction approach using a simulation framework.

    PubMed

    Hartl, Alexander; Yaniv, Ziv

    2009-01-01

    Current image-guided navigation systems for thoracic-abdominal interventions utilize three-dimensional (3D) images acquired at breath-hold. As a result they can only provide guidance at a specific point in the respiratory cycle. The intervention is thus performed in a gated manner, with the physician advancing only when the patient is at the same point in the respiratory cycle at which the 3D image was acquired. To enable a more continuous workflow we propose to use 4D image data. We describe an approach to constructing a set of 4D images from a diagnostic CT acquired at breath-hold and a set of intraoperative cone-beam CT (CBCT) projection images acquired while the patient is freely breathing. Our approach is based on an initial reconstruction of a gated 4D CBCT data set. The 3D CBCT images for each respiratory phase are then non-rigidly registered to the diagnostic CT data. Finally the diagnostic CT is deformed based on the registration results, providing a 4D data set with sufficient quality for navigation purposes. In this work we evaluate the proposed reconstruction approach using a simulation framework. A 3D CBCT dataset of an anthropomorphic phantom is deformed using internal motion data acquired from an animal model to create a ground-truth 4D CBCT image. Simulated projection images are then created from the 4D image and the known CBCT scan parameters. Finally, the original 3D CBCT and the simulated X-ray images are used as input to our reconstruction method. The resulting 4D data set is then compared to the known ground truth by normalized cross-correlation (NCC). We show that the deformed diagnostic CTs are of better quality than the gated reconstructions, with a mean NCC value of 0.94 versus a mean of 0.81 for the reconstructions. PMID:19964143
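
    The evaluation metric used above, normalized cross-correlation, is invariant to linear intensity changes and equals 1.0 for a perfect match. A minimal sketch (the `ncc` helper and the toy images are illustrative, not from the paper):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shaped images:
    1.0 indicates a perfect (positive) linear intensity relationship."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
score_self = ncc(img, img)                      # identical images
score_scaled = ncc(img, 2.0 * img + 3.0)        # linear intensity change
score_noisy = ncc(img, img + 0.5 * rng.random((32, 32)))
```

    Mean subtraction and normalization are what make the score insensitive to global brightness and contrast differences between the compared volumes.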

  4. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e., noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. Numerical results clearly show that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.
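
    Wavelet threshold denoising follows the pattern described: transform, threshold the detail coefficients, invert. The sketch below uses a one-level Haar transform in plain NumPy with the classical universal threshold, purely as a stand-in for the coiflet2 wavelet and the custom thresholding the paper actually studies:

```python
import numpy as np

def haar_denoise(signal, thr):
    """One-level Haar wavelet denoising with hard thresholding
    (a plain-NumPy stand-in for the coiflet2 transform in the paper)."""
    x = np.asarray(signal, dtype=float)
    n = x.size - (x.size % 2)                    # even-length prefix
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)       # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)       # detail coefficients
    d[np.abs(d) < thr] = 0.0                     # hard thresholding
    out = x.copy()
    out[0:n:2] = (a + d) / np.sqrt(2)            # inverse transform
    out[1:n:2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 3 * t)                # smooth stand-in signal
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(t.size)
thr = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
denoised = haar_denoise(noisy, thr)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

    With the threshold set to zero the transform round-trips exactly; with the universal threshold most noise-dominated detail coefficients are suppressed while the smooth trend survives.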

  5. Qualitative grading of aortic regurgitation: a pilot study comparing CMR 4D flow and echocardiography.

    PubMed

    Chelu, Raluca G; van den Bosch, Annemien E; van Kranenburg, Matthijs; Hsiao, Albert; van den Hoven, Allard T; Ouhlous, Mohamed; Budde, Ricardo P J; Beniest, Kirsten M; Swart, Laurens E; Coenen, Adriaan; Lubbers, Marisa M; Wielopolski, Piotr A; Vasanawala, Shreyas S; Roos-Hesselink, Jolien W; Nieman, Koen

    2016-02-01

    Over the past 10 years there has been intense research in the development of volumetric visualization of intracardiac flow by cardiac magnetic resonance (CMR). This volumetric, time-resolved technique, called CMR 4D flow imaging, has several advantages over standard CMR. It offers anatomical, functional and flow information in a single free-breathing, ten-minute acquisition. However, the data obtained are large and their processing requires dedicated software. We evaluated a cloud-based application package that combines volumetric data correction and visualization of CMR 4D flow data, and assessed its accuracy for the detection and grading of aortic valve regurgitation using transthoracic echocardiography as reference. Between June 2014 and January 2015, patients planned for clinical CMR were consecutively approached to undergo the supplementary CMR 4D flow acquisition. Fifty-four patients (median age 39 years, 32 males) were included. Detection and grading of the aortic valve regurgitation using CMR 4D flow imaging were evaluated against transthoracic echocardiography. The agreement between 4D flow CMR and transthoracic echocardiography for grading of aortic valve regurgitation was good (κ = 0.73). To identify relevant, more-than-mild aortic valve regurgitation, CMR 4D flow imaging had a sensitivity of 100% and a specificity of 98%. Aortic regurgitation can be well visualized, in a similar manner as with transthoracic echocardiography, when using CMR 4D flow imaging. PMID:26498478
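
    The agreement and accuracy figures reported above (kappa, sensitivity, specificity) follow from simple formulas. A sketch with illustrative toy grades, not the study data; `cohens_kappa` and `sens_spec` are hypothetical helper names:

```python
import numpy as np

def cohens_kappa(x, y):
    """Cohen's kappa for two raters assigning grades 0..k-1."""
    x, y = np.asarray(x), np.asarray(y)
    k = int(max(x.max(), y.max())) + 1
    conf = np.zeros((k, k))
    for a, b in zip(x, y):
        conf[a, b] += 1                              # confusion matrix
    n = conf.sum()
    po = np.trace(conf) / n                          # observed agreement
    pe = (conf.sum(0) * conf.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def sens_spec(pred, truth):
    """Sensitivity and specificity for binary labels (1 = relevant AR)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy grades: 0 = none, 1 = mild, 2 = more than mild (NOT the study data)
echo = np.array([0, 1, 2, 1, 0, 2, 1, 0])
cmr = np.array([0, 1, 2, 1, 0, 1, 1, 0])
kappa_val = cohens_kappa(echo, cmr)
sens, spec = sens_spec((cmr >= 2).astype(int), (echo >= 2).astype(int))
```

    Kappa discounts the agreement expected by chance from the marginal grade frequencies, which is why it is preferred over raw percent agreement for ordinal grading studies.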

  6. Geometric validation of self-gating k-space-sorted 4D-MRI vs 4D-CT using a respiratory motion phantom

    SciTech Connect

    Yue, Yong; Yang, Wensha; McKenzie, Elizabeth; Tuli, Richard; Wallace, Robert; Fraass, Benedick; Fan, Zhaoyang; Pang, Jianing; Deng, Zixin; Li, Debiao

    2015-10-15

    Purpose: MRI is increasingly being used for radiotherapy planning, simulation, and in-treatment-room motion monitoring. To provide more detailed temporal and spatial MR data for these tasks, we have recently developed a novel self-gated (SG) MRI technique with the advantages of k-space phase sorting, high isotropic spatial resolution, and high temporal resolution. The current work describes the validation of this 4D-MRI technique using an MRI- and CT-compatible respiratory motion phantom and comparison to 4D-CT. Methods: The 4D-MRI sequence is based on a spoiled gradient echo-based 3D projection reconstruction sequence with self-gating for 4D-MRI at 3 T. Respiratory phase is resolved by using SG k-space lines as the motion surrogate. 4D-MRI images are reconstructed into ten temporal bins with spatial resolution 1.56 × 1.56 × 1.56 mm³. An MRI/CT-compatible phantom was designed to validate the performance of the 4D-MRI sequence and 4D-CT imaging. A spherical target (diameter 23 mm, volume 6.37 ml) filled with high-concentration gadolinium (Gd) gel is embedded into a plastic box (35 × 40 × 63 mm³) and stabilized with low-concentration Gd gel. The phantom, driven by an air pump, is able to produce human-type breathing patterns between 4 and 30 respiratory cycles/min. 4D-CT of the phantom has been acquired in cine mode, and reconstructed into ten phases with slice thickness 1.25 mm. The 4D image sets were imported into a treatment planning software for target contouring. The geometrical accuracy of the 4D MRI and CT images has been quantified using target volume, flattening, and eccentricity. The target motion was measured by tracking the centroids of the spheres in each individual phase. Motion ground-truth was obtained from input signals and real-time video recordings. Results: The dynamic phantom has been operated at four respiratory rate (RR) settings, 6, 10, 15, and 20/min, and was scanned with 4D-MRI and 4D-CT. 4D-CT images have target

  7. 4D VMAT, gated VMAT, and 3D VMAT for stereotactic body radiation therapy in lung.

    PubMed

    Chin, E; Loewen, S K; Nichol, A; Otto, K

    2013-02-21

    Four-dimensional volumetric modulated arc therapy (4D VMAT) is a treatment strategy for lung cancers that aims to exploit relative target and tissue motion to improve organ at risk (OAR) sparing. The algorithm incorporates the entire patient respiratory cycle using 4D CT data into the optimization process. Resulting treatment plans synchronize the delivery of each beam aperture to a specific phase of target motion. Stereotactic body radiation therapy treatment plans for 4D VMAT, gated VMAT, and 3D VMAT were generated on three patients with non-small cell lung cancer. Tumour motion ranged from 1.4-3.4 cm. The dose and fractionation scheme was 48 Gy in four fractions. A B-spline transformation model registered the 4D CT images. 4D dose volume histograms (4D DVH) were calculated from total dose accumulated at the maximum exhalation. For the majority of OARs, gated VMAT achieved the most radiation sparing but treatment times were 77-148% longer than 3D VMAT. 4D VMAT plan qualities were comparable to gated VMAT, but treatment times were only 11-25% longer than 3D VMAT. 4D VMAT's improvement of healthy tissue sparing can allow for further dose escalation. Future study could potentially adapt 4D VMAT to irregular patient breathing patterns.
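
    A cumulative dose-volume histogram, as used in the plan comparison above, reports the fraction of a structure receiving at least each dose level; in the 4D case the per-phase doses are first deformably mapped to a reference phase (here, maximum exhalation) and summed before the DVH is taken. A minimal sketch on a toy dose grid (all values illustrative):

```python
import numpy as np

def cumulative_dvh(dose, mask, levels):
    """Cumulative DVH: fraction of the structure receiving >= each level."""
    d = dose[mask]
    return np.array([(d >= lv).mean() for lv in levels])

rng = np.random.default_rng(2)
dose = rng.uniform(0.0, 48.0, size=(20, 20, 20))   # toy dose grid [Gy]
target = np.zeros(dose.shape, dtype=bool)
target[5:15, 5:15, 5:15] = True                    # toy target volume
levels = np.array([0.0, 12.0, 24.0, 36.0, 48.0])
dvh = cumulative_dvh(dose, target, levels)
```

    By construction the curve starts at 1.0 (every voxel receives at least zero dose) and decreases monotonically toward the maximum dose.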

  9. New methods for MRI denoising based on sparseness and self-similarity.

    PubMed

    Manjón, José V; Coupé, Pierrick; Buades, Antonio; Louis Collins, D; Robles, Montserrat

    2012-01-01

    This paper proposes two new methods for the three-dimensional denoising of magnetic resonance images that exploit the sparseness and self-similarity properties of the images. The proposed methods are based on a three-dimensional moving-window discrete cosine transform hard thresholding and a three-dimensional rotationally invariant version of the well-known nonlocal means filter. The proposed approaches were compared with related state-of-the-art methods and produced very competitive results. Both methods run in less than a minute, making them usable in most clinical and research settings. PMID:21570894
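
    The first method described, moving-window DCT hard thresholding, can be sketched in 2D with non-overlapping blocks. The paper uses a 3D sliding window whose overlapping estimates are aggregated by averaging; that refinement is omitted here, and the orthonormal DCT-II is built explicitly so the sketch needs only NumPy:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def block_dct_denoise(img, thr, b=8):
    """Hard-threshold DCT coefficients of non-overlapping b x b blocks."""
    C = dct_matrix(b)
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(0, H - b + 1, b):
        for j in range(0, W - b + 1, b):
            coef = C @ img[i:i + b, j:j + b] @ C.T   # forward 2D DCT
            coef[np.abs(coef) < thr] = 0.0           # hard threshold
            out[i:i + b, j:j + b] = C.T @ coef @ C   # inverse 2D DCT
    return out

rng = np.random.default_rng(7)
clean = np.add.outer(np.linspace(0, 100, 32), np.linspace(0, 100, 32))
sigma = 5.0
noisy = clean + sigma * rng.standard_normal(clean.shape)
denoised = block_dct_denoise(noisy, thr=3 * sigma)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

    Smooth image content concentrates in a few large DCT coefficients while white noise spreads evenly, so a threshold of a few noise standard deviations removes most of the noise energy.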

  10. A new denoising method in high-dimensional PCA-space

    NASA Astrophysics Data System (ADS)

    Do, Quoc Bao; Beghdadi, Azeddine; Luong, Marie

    2012-03-01

    Kernel-design based methods, such as the bilateral filter (BIL) and the non-local means (NLM) filter, are among the most attractive approaches for denoising. We propose in this paper a new noise filtering method inspired by the BIL and NLM filters and by principal component analysis (PCA). The main idea here is to perform the BIL in a multidimensional PCA-space using an anisotropic kernel. The filtered multidimensional signal is then transformed back onto the image spatial domain to yield the desired enhanced image. In this work, it is demonstrated that the proposed method is a generalization of kernel-design based methods. The obtained results are highly promising.
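
    A simplified sketch of filtering in a patch PCA-space: patches are projected onto their principal components and low-variance (noise-dominated) directions are discarded. The paper applies an anisotropic bilateral kernel in that space rather than plain truncation, so this shows only the common skeleton (the `pca_patch_denoise` helper and all parameters are illustrative):

```python
import numpy as np

def pca_patch_denoise(img, patch=4, keep_var=0.95):
    """Project non-overlapping patches onto their principal components
    and drop low-variance (noise-dominated) directions."""
    H, W = img.shape
    ph, pw = H // patch, W // patch
    # Non-overlapping patches as row vectors
    X = (img[:ph * patch, :pw * patch]
         .reshape(ph, patch, pw, patch)
         .transpose(0, 2, 1, 3)
         .reshape(ph * pw, patch * patch))
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / len(X)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]                 # descending variance
    evals, evecs = evals[order], evecs[:, order]
    k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), keep_var)) + 1
    basis = evecs[:, :k]
    Xd = (Xc @ basis) @ basis.T + mu                # project + reconstruct
    return (Xd.reshape(ph, pw, patch, patch)
              .transpose(0, 2, 1, 3)
              .reshape(ph * patch, pw * patch))

rng = np.random.default_rng(8)
clean = np.add.outer(np.linspace(0, 100, 32), np.linspace(0, 100, 32)) / 2
noisy = clean + 5.0 * rng.standard_normal(clean.shape)
denoised = pca_patch_denoise(noisy)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

    Isotropic noise spreads its variance over all components, so projecting onto the few dominant ones removes most of it while keeping the structured signal.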

  11. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in the presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra
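
    The core idea of motion-compensated count pooling, mapping each respiratory-phase image back to a reference phase with a known motion model and combining them to recover the full count statistics, can be sketched with toy integer translations in image space. Real motion models are deformable and the paper's compensation acts on sinograms, so everything below is illustrative:

```python
import numpy as np

def warp_and_sum(phases, shifts):
    """Map each phase image back to the reference phase by inverting its
    known (toy, integer) translation, then sum to pool all counts."""
    ref = np.zeros_like(phases[0], dtype=float)
    for img, (dy, dx) in zip(phases, shifts):
        ref += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return ref

y, x = np.mgrid[0:32, 0:32]
truth = 5.0 * np.exp(-((y - 16) ** 2 + (x - 16) ** 2) / 50.0)  # activity map
rng = np.random.default_rng(4)
shifts = [(k % 4, 0) for k in range(10)]        # toy cranio-caudal motion
phases = [rng.poisson(np.roll(truth, dy, axis=0)) for dy, _ in shifts]
combined = warp_and_sum(phases, shifts) / len(phases)
mse_single = np.mean((phases[0] - truth) ** 2)
mse_combined = np.mean((combined - truth) ** 2)
```

    Pooling ten motion-corrected low-count phases reduces the Poisson noise variance roughly tenfold compared with a single gated phase, which is exactly the statistics-recovery motivation described in the abstract.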

  13. Parallel Wavefront Analysis for a 4D Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on a disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
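
    The capture/analysis split described above can be sketched with a local worker pool. The real system distributes 4Sight instances across a Windows cluster; this single-machine thread-pool stand-in only approximates that architecture, and the `process_frame` piston-removal step is an illustrative placeholder for the actual wavefront analysis:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    """Illustrative per-image analysis: remove the mean ('piston') term
    and return the residual RMS (stand-in for real wavefront analysis)."""
    residual = frame - frame.mean()
    return float(np.sqrt(np.mean(residual ** 2)))

rng = np.random.default_rng(5)
# Eight captured frames with different piston offsets
frames = [rng.standard_normal((64, 64)) + k for k in range(8)]

# Capture and analysis are decoupled: frames are queued first, then
# processed concurrently, mirroring the controller/worker split.
with ThreadPoolExecutor(max_workers=4) as pool:
    rms_values = list(pool.map(process_frame, frames))
```

    `pool.map` preserves input order, so results can be collated back against their capture sequence, as the controller does when combining processed images.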

  14. A unified variational approach to denoising and bias correction in MR.

    PubMed

    Fan, Ayres; Wells, William M; Fisher, John W; Cetin, Müjdat; Haker, Steven; Mulkern, Robert; Tempany, Clare; Willsky, Alan S

    2003-07-01

    We propose a novel bias correction method for magnetic resonance (MR) imaging that uses complementary body coil and surface coil images. The former are spatially homogeneous but have low signal intensity; the latter provide excellent signal response but have large bias fields. We present a variational framework where we optimize an energy functional to estimate the bias field and the underlying image using both observed images. The energy functional contains smoothness-enforcing regularization for both the image and the bias field. We present extensions of our basic framework to a variety of imaging protocols. We solve the optimization problem using a computationally efficient numerical algorithm based on coordinate descent, preconditioned conjugate gradient, half-quadratic regularization, and multigrid techniques. We show qualitative and quantitative results demonstrating the effectiveness of the proposed method in producing debiased and denoised MR images. PMID:15344454
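
    A much-simplified sketch of the core observation: the ratio of the biased-but-clean surface-coil image to the homogeneous-but-noisy body-coil image exposes the bias field, which can then be smoothed and divided out. The variational method solves a joint energy minimization instead; the box smoothing below merely stands in for its smoothness prior, and all data are synthetic:

```python
import numpy as np

def box_smooth(img, r=4):
    """Crude periodic box smoothing, standing in for the variational
    smoothness prior on the bias field."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

rng = np.random.default_rng(6)
y, x = np.mgrid[0:64, 0:64]
truth = 100.0 + 20.0 * ((x // 16 + y // 16) % 2)       # toy anatomy
bias = 1.0 + 0.3 * np.sin(2 * np.pi * x / 64)          # smooth coil bias
body = truth + 5.0 * rng.standard_normal(truth.shape)  # homogeneous, noisy
surface = truth * bias                                 # biased, clean

bias_est = box_smooth(surface / body)   # ratio exposes the bias field
corrected = surface / bias_est
```

    The anatomy cancels in the ratio, so smoothing suppresses the body-coil noise and leaves an estimate of the slowly varying bias, which is then divided out of the high-SNR surface-coil image.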

  15. Live 4D optical coherence tomography for early embryonic mouse cardiac phenotyping

    NASA Astrophysics Data System (ADS)

    Lopez, Andrew L.; Wang, Shang; Larin, Kirill V.; Overbeek, Paul A.; Larina, Irina V.

    2016-03-01

    Studying embryonic mouse development is important for our understanding of normal human embryogenesis and the underlying causes of congenital defects. Our research focuses on imaging early development in the mouse embryo, specifically to understand cardiovascular development, using optical coherence tomography (OCT). We have previously developed imaging approaches that combine static embryo culture, OCT imaging and advanced image processing to visualize whole live mouse embryos and obtain 4D (3D+time) cardiodynamic datasets with cellular resolution. Here, we present a study using 4D OCT for dynamic imaging of the early embryonic heart in live mouse embryos to assess mutant cardiac phenotypes during development, including a cardiac looping defect. Our results indicate that the live 4D OCT imaging approach is an efficient phenotyping tool that can reveal structural and functional cardiac defects at very early stages. Further studies integrating live embryonic cardiodynamic phenotyping with molecular and genetic approaches in mouse mutants will help to elucidate the underlying signaling defects.

  16. Active origami by 4D printing

    NASA Astrophysics Data System (ADS)

    Ge, Qi; Dunn, Conner K.; Qi, H. Jerry; Dunn, Martin L.

    2014-09-01

    Recent advances in three dimensional (3D) printing technology that allow multiple materials to be printed within each layer enable the creation of materials and components with precisely controlled heterogeneous microstructures. In addition, active materials, such as shape memory polymers, can be printed to create an active microstructure within a solid. These active materials can subsequently be activated in a controlled manner to change the shape or configuration of the solid in response to an environmental stimulus. This has been termed 4D printing, with the 4th dimension being the time-dependent shape change after the printing. In this paper, we advance the 4D printing concept to the design and fabrication of active origami, where a flat sheet automatically folds into a complicated 3D component. Here we print active composites with shape memory polymer fibers precisely printed in an elastomeric matrix and use them as intelligent active hinges to enable origami folding patterns. We develop a theoretical model to provide guidance in selecting design parameters such as fiber dimensions, hinge length, and programming strains and temperature. Using the model, we design and fabricate several active origami components that assemble from flat polymer sheets, including a box, a pyramid, and two origami airplanes. In addition, we directly print a 3D box with active composite hinges and program it to assume a temporary flat shape that subsequently recovers to the 3D box shape on demand.

  17. 4D Proton treatment planning strategy for mobile lung tumors

    SciTech Connect

    Kang Yixiu; Zhang Xiaodong; Chang, Joe Y.; Wang He; Wei Xiong; Liao Zhongxing; Komaki, Ritsuko; Cox, James D.; Balter, Peter A.; Liu, Helen; Zhu, X. Ronald; Mohan, Radhe; Dong Lei. E-mail: ldong@mdanderson.org

    2007-03-01

    Purpose: To investigate strategies for designing compensator-based 3D proton treatment plans for mobile lung tumors using four-dimensional computed tomography (4DCT) images. Methods and Materials: Four-dimensional CT sets for 10 lung cancer patients were used in this study. The internal gross tumor volume (IGTV) was obtained by combining the tumor volumes at different phases of the respiratory cycle. For each patient, we evaluated four planning strategies based on the following dose calculations: (1) the average (AVE) CT; (2) the free-breathing (FB) CT; (3) the maximum intensity projection (MIP) CT; and (4) the AVE CT in which the CT voxel values inside the IGTV were replaced by a constant density (AVE-RIGTV). For each strategy, the resulting cumulative dose distribution in a respiratory cycle was determined using a deformable image registration method. Results: There were dosimetric differences between the apparent dose distribution, calculated on a single CT dataset, and the motion-corrected 4D dose distribution, calculated by combining dose distributions delivered to each phase of the 4DCT. The AVE-RIGTV plan using a 1-cm smearing parameter had the best overall target coverage and critical structure sparing. The MIP plan approach resulted in an unnecessarily large treatment volume. The AVE and FB plans using 1-cm smearing did not provide adequate 4D target coverage in all patients. By using a larger smearing value, adequate 4D target coverage could be achieved; however, critical organ doses were increased. Conclusion: The AVE-RIGTV approach is an effective strategy for designing proton treatment plans for mobile lung tumors.
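
    The IGTV construction described, combining the tumor volumes from all respiratory phases, is a voxelwise union of the per-phase masks. A minimal sketch with a toy translating spherical target (all geometry illustrative):

```python
import numpy as np

def internal_gtv(phase_masks):
    """IGTV as the voxelwise union of the GTV masks over all phases."""
    igtv = np.zeros_like(phase_masks[0], dtype=bool)
    for m in phase_masks:
        igtv |= m
    return igtv

# Toy spherical GTV translating in z over three respiratory phases
z, y, x = np.mgrid[0:24, 0:24, 0:24]
masks = [(z - c) ** 2 + (y - 12) ** 2 + (x - 12) ** 2 <= 16
         for c in (8, 10, 12)]
igtv = internal_gtv(masks)
```

    Every phase volume is contained in the union, and the union is strictly larger than any single phase whenever the target moves, which is why IGTV-based planning covers the full motion envelope.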

  18. Parallel Infrastructure Modeling and Inversion Module for E4D

    SciTech Connect

    2014-10-09

    Electrical resistivity tomography (ERT) is a method of imaging the electrical conductivity of the subsurface. Electrical conductivity is a useful metric for understanding the subsurface because it is governed by geomechanical and geochemical properties that drive subsurface systems. ERT works by injecting current into the subsurface across a pair of electrodes, and measuring the corresponding electrical potential response across another pair of electrodes. Many such measurements are strategically taken across an array of electrodes to produce an ERT data set. These data are then processed through a computationally demanding process known as inversion to produce an image of the subsurface conductivity structure that gave rise to the measurements. Data can be inverted to provide 2D images, 3D images, or, in the case of time-lapse 3D imaging, 4D images. ERT is generally not well suited for environments with buried electrically conductive infrastructure such as pipes, tanks, or well casings, because these features tend to dominate and degrade ERT images. This reduces or eliminates the utility of ERT imaging where it would otherwise be highly useful, for example, for imaging fluid migration from leaking pipes, imaging soil contamination beneath leaking subsurface tanks, and monitoring contaminant migration in locations with a dense network of metal-cased monitoring wells. The location and dimension of buried metallic infrastructure is often known. If so, then the effects of the infrastructure can be explicitly modeled within the ERT imaging algorithm, and thereby removed from the corresponding ERT image. However, there are a number of obstacles limiting this application. 1) Metallic infrastructure cannot be accurately modeled with standard codes because of the large contrast in conductivity between the metal and host material. 2) Modeling infrastructure in true dimension requires the computational mesh to be highly refined near the metal inclusions, which increases

  19. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

    Radar detection is a novel life detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, it is necessary to denoise efficiently before extracting and separating the useful signal. This paper improves the theoretical continuous-wave model of the radar life signal, performs de-noising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the de-noising effects of different threshold functions. The results indicate that, by introducing the lifting wavelet transform and using a new improved soft-threshold de-noising method, both the SNR and MSE of the signal are better than those of traditional methods.
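
    The wavelet-threshold idea this record relies on can be sketched compactly. The following is a minimal single-level Haar version with the classic soft-threshold rule and the universal threshold, not the authors' lifting implementation or their improved threshold function; `denoise` and its arguments are illustrative names.

```python
import numpy as np

def haar_dwt(x):
    # one-level Haar transform: approximation (a) and detail (d) coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # exact inverse of haar_dwt
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # soft-threshold rule: shrink coefficients toward zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, sigma):
    # universal threshold sigma*sqrt(2 ln N), applied to detail coefficients only
    a, d = haar_dwt(x)
    t = sigma * np.sqrt(2.0 * np.log(x.size))
    return haar_idwt(a, soft_threshold(d, t))
```

    A multi-level transform (or a lifting scheme, as in the paper) simply repeats the analysis step on the approximation band before thresholding.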

  20. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques to the processing of raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be largely avoided with the use of wavelet-based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  1. Medical image noise reduction using the Sylvester-Lyapunov equation.

    PubMed

    Sanches, João M; Nascimento, Jacinto C; Marques, Jorge S

    2008-09-01

    Multiplicative noise is often present in medical and biological imaging, such as magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), single photon emission computed tomography (SPECT), and fluorescence microscopy. Noise reduction in medical images is a difficult task in which linear filtering algorithms usually fail. Bayesian algorithms have been used with success, but they are time-consuming and computationally demanding. In addition, the increasing importance of 3-D and 4-D medical image analysis in medical diagnosis procedures increases the amount of data that must be efficiently processed. This paper presents a Bayesian denoising algorithm which copes with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions. The algorithm is based on the maximum a posteriori (MAP) criterion and on edge-preserving priors which avoid the distortion of relevant anatomical details. The main contribution of the paper is the unification of a set of Bayesian denoising algorithms for additive and multiplicative noise using a well-known mathematical framework, the Sylvester-Lyapunov equation, developed in the context of control theory.

  2. Functional organization of the human 4D Nucleome

    PubMed Central

    Chen, Haiming; Chen, Jie; Muir, Lindsey A.; Ronquist, Scott; Meixner, Walter; Ljungman, Mats; Ried, Thomas; Smale, Stephen; Rajapakse, Indika

    2015-01-01

    The 4D organization of the interphase nucleus, or the 4D Nucleome (4DN), reflects a dynamical interaction between 3D genome structure and function and its relationship to phenotype. We present initial analyses of the human 4DN, capturing genome-wide structure using chromosome conformation capture and 3D imaging, and function using RNA-sequencing. We introduce a quantitative index that measures underlying topological stability of a genomic region. Our results show that structural features of genomic regions correlate with function with surprising persistence over time. Furthermore, constructing genome-wide gene-level contact maps aided in identifying gene pairs with high potential for coregulation and colocalization in a manner consistent with expression via transcription factories. We additionally use 2D phase planes to visualize patterns in 4DN data. Finally, we evaluated gene pairs within a circadian gene module using 3D imaging, and found periodicity in the movement of clock circadian regulator and period circadian clock 2 relative to each other that followed a circadian rhythm and entrained with their expression. PMID:26080430

  3. Complete valvular heart apparatus model from 4D cardiac CT.

    PubMed

    Grbic, Sasa; Ionasec, Razvan; Vitanovski, Dime; Voigt, Ingmar; Wang, Yang; Georgescu, Bogdan; Navab, Nassir; Comaniciu, Dorin

    2012-07-01

    The cardiac valvular apparatus, composed of the aortic, mitral, pulmonary and tricuspid valves, is an essential part of the anatomical, functional and hemodynamic characteristics of the heart and the cardiovascular system as a whole. Valvular heart diseases often involve multiple dysfunctions and require joint assessment and therapy of the valves. In this paper, we propose a complete and modular patient-specific model of the cardiac valvular apparatus estimated from 4D cardiac CT data. A new constrained Multi-linear Shape Model (cMSM), conditioned by anatomical measurements, is introduced to represent the complex spatio-temporal variation of the heart valves. The cMSM is exploited within a learning-based framework to efficiently estimate the patient-specific valve parameters from cine images. Experiments on 64 4D cardiac CT studies demonstrate the performance and clinical potential of the proposed method. Our method enables automatic quantitative evaluation of the complete valvular apparatus based on non-invasive imaging techniques. In conjunction with existing patient-specific chamber models, the presented valvular model enables personalized computational modeling and realistic simulation of the entire cardiac system.

  4. Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

    PubMed

    Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

    2013-05-01

    Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by the microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional X-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it is necessary to remove it in order to improve image quality and visualization. The noise models of the three signals have been investigated, and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrated that our method offers good denoising quality while simultaneously preserving edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three acquired signals.

  5. ICT4D: A Computer Science Perspective

    NASA Astrophysics Data System (ADS)

    Sutinen, Erkki; Tedre, Matti

    The term ICT4D refers to the opportunities of Information and Communication Technology (ICT) as an agent of development. Research in that field is often focused on evaluating the feasibility of existing technologies, mostly of Western or Far East Asian origin, in the context of developing regions. A computer science perspective is complementary to that agenda. The computer science perspective focuses on exploring the resources, or inputs, of a particular context and on basing the design of a technical intervention on the available resources, so that the output makes a difference in the development context. The modus operandi of computer science, construction, interacts with evaluation and exploration practices. An analysis of a contextualized information technology curriculum of Tumaini University in southern Tanzania shows the potential of the computer science perspective for designing meaningful information and communication technology for a developing region.

  6. Soft Route to 4D Tomography.

    PubMed

    Taillandier-Thomas, Thibault; Roux, Stéphane; Hild, François

    2016-07-01

    Based on the assumption that the time evolution of a sample observed by computed tomography requires many fewer parameters than the definition of the microstructure itself, it is proposed to reconstruct these changes based on the initial state (using computed tomography) and very few radiographs acquired at fixed intervals of time. This Letter presents a proof of concept that for a fatigue-cracked sample the kinematics can be tracked from no more than two radiographs in situations where a complete 3D view would require several hundred radiographs. This two-order-of-magnitude gain opens the way to a "computed" 4D tomography, which complements the recent progress achieved in fast or ultrafast computed tomography based on beam brightness, detector sensitivity, and signal acquisition technologies.

  7. Soft Route to 4D Tomography

    NASA Astrophysics Data System (ADS)

    Taillandier-Thomas, Thibault; Roux, Stéphane; Hild, François

    2016-07-01

    Based on the assumption that the time evolution of a sample observed by computed tomography requires many fewer parameters than the definition of the microstructure itself, it is proposed to reconstruct these changes based on the initial state (using computed tomography) and very few radiographs acquired at fixed intervals of time. This Letter presents a proof of concept that for a fatigue-cracked sample the kinematics can be tracked from no more than two radiographs in situations where a complete 3D view would require several hundred radiographs. This two-order-of-magnitude gain opens the way to a "computed" 4D tomography, which complements the recent progress achieved in fast or ultrafast computed tomography based on beam brightness, detector sensitivity, and signal acquisition technologies.

  8. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this has been exploited in a wide range of image and signal processing applications. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPUs) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel on different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more compared to multicore CPU implementations.

  9. Opening the Black Box of ICT4D: Advancing Our Understanding of ICT4D Partnerships

    ERIC Educational Resources Information Center

    Park, Sung Jin

    2013-01-01

    The term, Information and Communication Technologies for Development (ICT4D), pertains to programs or projects that strategically use ICTs (e.g. mobile phones, computers, and the internet) as a means toward the socio-economic betterment for the poor in developing contexts. Gaining the political and financial support of the international community…

  10. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
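
    The shrinkage rule described above can be sketched under the paper's model assumptions: a zero-mean Gaussian for noise coefficients, a Laplacian for signal coefficients, EM estimating the mixture parameters, and each coefficient shrunk by its posterior probability of being signal. This is an illustrative sketch, not the published filter; all names and the iteration count are hypothetical.

```python
import numpy as np

def laplace_pdf(c, b):
    return np.exp(-np.abs(c) / b) / (2.0 * b)

def gauss_pdf(c, s):
    return np.exp(-0.5 * (c / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def em_shrink(coeffs, n_iter=50):
    """Fit a Gaussian(noise) + Laplacian(signal) mixture to wavelet
    coefficients by EM, then shrink each coefficient by the posterior
    probability that it is signal."""
    c = coeffs.ravel()
    s = c.std() / 2 + 1e-6          # initial noise std
    b = np.mean(np.abs(c)) + 1e-6   # initial Laplacian scale
    pi = 0.5                        # initial signal proportion
    for _ in range(n_iter):
        # E-step: responsibility that each coefficient is signal
        ps = pi * laplace_pdf(c, b)
        pn = (1.0 - pi) * gauss_pdf(c, s)
        r = ps / (ps + pn + 1e-12)
        # M-step: update mixture weight and distribution parameters
        pi = r.mean()
        b = (r * np.abs(c)).sum() / (r.sum() + 1e-12)
        s = np.sqrt(((1.0 - r) * c ** 2).sum() / ((1.0 - r).sum() + 1e-12))
    # shrink by posterior signal probability; also return estimated noise std
    return (r * c).reshape(coeffs.shape), s
```

    Because EM supplies the noise scale, no separate noise-variance estimator (e.g. a median-absolute-deviation rule) is required, which mirrors the point made in the abstract.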

  11. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 3 2013-04-01 2013-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  12. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2012 CFR

    2005-04-01

    ... 17 Commodity and Securities Exchanges 3 2005-04-01 2005-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  13. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2013 CFR

    2000-04-01

    ... 17 Commodity and Securities Exchanges 3 2000-04-01 2000-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a) Each application for an order under section 304(d)...

  14. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 3 2011-04-01 2011-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  15. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 3 2012-04-01 2012-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  16. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  17. 76 FR 55814 - 2,4-D; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-09

    ... AGENCY 40 CFR Part 180 2,4-D; Pesticide Tolerances AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: This regulation establishes tolerances for residues of 2,4-D in or on teff, bran... 180.142 be amended by establishing a tolerance for residues of the herbicide 2,4-D...

  18. 17 CFR 260.4d-8 - Content.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 4 2014-04-01 2014-04-01 false Content. 260.4d-8 Section 260.4d-8 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, TRUST INDENTURE ACT OF 1939 Rules Under Section 304 § 260.4d-8 Content. (a)...

  19. Use of Split Bregman denoising for iterative reconstruction in fluorescence diffuse optical tomography.

    PubMed

    Chamorro-Servent, Judit; Abascal, Juan F P J; Aguirre, Juan; Arridge, Simon; Correia, Teresa; Ripoll, Jorge; Desco, Manuel; Vaquero, Juan J

    2013-07-01

    Fluorescence diffuse optical tomography (fDOT) is a noninvasive imaging technique that makes it possible to quantify the spatial distribution of fluorescent tracers in small animals. fDOT image reconstruction is commonly performed by means of iterative methods such as the algebraic reconstruction technique (ART). The useful results yielded by more advanced l1-regularized techniques for signal recovery and image reconstruction, together with the recent publication of the Split Bregman (SB) procedure, led us to propose a new approach to the fDOT inverse problem, namely, ART-SB. This method alternates a cost-efficient reconstruction step (an ART iteration) with a denoising filtering step that minimizes the total variation of the image using the SB method, which can be solved efficiently and quickly. We applied this method to simulated and experimental fDOT data and found that ART-SB provides substantial benefits over conventional ART.

  20. Evaluation of denoising algorithms for biological electron tomography.

    PubMed

    Narasimha, Rajesh; Aganj, Iman; Bennett, Adam E; Borgnia, Mario J; Zabransky, Daniel; Sapiro, Guillermo; McLaughlin, Steven W; Milne, Jacqueline L S; Subramaniam, Sriram

    2008-10-01

    Tomograms of biological specimens derived using transmission electron microscopy can be intrinsically noisy due to the use of low electron doses, the presence of a "missing wedge" in most data collection schemes, and inaccuracies arising during 3D volume reconstruction. Before tomograms can be interpreted reliably, for example, by 3D segmentation, it is essential that the data be suitably denoised using procedures that can be individually optimized for specific data sets. Here, we implement a systematic procedure to compare various nonlinear denoising techniques on tomograms recorded at room temperature and at cryogenic temperatures, and establish quantitative criteria to select a denoising approach that is most relevant for a given tomogram. We demonstrate that using an appropriate denoising algorithm facilitates robust segmentation of tomograms of HIV-infected macrophages and Bdellovibrio bacteria obtained from specimens at room and cryogenic temperatures, respectively. We validate this strategy of automated segmentation of optimally denoised tomograms by comparing its performance with manual extraction of key features from the same tomograms.

  1. Fully 4D list-mode reconstruction applied to respiratory-gated PET scans

    NASA Astrophysics Data System (ADS)

    Grotus, N; Reader, A J; Stute, S; Rosenwald, J C; Giraud, P; Buvat, I

    2009-03-01

    18F-fluoro-deoxy-glucose (18F-FDG) positron emission tomography (PET) is one of the most sensitive and specific imaging modalities for the diagnosis of non-small cell lung cancer. A drawback of PET is that it requires several minutes of acquisition per bed position, which results in images being affected by respiratory blur. Respiratory gating techniques have been developed to deal with respiratory motion in the PET images. However, these techniques considerably increase the level of noise in the reconstructed images unless the acquisition time is increased. The aim of this paper is to evaluate a four-dimensional (4D) image reconstruction algorithm that combines the acquired events in all the gates whilst preserving the motion deblurring. This algorithm was compared to classic ordered subset expectation maximization (OSEM) reconstruction of gated and non-gated images, and to temporal filtering of gated images reconstructed with OSEM. Two datasets were used for comparing the different reconstruction approaches: one involving the NEMA IEC/2001 body phantom in motion, the other obtained using Monte-Carlo simulations of the NCAT breathing phantom. Results show that 4D reconstruction reaches a similar performance in terms of the signal-to-noise ratio (SNR) as non-gated reconstruction whilst preserving the motion deblurring. In particular, 4D reconstruction improves the SNR compared to respiratory-gated images reconstructed with the OSEM algorithm. Temporal filtering of the OSEM-reconstructed images helps improve the SNR, but does not achieve the same performance as 4D reconstruction. 4D reconstruction of respiratory-gated images thus appears as a promising tool to reach the same performance in terms of the SNR as non-gated acquisitions while reducing the motion blur, without increasing the acquisition time.
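
    The gated reconstructions compared above all build on the ML-EM update that OSEM accelerates: x ← x · (Aᵀ(y / Ax)) / Aᵀ1. The following is a minimal static MLEM sketch, without the gates, subsets, or motion modelling of the 4D algorithm; the system matrix A and count vector y are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Minimal MLEM reconstruction sketch: A holds detection
    probabilities (rows = measurement bins), y the measured counts."""
    x = np.ones(A.shape[1])                  # flat initial image
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    A 4D list-mode variant would, roughly speaking, replace A with a gate-dependent (motion-warped) operator and share one image estimate across all gates, which is why it can use all counts without the noise penalty of reconstructing each gate separately.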

  2. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  3. 4D reconstruction of the past

    NASA Astrophysics Data System (ADS)

    Doulamis, Anastasios; Ioannides, Marinos; Doulamis, Nikolaos; Hadjiprocopis, Andreas; Fritsch, Dieter; Balet, Olivier; Julien, Martine; Protopapadakis, Eftychios; Makantasis, Kostas; Weinlinger, Guenther; Johnsons, Paul S.; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2013-08-01

    One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Search engines can search text for keywords using algorithms of varied intelligence and with limited success. Searching images is a much more complex and computationally intensive task, but some initial steps have already been made in this direction, mainly in face recognition. This paper describes our proposed pipeline for integrating data available in Internet repositories and social media, such as photographs, animation and text, to produce 3D models of archaeological monuments, as well as enriching multimedia of cultural/archaeological interest with metadata and harvesting the end products to EUROPEANA. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from the thousands of images floating around the web.

  4. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338

  5. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  6. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    NASA Astrophysics Data System (ADS)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space and ground based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has significant limitations, as it results in a significant reduction of spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; here, accuracy is defined as the error between the true and inverted photon rates. Hence, non-linear bias errors can be mitigated and spatial information can be preserved.

  7. Terrestrial Laser Scanner Data Denoising by Dictionary Learning of Sparse Coding

    NASA Astrophysics Data System (ADS)

    Smigiel, E.; Alby, E.; Grussenmeyer, P.

    2013-07-01

    Point cloud processing is basically a signal processing issue. The huge amount of data collected with Terrestrial Laser Scanners or photogrammetry techniques faces the classical questions linked with signal or image processing. Among others, denoising and compression are questions which have to be addressed in this context. That is why one has to turn attention to signal theory: it can guide good practice or inspire new ideas from the latest developments of the field. The literature has shown for decades how strong and dynamic the theoretical field is and how efficient the derived algorithms have become. For about ten years, a new technique has appeared, known as compressive sensing or compressive sampling. It is based first on sparsity, which is an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have shown their efficiency. Sparsity can also be seen as redundancy removal of natural signals. Combined with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very early stage of sampling: instead of sampling the signal at a high rate and removing redundancy as a second stage, the acquisition stage itself may be run with redundancy removal. This paper first gives some theoretical aspects of these ideas with simple mathematics. Then, the idea of compressive sensing for a Terrestrial Laser Scanner is examined as a potential research question and, finally, a denoising scheme based on dictionary learning of sparse coding is tested. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to take benefit of its latest developments.
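
    The sparse-coding step that a learned dictionary is used with can be sketched via orthogonal matching pursuit (OMP): each signal (or patch) is approximated with a few atoms of a dictionary D. Dictionary learning (e.g. K-SVD) alternates this coding step with atom updates; the sketch below covers only the coding step, with illustrative names.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with at most k atoms
    (columns) of the dictionary D, returning the sparse coefficient vector."""
    resid, idx = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, idx]
        # re-fit all selected atoms jointly (the "orthogonal" part)
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

    Denoising then amounts to replacing each noisy patch by its k-sparse approximation D @ omp(D, patch, k): the few selected atoms capture structure while the residual, which is mostly noise, is discarded.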

  8. Application of Wavelet Analysis Technique in the Signal Denoising of Life Sign Detection

    NASA Astrophysics Data System (ADS)

    Zhen, Zhang; Fang, Liu

    In life sign detection, the radar echo signal is very weak and hard to extract. To solve this problem, weak life-signal denoising based on the wavelet transform is studied. Through a study of the wavelet threshold denoising method, its use on weak life signals against a strong noise background, and verification by Matlab simulation, the results show that wavelet threshold denoising can effectively remove noise from a weak life signal and is an effective denoising and extraction method for weak life signals.
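    Wavelet threshold denoising of this kind can be sketched as follows — a minimal Python/NumPy illustration with a hand-rolled orthonormal Haar transform and soft thresholding (function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform: approximation, detail."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    """Invert one Haar level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(x, levels=3, thresh=1.0):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_fwd(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

# A weak periodic "life sign" buried in strong noise.
rng = np.random.default_rng(1)
n, sigma = 256, 0.3
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + sigma * rng.standard_normal(n)
denoised = wavelet_denoise(noisy, levels=3, thresh=sigma * np.sqrt(2 * np.log(n)))
```

    The threshold used here is the universal threshold σ√(2 ln n); practical systems would estimate σ from the data.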

  9. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the hyperspectral imager instrument (Hyperion), which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities to be less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and the universality of the prediction across channel counts is proven.
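    Hard-thresholding in the DCT domain, together with the kind of simple statistic used for prediction (the fraction of coefficients falling below the threshold), can be sketched as follows — an assumed minimal 2D Python/SciPy illustration, not the authors' 3D multichannel implementation; the 2.7σ threshold is a commonly used rule for DCT-based denoising:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_hard_threshold(img, thresh):
    """2D DCT -> zero coefficients below the threshold -> inverse DCT."""
    coef = dctn(img, norm='ortho')
    # Fraction of coefficients under the threshold: the kind of simple
    # statistic the paper uses to predict denoising efficiency.
    p_small = np.mean(np.abs(coef) < thresh)
    coef[np.abs(coef) < thresh] = 0.0
    return idctn(coef, norm='ortho'), p_small

rng = np.random.default_rng(2)
sigma = 10.0
clean = np.outer(np.linspace(0, 100, 64), np.ones(64))  # smooth test image
noisy = clean + sigma * rng.standard_normal((64, 64))
denoised, p_small = dct_hard_threshold(noisy, thresh=2.7 * sigma)
```

    With an orthonormal DCT the noise standard deviation is preserved in the transform domain, which is what makes a fixed multiple of σ a sensible threshold.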

  10. Denoising seismic data using wavelet methods: a comparison study

    NASA Astrophysics Data System (ADS)

    Hloupis, G.; Vallianatos, F.

    2009-04-01

    In order to derive onset times, amplitudes, or other useful characteristics from a seismogram, the usual denoising procedure involves a linear pass-band filter. This family of filters is zero-phase, which is desirable for preserving phase properties, but their efficiency is reduced when transients exist near the seismic signal. An alternative is the Wiener filter, which minimizes the mean square error between the recorded and expected signals; its main disadvantage is the assumption that signal and noise are stationary. This assumption does not hold for seismic signals, motivating denoising solutions that do not assume stationarity. Solutions based on the Wavelet Transform (WT) have proved effective for denoising problems across several areas. Here we present recent WT denoising methods (WDM) that will later be applied to seismic sequences of the Seismological Network of Crete. Wavelet denoising schemes have proved to be well adapted to several types of signals; for non-stationary signals such as seismograms, both linear and non-linear wavelet denoising methods seem promising. The contribution of this study is a comparison of wavelet denoising methods suitable for seismic signals, whose superiority over appropriate conventional filtering techniques has been shown in previous studies. The importance of wavelet denoising methods rests on two facts: they recover seismic signals with fewer artifacts than conventional filters (for high-SNR seismograms), and at the same time they provide satisfactory representations (for detecting the earthquake's primary arrival) of low-SNR seismograms or microearthquakes. The latter is very important for a possible development of an automatic procedure for the regular daily detection of small or non-regional earthquakes, especially when the number of stations is large.
Initially, their performance is measured over a database of synthetic seismic signals in order to evaluate the better wavelet
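    A standard ingredient shared by the wavelet denoising methods being compared is threshold selection. A common choice, sketched below in Python/NumPy, estimates the noise level robustly from the finest-scale detail coefficients via the median absolute deviation (MAD) and applies the Donoho-Johnstone universal threshold (a sketch under those standard definitions, not code from this study):

```python
import numpy as np

def mad_sigma(detail):
    """Robust noise-std estimate from wavelet detail coefficients."""
    return np.median(np.abs(detail - np.median(detail))) / 0.6745

def universal_threshold(detail, n):
    """Donoho-Johnstone universal threshold: sigma * sqrt(2 ln n)."""
    return mad_sigma(detail) * np.sqrt(2.0 * np.log(n))

# Sanity check on pure Gaussian noise of known standard deviation.
rng = np.random.default_rng(3)
detail = 2.0 * rng.standard_normal(100_000)
sigma_hat = mad_sigma(detail)
thresh = universal_threshold(detail, detail.size)
```

    The MAD estimate is robust to the sparse large coefficients contributed by the seismic signal itself, which is why it is preferred over the sample standard deviation here.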

  11. Three-dimensional fuzzy-directional processing to impulse video color denoising in real time environment

    NASA Astrophysics Data System (ADS)

    Rosales-Silva, Alberto J.; Ponomaryov, Volodymyr; Gallegos-Funes, Francisco

    2009-05-01

    A robust three-dimensional scheme using fuzzy and directional techniques for denoising video color images contaminated by impulsive random noise is presented. The scheme estimates noise and motion levels in a local area, detecting edges and fine details in an image video sequence. The proposed approach preserves the chromaticity properties of multidimensional and multichannel images. The algorithm was specially designed to reduce computational load, and its performance is quantified using objective criteria, such as Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Normalized Color Difference (NCD), as well as subjective visual assessment. The novel filter shows superior performance against other well-known algorithms found in the literature. Real-time analysis is carried out on a Digital Signal Processor (DSP) to demonstrate processing capability. The DSP, designed by Texas Instruments for multichannel processing in a multitasking setting, improves the performance of several tasks at once, enhancing processing time and reducing computational load on dedicated hardware.
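    The objective criteria mentioned (PSNR and MAE) are straightforward to compute; a minimal Python/NumPy sketch follows (NCD is omitted because it additionally depends on a color-space conversion):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mae(ref, test):
    """Mean Absolute Error between a reference and a test image."""
    return np.mean(np.abs(np.asarray(ref, float) - np.asarray(test, float)))

# Tiny worked example: a constant error of 16 grey levels.
ref = np.zeros((8, 8))
noisy = ref + 16.0
```

    For color video, these metrics are typically averaged over channels and frames.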

  12. 4D GPR Experiments--Towards the Virtual Lysimeter

    NASA Astrophysics Data System (ADS)

    Grasmueck, M.; Viggiano, D. A.; Day-Lewis, F. D.; Drasdis, J. B.; Kruse, S. E.; Or, D.

    2006-05-01

    In-situ monitoring of infiltration, water flow, and retention in the vadose zone currently relies primarily on invasive methods, which irreversibly disturb the original soil structure and alter its hydrologic behavior in the vicinity of the measurement. For example, use of lysimeters requires extraction and repacking of soil samples, and time-domain reflectometry (TDR) requires insertion of probes into the soil profile. This study investigates the use of repeated high-density 3D ground penetrating radar surveys (also known as 4D GPR) as a non-invasive alternative for detailed visualization and quantification of water flow in the vadose zone. Evaluation of the 4D GPR method was based on a series of controlled point-source water injection experiments into undisturbed beach sand deposits at Crandon Park in Miami, Florida. The goal of the GPR surveys was to image the shape and evolution of a wet-bulb as it propagates from the injection points (~0.5 m) towards the water table at 2.2 m depth. The experimental design was guided by predictive modeling using Hydrus 2D and finite-difference GPR waveform codes. Input parameters for the modeling were derived from hydrologic and electromagnetic characterization of representative sand samples. Guided by modeling results, we injected 30 to 40 liters of tap water through plastic-cased boreholes with slotted bottom sections (0.1 m) located 0.4 to 0.6 m below the surface. During and after injection, an area of 25 m2 was surveyed every 20 minutes using 250 and 500 MHz antennas with a grid spacing of 0.05 x 0.025 m. A total of 20 3D GPR surveys were completed over 3 infiltration sites. To confirm the wet-bulb shapes measured by GPR, we injected 2 liters of "brilliant blue" dye (~100 mg/l) along with a saline water tracer towards the end of one experiment. After completion of GPR scanning, a trench was excavated to examine the distribution of the saltwater and dye using TDR and visual inspection, respectively. 
Preliminary analysis of the 4D GPR

  13. Resolution enhancement of lung 4D-CT data using multiscale interphase iterative nonlocal means

    SciTech Connect

    Zhang Yu; Yap, Pew-Thian; Wu Guorong; Feng Qianjin; Chen Wufan; Lian Jun; Shen Dinggang

    2013-05-15

    Purpose: Four-dimensional computed tomography (4D-CT) has been widely used in lung cancer radiotherapy due to its capability in providing important tumor motion information. However, the prolonged scanning duration required by 4D-CT causes a considerable increase in radiation dose. To minimize the radiation-related health risk, radiation dose is often reduced at the expense of interslice spatial resolution. However, inadequate resolution in 4D-CT causes artifacts and increases uncertainty in tumor localization, which eventually results in extra damage to healthy tissues during radiotherapy. In this paper, the authors propose a novel postprocessing algorithm to enhance the resolution of lung 4D-CT data. Methods: The authors' premise is that anatomical information missing in one phase can be recovered from the complementary information embedded in other phases. The authors employ a patch-based mechanism to propagate information across phases for the reconstruction of intermediate slices in the longitudinal direction, where resolution is normally the lowest. Specifically, structurally matching and spatially nearby patches are combined for the reconstruction of each patch. For greater sensitivity to anatomical details, the authors employ a quad-tree technique to adaptively partition the image for more fine-grained refinement. The authors further devise an iterative strategy for significant enhancement of anatomical details. Results: The authors evaluated their algorithm using a publicly available lung dataset that consists of 10 4D-CT cases. The algorithm gives very promising results, with significantly enhanced image structures and far fewer artifacts. Quantitative analysis shows that the algorithm increases the peak signal-to-noise ratio by 3-4 dB and the structural similarity index by 3%-5% when compared with standard interpolation-based algorithms. Conclusions: The authors have developed a new algorithm to improve the resolution of 4D-CT. 
It outperforms
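    The patch-based mechanism underlying such reconstruction is essentially nonlocal-means weighting: each output value is a weighted average of intensities whose surrounding patches match. A minimal single-image 2D sketch in Python/NumPy (the authors' method matches patches across 4D-CT phases and adds quad-tree partitioning and iteration, none of which is shown here; all names and parameters are illustrative):

```python
import numpy as np

def nlm_denoise(img, psize=3, search=7, h=0.6):
    """Nonlocal means: average pixels weighted by patch similarity."""
    pad = psize // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    sr = search // 2
    for i in range(rows):
        for j in range(cols):
            p = padded[i:i + psize, j:j + psize]      # patch around (i, j)
            num = den = 0.0
            for di in range(max(0, i - sr), min(rows, i + sr + 1)):
                for dj in range(max(0, j - sr), min(cols, j + sr + 1)):
                    q = padded[di:di + psize, dj:dj + psize]
                    w = np.exp(-np.sum((p - q) ** 2) / (h * h))
                    num += w * img[di, dj]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(4)
clean = np.add.outer(np.linspace(0, 1, 24), np.linspace(0, 1, 24))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy)
```

    In the interphase variant, the search loop would run over patches of the *other* phases rather than the same image.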

  14. Motion4D-library extended

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2011-06-01

    The new version of the Motion4D-library now also includes the integration of a Sachs basis and the Jacobi equation to determine gravitational lensing of pointlike sources for arbitrary spacetimes.

    New version program summary

    Program title: Motion4D-library
    Catalogue identifier: AEEX_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEX_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 219 441
    No. of bytes in distributed program, including test data, etc.: 6 968 223
    Distribution format: tar.gz
    Programming language: C++
    Computer: All platforms with a C++ compiler
    Operating system: Linux, Windows
    RAM: 61 Mbytes
    Classification: 1.5
    External routines: GNU Scientific Library (GSL) (http://www.gnu.org/software/gsl/)
    Catalogue identifier of previous version: AEEX_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 703
    Does the new version supersede the previous version?: Yes
    Nature of problem: Solve the geodesic equation, and parallel and Fermi-Walker transport, in four-dimensional Lorentzian spacetimes. Determine gravitational lensing by integration of the Jacobi equation and parallel transport of the Sachs basis.
    Solution method: Integration of ordinary differential equations.
    Reasons for new version: The main novelty of the current version is the extension to integrate the Jacobi equation and the parallel transport of the Sachs basis along null geodesics. In combination, the change of the cross section of a light bundle, and thus the gravitational lensing effect of a spacetime, can be determined. Furthermore, we have implemented several new metrics.
    Summary of revisions: The main novelty of the current version is the integration of the Jacobi equation and the parallel transport of the Sachs basis along null geodesics. The corresponding set of equations read d²x^μ/dλ² = −Γ^μ_{ρσ} (dx^ρ/dλ)(dx^σ/dλ)
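    The core numerical task here — integrating the geodesic equation d²x^μ/dλ² = −Γ^μ_{ρσ} (dx^ρ/dλ)(dx^σ/dλ) as a first-order ODE system — can be illustrated on a toy example, geodesics on the unit 2-sphere, with a hand-written RK4 step (a sketch of the method only, not code from the Motion4D library):

```python
import numpy as np

def geodesic_rhs(state):
    """RHS for geodesics on the unit 2-sphere; state = (theta, phi, dtheta, dphi).
    Nonzero Christoffels: Gamma^theta_{phi phi} = -sin t cos t,
    Gamma^phi_{theta phi} = cot t."""
    th, _, dth, dph = state
    return np.array([dth, dph,
                     np.sin(th) * np.cos(th) * dph ** 2,   # -Gamma^theta_{pp} dphi^2
                     -2.0 * dth * dph / np.tan(th)])       # -2 Gamma^phi_{tp} dth dphi

def rk4_step(f, y, h):
    """One classical Runge-Kutta step of size h."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on the equator moving in phi: the geodesic is a great circle,
# so theta must stay at pi/2 while phi advances linearly in lambda.
state = np.array([np.pi / 2, 0.0, 0.0, 1.0])
for _ in range(100):
    state = rk4_step(geodesic_rhs, state, 0.01)
```

    In Motion4D the same structure holds in four dimensions, with the Christoffel symbols supplied by the chosen spacetime metric and the Jacobi/Sachs equations integrated alongside.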

  15. Image enhancement for pattern recognition

    NASA Astrophysics Data System (ADS)

    Huynh, Quyen Q.; Neretti, Nicola; Intrator, Nathan; Dobeck, Gerald J.

    1998-09-01

    We investigate various image enhancement techniques geared towards a specific detector. Our database consists of side-scan sonar images collected at the Naval Surface Warfare Center (NSWC), and the detector we use has proven to have excellent results on these data. We start by investigating various wavelet and wavelet packet denoising methods. Other methods we consider are based on more common filters (Gaussian and DOG filters). In wavelet based denoising we try different approaches, combining techniques that have been successfully used in signal and image denoising. We notice that the performance is mostly affected by the choice of the scale levels to which shrinkage is applied. We demonstrate that wavelet denoising can significantly improve detection performance while keeping low false alarm rates.

  16. SU-E-J-06: A Feasibility Study On Clinical Implementation of 4D-CBCT in Lung Cancer Treatment

    SciTech Connect

    Hu, Y; Stanford, J; Duggar, W; Ruan, C; He, R; Yang, C

    2014-06-01

    Purpose: Four-dimensional cone-beam CT (4D-CBCT) is a novel imaging technique for setting up patients with pulmonary lesions in radiation therapy. This work performs a feasibility study on the implementation of 4D-CBCT as image guidance for (1) SBRT and (2) Low Modulation (Low-Mod) IMRT in lung cancer treatment. Methods: Image artifacts and observer variability are evaluated by analyzing the 4D-CT QA phantom and patient 4D image data. There are two 4D-CBCT image artifacts: (1) spatial artifacts caused by a patient's irregular breathing pattern, which generate blurring and anatomy gaps/overlaps; (2) cone-beam scattering and beam-hardening artifacts, which affect the image spatial and contrast resolution. The couch shift varies between 1 mm and 3 mm across observers during 4D-CBCT registration. Breath training is highly recommended to improve respiratory regularity during CT simulation and treatment, especially for SBRT. The Elekta XVI 4.5 Symmetry protocol is adopted for patient 4D-CBCT scanning and intensity-based registration. Physician adjustments of the auto-registration are made prior to treatment. Physician peer review of 4D-CBCT image acquisition and registration is also recommended to reduce inter-observer variability. The average 4D-CT in reference volume coordinates is exported to MIM Vista 5.6.2 and manually fused to the planning CT for further evaluation. Results: (1) SBRT: 4D-CBCT is performed in a dry-run and in each treatment fraction. Image registration and couch shift are reviewed by another physician on the first fraction before treatment starts. (2) Low-Mod IMRT: 4D-CBCT is performed and peer reviewed on a weekly basis. Conclusion: 4D-CBCT in an SBRT dry-run can discover ITV discrepancies caused by low-quality 4D-CT simulation. 4D-CBCT during SBRT and Low-Mod IMRT treatment gives physicians more confidence in targeting the lung tumor and the capability to evaluate inter-fractional ITV changes. More advanced 4D-CBCT scan protocol and

  17. Fast interactive exploration of 4D MRI flow data

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow-related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing
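    Particle tracing through a velocity field — the basis of the animated path lines mentioned — reduces to integrating seed points through the (possibly time-dependent) flow. A deliberately minimal forward-Euler sketch in Python/NumPy (production tools would use higher-order integrators and trilinear interpolation of the measured 4D field; all names are illustrative):

```python
import numpy as np

def trace_pathline(velocity, seed, dt=0.01, steps=100):
    """Integrate a seed point through a flow.
    velocity: callable (pos, t) -> velocity vector at that position and time."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    for k in range(steps):
        pos = pos + dt * velocity(pos, k * dt)   # forward Euler step
        path.append(pos.copy())
    return np.array(path)

# Steady solid-body rotation: path lines are (approximately) circles.
rotation = lambda p, t: np.array([-p[1], p[0]])
path = trace_pathline(rotation, seed=[1.0, 0.0])
end = path[-1]
```

    The slight outward drift of the Euler-integrated circle is exactly why interactive flow tools usually prefer RK4 or adaptive integrators.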

  18. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate a tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction is implemented on GPU, significantly increasing computational efficiency through massive parallelism. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The image results show that both bone structures and the inside of the lung are well preserved and that the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and demonstrates image results equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
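    Diffusing a deformation from mesh vertices to voxels, as described, amounts to interpolating the vertex displacements inside each tetrahedron, e.g. via barycentric coordinates. A minimal Python/NumPy sketch (illustrative only; the abstract does not specify the actual interpolation used, and the names here are assumptions):

```python
import numpy as np

def barycentric_coords(p, tet):
    """Weights w (summing to 1) such that sum_i w_i * tet[i] == p; tet is 4x3."""
    A = np.vstack([np.asarray(tet).T, np.ones(4)])   # 4x4 linear system
    return np.linalg.solve(A, np.append(p, 1.0))

def interpolate_displacement(p, tet, vertex_disp):
    """Diffuse per-vertex displacement vectors to an interior point."""
    return barycentric_coords(p, tet) @ np.asarray(vertex_disp)

# Unit tetrahedron; its centroid gets equal weight from all four vertices.
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
disp = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])
centroid = tet.mean(axis=0)
u = interpolate_displacement(centroid, tet, disp)
```

    On a GPU this per-voxel interpolation is embarrassingly parallel, which is where the reported speed-up largely comes from.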

  19. SU-E-J-187: Individually Optimized Contrast-Enhancement 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    SciTech Connect

    Xue, M; Patel, K; Regine, W; Lane, B; D'Souza, W; Lu, W; Klahr, P

    2014-06-01

    Purpose: To study the feasibility of individually optimized contrast-enhancement (CE) 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation. To evaluate the image quality and contrast enhancement of tumor in the CE 4D-CT, compared to the clinical standard of CE 3D-CT and 4D-CT. Methods: In this IRB-approved study, each of the 7 PDA patients enrolled underwent 3 CT scans: a free-breathing 3D-CT with contrast (CE 3D-CT) followed by a 4D-CT without contrast (4D-CT) in the first study session, and a 4D-CT with individually synchronized contrast injection (CE 4D-CT) in the second study session. In CE 4D-CT, the time of full contrast injection was determined based on the time of peak enhancement for the test injection, injection rate, table speed, and longitudinal location and span of the pancreatic region. Physicians contoured both the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (end-of-exhalation for 4D-CT). The contrast between the tumor and normal pancreatic tissue was computed as the difference of the mean enhancement level of three 1 cm3 regions of interest in T and P, respectively. The Wilcoxon rank sum test was used to statistically compare the scores and contrasts. Results: In qualitative evaluations, both CE 3D-CT and CE 4D-CT scored significantly better than 4D-CT (4.0 and 3.6 vs. 2.6). There was no significant difference between CE 3D-CT and CE 4D-CT. In quantitative evaluations, the contrasts between the tumor and the normal pancreatic parenchyma were 0.6±23.4, −2.1±8.0, and −19.6±28.8 HU, in CE 3D-CT, 4D-CT, and CE 4D-CT, respectively. Although not statistically significant, CE 4D-CT achieved better contrast enhancement between the tumor and the normal pancreatic parenchyma than both CE 3D-CT and 4D-CT. Conclusion: CE 4D-CT achieved equivalent image quality and better contrast enhancement between tumor and normal pancreatic parenchyma than the clinical standard of CE 3D-CT and 4D-CT. This study was supported in part

  20. SU-E-J-120: Comparing 4D CT Computed Ventilation to Lung Function Measured with Hyperpolarized Xenon-129 MRI

    SciTech Connect

    Neal, B; Chen, Q

    2015-06-15

    Purpose: To correlate ventilation parameters computed from 4D CT to ventilation, perfusion, and gas exchange measured with hyperpolarized Xenon-129 MRI for a set of lung cancer patients. Methods: Hyperpolarized Xe-129 MRI lung scans were acquired for lung cancer patients, before and after radiation therapy, measuring ventilation, perfusion, and gas exchange. In the standard clinical workflow, these patients also received 4D CT scans before treatment. Ventilation was computed from 4D CT using deformable image registration (DIR). All phases of the 4D CT scan were registered using a B-spline deformable registration. Ventilation at the voxel level was then computed for each phase based on a Jacobian volume expansion metric, yielding phase-sorted ventilation images. Ventilation based upon 4D CT and Xe-129 MRI were co-registered, allowing qualitative visual comparison and quantitative comparison via the Pearson correlation coefficient. Results: Analysis shows a weak correlation between hyperpolarized Xe-129 MRI and 4D CT DIR ventilation, with a Pearson correlation coefficient of 0.17 to 0.22. The weak correlation could be due to the limitations of 4D CT, the registration algorithms, or the Xe-129 MRI imaging. Continued development will refine the DIR parameters to optimize the correlation. Conclusion: Current analysis yields a minimal correlation between 4D CT DIR and Xe-129 MRI ventilation. Funding provided by the 2014 George Amorino Pilot Grant in Radiation Oncology at the University of Virginia.
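    The Jacobian volume-expansion metric mentioned can be computed directly from a DIR displacement field: the local volume change is det(I + ∇u), and ventilation is often taken as det − 1. A minimal Python/NumPy sketch (an illustrative formulation, not the study's pipeline):

```python
import numpy as np

def jacobian_ventilation(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: (3, nx, ny, nz) displacement field in physical units.
    Returns det(I + grad u) - 1 per voxel (volume expansion fraction)."""
    grads = np.array([np.gradient(disp[c], *spacing) for c in range(3)])
    # grads[c, a] = d u_c / d x_a; build per-voxel Jacobian matrices.
    J = np.einsum('ca...->...ca', grads) + np.eye(3)   # J[..., c, a] = delta_ca + du_c/dx_a
    return np.linalg.det(J) - 1.0

# Uniform 10% expansion in every direction: det(1.1 I) - 1 = 0.331 everywhere.
x, y, z = np.meshgrid(np.arange(4.), np.arange(4.), np.arange(4.), indexing='ij')
disp = 0.1 * np.array([x, y, z])
vent = jacobian_ventilation(disp)
```

    In practice the displacement field would come from the B-spline registration between, e.g., end-exhale and end-inhale phases, with voxel spacing passed via `spacing`.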