Sample records for sinogram blurring function

  1. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) methods, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from the full set of measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to linear interpolation (LI) and BDI, respectively. Image blur was assessed with a head-and-neck phantom, where the blur of DSI was 20.1% and 24.3% less than that of LI and BDI, respectively. Compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Our method decreased the streak artifacts of sparsely acquired CBCT while constraining interpolation-induced image blur below that of the other interpolation methods.
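
The baseline the paper compares against, linear interpolation of intermediate projections, can be sketched in a few lines (toy array sizes; DSI's double-orientation estimation along sinogram structures is not reproduced here):

```python
import numpy as np

# Baseline linear interpolation (LI) of intermediate cone-beam projections,
# the reference method DSI improves on.
def interpolate_views(sino):
    """sino: (n_views, n_bins); returns the stack with one interpolated
    view inserted between each pair of measured views."""
    out = np.empty((2 * sino.shape[0] - 1, sino.shape[1]))
    out[0::2] = sino                          # keep measured views
    out[1::2] = 0.5 * (sino[:-1] + sino[1:])  # average neighboring views
    return out

views = np.arange(8.0).reshape(4, 2)          # toy "sinogram": 4 views, 2 bins
dense = interpolate_views(views)              # 7 views after interpolation
```

DSI replaces the plain neighbor average with interpolation along locally estimated orientations, which is what reduces the blur reported in the abstract.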

  2. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
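
The factored forward model can be sketched with toy sparse matrices (all shapes and matrix contents here are hypothetical stand-ins, not the paper's estimated operators):

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random

# Toy sketch of the factored system matrix P ~ B_sino @ G @ B_img.
n_pix, n_bins = 16, 24
G = sparse_random(n_bins, n_pix, density=0.2, format="csr",
                  random_state=0)          # geometric line-integral model
B_sino = identity(n_bins, format="csr")    # sinogram blurring (detector response; identity for simplicity)
B_img = identity(n_pix, format="csr")      # image blurring (LOR-degradation compensation)

x = np.ones(n_pix)                         # image estimate
y = B_sino @ (G @ (B_img @ x))             # forward projection, applied factor by factor
back = B_img.T @ (G.T @ (B_sino.T @ y))    # matched back projection (adjoint)
```

Applying the three sparse factors in sequence, rather than one dense system matrix, is what yields the storage and computation savings the abstract describes.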

  3. PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.

    2016-02-01

Positron emission tomography (PET) projection data, or sinograms, suffer from poor statistics and randomness, producing noisy PET images. To improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions, and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum-likelihood expectation maximization (OSEM), and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR), and edge preservation capability. Further analysis of the achieved improvement is also carried out specific to the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration, and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
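
A hedged 1D stand-in for the mean-median idea may help fix intuition: a median pass suppresses outliers while preserving edges, and a short mean pass then smooths flat regions. The paper's actual 3D filter design is not given in the abstract, so this is an illustrative sketch only:

```python
import numpy as np

# Illustrative 1D mean-median hybrid (assumed design, not the authors' 3D filter).
def mean_median_1d(x, k=3):
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    med = np.array([np.median(xp[i:i + k]) for i in range(len(x))])   # edge-preserving pass
    mp = np.pad(med, pad, mode="edge")
    return np.array([mp[i:i + k].mean() for i in range(len(x))])      # flat-region smoothing pass

rng = np.random.default_rng(0)
noisy = 100.0 + rng.standard_normal(256)   # flat region plus noise
smoothed = mean_median_1d(noisy)
```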

  4. Efficient system modeling for a small animal PET scanner with tapered DOI detectors.

    PubMed

    Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi

    2016-01-21

A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth-of-interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix consists mainly of two components: a sinogram blurring matrix and a geometric matrix. The geometric matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both simulation studies and real-data experiments are performed in the fully 3D mode to study the image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.

  5. Noise reduction for low-dose helical CT by 3D penalized weighted least-squares sinogram smoothing

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

Helical computed tomography (HCT) has several advantages over conventional step-and-shoot CT for imaging a relatively large object, especially for dynamic studies. However, HCT may significantly increase the patient's X-ray exposure. This work aims to reduce the radiation by lowering the X-ray tube current (mA) and filtering the noise of the low-mA (or low-dose) sinogram. Based on the noise properties of the HCT sinogram, a three-dimensional (3D) penalized weighted least-squares (PWLS) objective function was constructed, and an optimal sinogram was estimated by minimizing the objective function. To account for the differences in signal correlation among the different directions of the HCT sinogram, an anisotropic Markov random field (MRF) Gibbs function was designed as the penalty. The minimization of the objective function was performed by an iterative Gauss-Seidel updating strategy. The effectiveness of the 3D-PWLS sinogram smoothing for low-dose HCT was demonstrated by a 3D Shepp-Logan head phantom study. Comparison studies with our previously developed KL-domain PWLS sinogram smoothing algorithm indicate that the KL+2D-PWLS algorithm shows better performance on the in-plane noise-resolution trade-off, while the 3D-PWLS shows better performance on the z-axis noise-resolution trade-off. Receiver operating characteristic (ROC) studies using a channelized Hotelling observer (CHO) show that the 3D-PWLS and KL+2D-PWLS algorithms have similar detectability performance in a low-contrast environment.
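
A scalar 1D analogue of the PWLS Gauss-Seidel update may help fix ideas. The paper's version is 3D with an anisotropic MRF penalty; the weights and the simple first-difference penalty below are simplified assumptions:

```python
import numpy as np

# Toy 1D PWLS sinogram smoothing by Gauss-Seidel:
# minimize sum_i w_i (y_i - s_i)^2 + beta * sum_{i~j} (s_i - s_j)^2
rng = np.random.default_rng(1)
truth = 100.0 * np.sin(np.linspace(0.0, np.pi, 64)) + 20.0
y = truth + rng.normal(0.0, np.sqrt(truth))    # signal-dependent noise, as in low-dose sinograms
w = 1.0 / np.clip(y, 1.0, None)                # PWLS weights ~ inverse variance (assumed model)
beta, s = 0.3, y.copy()

for _ in range(100):                           # Gauss-Seidel sweeps
    for i in range(s.size):
        nb = s[max(i - 1, 0):i].sum() + s[i + 1:i + 2].sum()  # neighbor sum
        n_nb = (i > 0) + (i < s.size - 1)                     # neighbor count
        # Closed-form per-site minimizer with neighbors held fixed:
        s[i] = (w[i] * y[i] + beta * nb) / (w[i] + beta * n_nb)
```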

  6. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image with maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
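
The MLEM deconvolution idea can be sketched in 1D with the classic Richardson-Lucy iteration, treating the motion blur as a known kernel (kernel and sizes here are illustrative, not the paper's motion model):

```python
import numpy as np

# Minimal 1D Richardson-Lucy (MLEM) deconvolution sketch.
def mlem_deconvolve(blurred, kernel, n_iter=50):
    est = np.full_like(blurred, blurred.mean())        # flat nonnegative start
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")   # forward blur of estimate
        ratio = blurred / np.maximum(conv, 1e-12)      # data / model ratio
        est = est * np.convolve(ratio, kernel[::-1], mode="same")  # multiplicative update
    return est

truth = np.zeros(64)
truth[28:36] = 10.0                                    # a small bright structure
kernel = np.ones(5) / 5.0                              # 5-sample uniform "motion" blur
blurred = np.convolve(truth, kernel, mode="same")
restored = mlem_deconvolve(blurred, kernel)
```

The multiplicative update keeps the estimate nonnegative, which is why MLEM-style deconvolution suits count data such as PET.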

  7. Development of virtual patient models for permanent implant brachytherapy Monte Carlo dose calculations: interdependence of CT image artifact mitigation and tissue assignment.

    PubMed

    Miksys, N; Xu, C; Beaulieu, L; Thomson, R M

    2015-08-07

This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS, ranging from the AAPM-ESTRO-ABS TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide artifact mitigation comparable to the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower than for the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than for prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies for various permanent implant brachytherapy treatments.
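
Of the image-based MAR methods, simple threshold replacement (STR) is the most direct; a hedged sketch of the idea follows (the threshold and fill value are assumptions, not the authors' calibrated choices):

```python
import numpy as np

# Hedged sketch of "simple threshold replacement" (STR): bright seed voxels
# above a CT-number threshold are replaced with a plausible tissue value.
# Note this cannot fix low-CT-number artifacts, the limitation the abstract cites.
def str_mar(image, threshold=2000.0, fill_value=40.0):
    out = image.copy()
    out[out > threshold] = fill_value     # replace bright metal/seed voxels
    return out

slice_ = np.full((8, 8), 40.0)            # soft tissue background (~40 HU)
slice_[3, 3] = 3000.0                     # a bright seed artifact
corrected = str_mar(slice_)
```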

  8. Sinogram restoration in computed tomography with an edge-preserving penalty

    PubMed Central

    Little, Kevin J.; La Rivière, Patrick J.

    2015-01-01

    Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration. 
However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
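
The Huber penalty at the center of this work is quadratic for small differences (smoothing noise) and linear for large ones (penalizing edges less than a pure quadratic would), with a parameter delta marking the crossover:

```python
import numpy as np

# The standard Huber penalty: quadratic near zero, linear in the tails.
def huber(t, delta=1.0):
    a = np.abs(t)
    return np.where(a <= delta,
                    0.5 * t * t,                 # quadratic region: noise smoothing
                    delta * (a - 0.5 * delta))   # linear region: edge preservation

small_diff = huber(np.array(0.5))   # 0.125, same as the quadratic 0.5*t^2
large_diff = huber(np.array(3.0))   # 2.5, versus 4.5 for the quadratic
```

The smaller penalty on large differences is exactly what lets the restoration retain edges that a quadratic penalty would blur.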

  9. Sinogram restoration in computed tomography with an edge-preserving penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, Kevin J., E-mail: little@uchicago.edu; La Rivière, Patrick J.

    2015-03-15

Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration.
However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data.

  10. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both static and respiratory moving regions compared to the 4D FDK and MKB methods.
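
The final voxel-wise weighting step can be sketched as follows. The mapping from local motion to weight is a hypothetical linear ramp; the abstract does not specify the authors' exact weighting function:

```python
import numpy as np

# Toy motion-weighted combination of a 3D FDK volume and an interpolated
# 4D CBCT phase volume (hypothetical weighting; illustrative values only).
shape = (8, 8)
rng = np.random.default_rng(4)
recon_3d = rng.random(shape)                  # 3D FDK: low streaks, motion-averaged
recon_4d = rng.random(shape)                  # interpolated 4D CBCT: phase-resolved
motion = np.zeros(shape)
motion[:, 4:] = 5.0                           # local motion magnitude (mm), e.g. from deformable registration

w = np.clip(motion / 5.0, 0.0, 1.0)           # assumed ramp: full 4D weight at >= 5 mm motion
final = w * recon_4d + (1.0 - w) * recon_3d   # static voxels take the 3D FDK value
```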

  11. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction.

    PubMed

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-21

The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both static and respiratory moving regions compared to the 4D FDK and MKB methods.

  12. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2005-04-01

Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and sample variance. Spatially invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle the data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to other approaches that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlation information for an optimally regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least-squares) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimum for an optimally regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for the low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS-treated sinogram data prior to the backprojection operation (for image reconstruction). In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.

  13. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

The noise of the low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on these observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses Gauss-Seidel iterations to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses iterated conditional mode calculations to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in the reconstructed images. Computer simulations concur with the phantom experiments in terms of the noise-resolution trade-off and detectability in a low-contrast environment. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
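
The KL decorrelation step can be sketched with a toy three-view example (the per-component PWLS smoother is replaced here by simply zeroing the low-energy components, a crude stand-in for illustration):

```python
import numpy as np

# Toy sketch of KL decorrelation across neighboring views.
rng = np.random.default_rng(2)
base = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))
views = np.stack([base, 0.9 * base, 0.8 * base])   # three strongly correlated neighboring views
noisy = views + 0.3 * rng.standard_normal(views.shape)

eigvals, eigvecs = np.linalg.eigh(np.cov(noisy))   # KL basis from the 3x3 view covariance
kl = eigvecs.T @ noisy                             # decorrelated KL components (ascending energy)
kl[:2] = 0.0                                       # stand-in for per-component PWLS smoothing
denoised = eigvecs @ kl                            # back to the view domain
```

Because the signal is concentrated in the top KL component while the noise spreads over all three, suppressing the low-energy components removes mostly noise.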

  14. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

A CT scanner measures the energy deposited in each channel of a detector array by x-rays that have been partially absorbed on their way through the object. The measurement process is complex, and quantitative measurements are inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration remains relevant in the standard-dose regime since it can outperform standard approaches and allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) a direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem.
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
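
The iterative strategy amounts to solving the PWLS normal equations (W + beta*R) s = W y by conjugate gradients; a toy 1D instance is sketched below (W, R, and beta are illustrative, not the authors' calibrated operators):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

# Toy conjugate-gradient PWLS restoration: (W + beta*R) s = W y.
n = 128
rng = np.random.default_rng(3)
truth = 100.0 + 50.0 * np.cos(np.linspace(0.0, 3.0 * np.pi, n))
y = truth + rng.normal(0.0, 5.0, n)

W = identity(n, format="csr") / 25.0                     # inverse-variance weights (sigma = 5)
R = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))   # roughness (graph Laplacian) penalty
beta = 0.05
s, info = cg(W + beta * R, W @ y)                        # CG solve; info == 0 on convergence
```

Since the system matrix is symmetric positive definite and sparse, CG needs only matrix-vector products, which is what made it competitive against the direct inversion in the comparison above.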

  15. List-mode reconstruction for the Biograph mCT with physics modeling and event-by-event motion correction

    NASA Astrophysics Data System (ADS)

    Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.

    2013-08-01

Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR to the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32-bit packets, where averaging of lines of response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) positioning technique that addresses axial and transaxial LOR grouping in 32-bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm, which uses the average 32-bit LOR sinogram positioning. A moving phantom study and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion.
We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.

  16. List-mode Reconstruction for the Biograph mCT with Physics Modeling and Event-by-Event Motion Correction

    PubMed Central

    Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.

    2013-01-01

    Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32-bit packets, where averaging of lines of response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic assignment of LOR positions (pLOR) that addresses axial and transaxial LOR grouping in 32-bit data. Second, two simplified approaches for 3D TOF scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + time-of-flight (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32-bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. 
We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction. PMID:23892635

  17. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
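
    Since the PWLS objective is quadratic, both strategies amount to solving one symmetric positive-definite linear system. A minimal sketch of the iterative conjugate-gradient route, checked against the closed-form matrix solution (dense toy matrices stand in for the sparse sinogram-restoration operators; all names are illustrative, not the authors' code):

    ```python
    import numpy as np

    def pwls_cg(A, W, y, beta, R, n_iter=200):
        """Minimize (y - Ax)^T W (y - Ax) + beta x^T R x via conjugate gradient,
        i.e. solve the normal equations (A^T W A + beta R) x = A^T W y using
        only matrix-vector products (no explicit inverse)."""
        H = lambda x: A.T @ (W @ (A @ x)) + beta * (R @ x)
        b = A.T @ (W @ y)
        x = np.zeros(A.shape[1])
        r = b - H(x)
        p = r.copy()
        rs = r @ r
        for _ in range(n_iter):
            Hp = H(p)
            alpha = rs / (p @ Hp)
            x += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-12:   # converged
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 8))          # toy "system" matrix
    W = np.diag(rng.uniform(0.5, 2.0, 20))    # statistical weights (inverse variances)
    y = rng.standard_normal(20)               # noisy "sinogram"
    R = np.eye(8)                             # quadratic roughness penalty (identity here)
    x_cg = pwls_cg(A, W, y, beta=0.1, R=R)
    x_direct = np.linalg.solve(A.T @ W @ A + 0.1 * R, A.T @ W @ y)  # closed form
    assert np.allclose(x_cg, x_direct, atol=1e-8)
    ```

    In the full-scale problem the matrix is never inverted or even formed densely; the CG route needs only the sparse forward and adjoint products, which is why it scales where the direct inversion does not.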

  18. Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.

    PubMed

    Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian

    2017-01-05

    Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originates from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm. The experiments used both simulated and clinical datasets. Subjective evaluation shows that the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional algorithms, which produce severe secondary artifacts resulting from inappropriate interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method produced the best image quality. 
For both the simulated and clinical datasets, the proposed algorithm clearly reduced the metal artifacts.
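
    The interpolation-MAR baseline that the diffusion method is compared against can be sketched as row-wise linear interpolation across the metal trace. A minimal illustration, not the authors' implementation; the function name and toy ramp sinogram are invented for the example:

    ```python
    import numpy as np

    def inpaint_metal_trace(sinogram, trace_mask):
        """Fill metal-corrupted bins by 1-D linear interpolation along each
        projection row -- the interpolation-MAR baseline."""
        filled = sinogram.astype(float).copy()
        cols = np.arange(sinogram.shape[1])
        for i in range(sinogram.shape[0]):
            bad = trace_mask[i]
            if bad.any() and (~bad).any():
                filled[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
        return filled

    # A linear ramp row with three corrupted bins is recovered exactly.
    sino = np.tile(np.arange(10.0), (4, 1))
    mask = np.zeros_like(sino, dtype=bool)
    mask[:, 4:7] = True
    sino_bad = sino.copy()
    sino_bad[mask] = 0.0
    restored = inpaint_metal_trace(sino_bad, mask)
    assert np.allclose(restored, sino)
    ```

    Real traces cut across curved sinusoids rather than flat ramps, which is exactly where this naive row-wise scheme produces the secondary artifacts that prior-image and diffusion-based methods aim to avoid.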

  19. WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, K; Riviere, P La

    2014-06-15

    Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated for several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as only applying the penalty to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects for helical cone-beam CT is feasible and should be able to be applied to clinical data. 
When applied with the edge-preserving Huber penalty, sinogram restoration leads to an improvement in resolution-variance tradeoffs.

  20. Fistulogram/Sinogram

    MedlinePlus

    ... Fistulogram/Sinogram A fistulogram uses a form of real-time x-ray called fluoroscopy and a barium-based ... best treatment plan for you. Fistulograms/sinograms provide real-time images that may be evaluated immediately. No radiation ...

  1. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
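
    The structure-tensor step behind such shape-driven interpolation can be sketched in a few lines: local gradients are pooled into a 2 x 2 tensor whose dominant eigenvector gives the gradient orientation, with image structures running perpendicular to it. A minimal sketch under the usual tensor definition; the function name is illustrative:

    ```python
    import numpy as np

    def dominant_orientation(img):
        """Dominant gradient orientation of a 2-D patch from its structure
        tensor; image structures run perpendicular to the returned angle."""
        fy, fx = np.gradient(img.astype(float))      # derivatives along rows, cols
        Jxx, Jyy, Jxy = (fx * fx).mean(), (fy * fy).mean(), (fx * fy).mean()
        return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)

    # Intensity ramp along columns: gradient points along x, so the angle is ~0;
    # transposing the patch rotates the gradient to y, giving ~pi/2.
    ramp = np.tile(np.arange(32.0), (32, 1))
    assert abs(dominant_orientation(ramp)) < 1e-6
    assert abs(dominant_orientation(ramp.T) - np.pi / 2) < 1e-6
    ```

    In the angular-interpolation setting the patch is a local window of the 3-D sinogram, and new intermediate views are interpolated along the estimated structure direction rather than straight across view angles.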

  2. Temporal Processing of Dynamic Positron Emission Tomography via Principal Component Analysis in the Sinogram Domain

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Parker, B. J.; Feng, D. D.; Fulton, R.

    2004-10-01

    In this paper, we compare various temporal analysis schemes applied to dynamic PET for improved quantification, image quality and temporal compression purposes. We compare an optimal sampling schedule (OSS) design, principal component analysis (PCA) applied in the image domain, and principal component analysis applied in the sinogram domain; for region-of-interest quantification, sinogram-domain PCA is combined with the Huesman algorithm to quantify from the sinograms directly without requiring reconstruction of all PCA channels. Using a simulated phantom FDG brain study and three clinical studies, we evaluate the fidelity of the compressed data for estimation of local cerebral metabolic rate of glucose by a four-compartment model. Our results show that using a noise-normalized PCA in the sinogram domain gives similar compression ratio and quantitative accuracy to OSS, but with substantially better precision. These results indicate that sinogram-domain PCA for dynamic PET can be a useful preprocessing stage for PET compression and quantification applications.
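
    The sinogram-domain compression step can be illustrated with a plain SVD-based PCA: dynamic frames are low-rank because every time-activity curve is a mixture of a few temporal components. Toy data only, and not the noise-normalized variant used in the paper:

    ```python
    import numpy as np

    def sinogram_pca(frames, k):
        """Compress a dynamic sinogram sequence (frames: T x bins) to its first
        k principal components and reconstruct it."""
        mean = frames.mean(axis=0)
        U, s, Vt = np.linalg.svd(frames - mean, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k] + mean

    # Two underlying time-activity curves -> rank-2 dynamics, so k = 2 is exact.
    t = np.linspace(0, 1, 30)
    basis = np.vstack([np.exp(-3 * t), 1 - np.exp(-5 * t)])   # (2, T) temporal curves
    spatial = np.random.default_rng(1).random((2, 200))       # (2, bins) weights
    frames = basis.T @ spatial                                # (T, bins) sinograms
    assert np.allclose(sinogram_pca(frames, 2), frames, atol=1e-8)
    ```

    Working in the sinogram domain means only the k retained component "sinograms" need reconstruction, which is the source of the computational saving reported above.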

  3. Processing of CT sinograms acquired using a VRX detector

    NASA Astrophysics Data System (ADS)

    Jordan, Lawrence M.; DiBianca, Frank A.; Zou, Ping; Laughter, Joseph S.; Zeman, Herbert D.

    2000-04-01

    A 'variable resolution x-ray detector' (VRX) capable of resolving beyond 100 cycles/mm in a single dimension has been proposed by DiBianca et al. The use of detectors of this design for computed-tomography (CT) imaging requires novel preprocessing of the data to correct for the detector's non-uniform imaging characteristics over its range of view. This paper describes algorithms developed specifically to adjust VRX data for varying magnification, source-to-detector range and beam obliquity, and to sharpen reconstructions by deconvolving the ray impulse function. The preprocessing also incorporates nonlinear interpolation of VRX raw data into canonical CT sinogram formats.

  4. Theory of reflectivity blurring in seismic depth imaging

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.; Kitchenside, P. W.; Fletcher, R. P.

    2016-05-01

    A subsurface extended image gather obtained during controlled-source depth imaging yields a blurred kernel of an interface reflection operator. This reflectivity kernel or reflection function is comprised of the interface plane-wave reflection coefficients and so, in principle, the gather contains amplitude versus offset or angle information. We present a modelling theory for extended image gathers that accounts for variable illumination and blurring, under the assumption of a good migration-velocity model. The method involves forward modelling as well as migration or back propagation so as to define a receiver-side blurring function, which contains the effects of the detector array for a given shot. Composition with the modelled incident wave and summation over shots then yields an overall blurring function that relates the reflectivity to the extended image gather obtained from field data. The spatial evolution or instability of blurring functions is a key concept and there is generally not just spatial blurring in the apparent reflectivity, but also slowness or angle blurring. Gridded blurring functions can be estimated with, for example, a reverse-time migration modelling engine. A calibration step is required to account for ad hoc band limitedness in the modelling and the method also exploits blurring-function reciprocity. To demonstrate the concepts, we show numerical examples of various quantities using the well-known SIGSBEE test model and a simple salt-body overburden model, both for 2-D. The moderately strong slowness/angle blurring in the latter model suggests that the effect on amplitude versus offset or angle analysis should be considered in more realistic structures. Although the description and examples are for 2-D, the extension to 3-D is conceptually straightforward. The computational cost of overall blurring functions implies their targeted use for the foreseeable future, for example, in reservoir characterization. 
The description is for scalar waves, but the extension to elasticity is foreseeable and we emphasize the separation of the overburden and survey-geometry blurring effects from the nature of the target scatterer.

  5. Impact of joint statistical dual-energy CT reconstruction of proton stopping power images: Comparison to image- and sinogram-domain material decomposition approaches.

    PubMed

    Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2018-05-01

    The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images were evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. 
A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
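
    The final mapping from estimated electron density and I-value to SPR uses the Bethe equation. A minimal sketch, assuming a 175 MeV proton and I_water = 75 eV (both illustrative choices, not values from the paper):

    ```python
    import math

    MEC2_EV = 0.511e6     # electron rest energy [eV]
    MPC2_MEV = 938.272    # proton rest energy [MeV]
    I_WATER_EV = 75.0     # assumed mean excitation energy of water [eV]

    def beta_sq(T_MeV):
        """Relativistic beta^2 of a proton with kinetic energy T_MeV."""
        gamma = 1.0 + T_MeV / MPC2_MEV
        return 1.0 - 1.0 / gamma ** 2

    def spr(rho_e_rel, I_eV, T_MeV=175.0):
        """Stopping-power ratio to water via the Bethe equation, from the
        relative electron density and mean excitation energy."""
        b2 = beta_sq(T_MeV)
        bethe = lambda I: math.log(2.0 * MEC2_EV * b2 / (I * (1.0 - b2))) - b2
        return rho_e_rel * bethe(I_eV) / bethe(I_WATER_EV)

    assert abs(spr(1.0, I_WATER_EV) - 1.0) < 1e-12   # water maps to SPR = 1
    assert spr(1.0, 60.0) > 1.0                      # lower I -> higher SPR
    assert abs(spr(1.05, 75.0) - 1.05) < 1e-12       # scales with electron density
    ```

    The ratio form shows why SPR is driven mostly by electron density, with the logarithmic I-value term contributing only a slowly varying correction — which is also why errors in (rho_e, I) estimation propagate so differently into SPR.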

  6. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922

  7. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
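
    The effect of a diagonal preconditioner on CG convergence can be illustrated on a deliberately ill-conditioned toy system. This sketch uses a Jacobi preconditioner on a diagonal matrix purely for illustration; the paper's preconditioner is instead built from parameter/sensitivity ratios:

    ```python
    import numpy as np

    def cg(H, b, M_inv=None, n_iter=10):
        """(Preconditioned) conjugate gradient for the SPD system H x = b.
        M_inv applies the inverse preconditioner (identity when None)."""
        if M_inv is None:
            M_inv = lambda r: r
        x = np.zeros_like(b)
        r = b - H @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(n_iter):
            Hp = H @ p
            alpha = rz / (p @ Hp)
            x += alpha * p
            r -= alpha * Hp
            z = M_inv(r)
            rz_new = r @ z
            if rz_new < 1e-30:   # converged: avoid 0/0 in the next update
                break
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    rng = np.random.default_rng(2)
    d = 10.0 ** rng.uniform(-3, 3, 50)     # widely spread diagonal -> ill-conditioned
    H = np.diag(d)
    b = rng.standard_normal(50)
    x_cg = cg(H, b, n_iter=10)                            # plain CG
    x_pcg = cg(H, b, M_inv=lambda r: r / d, n_iter=10)    # Jacobi-preconditioned CG
    err_cg = np.linalg.norm(H @ x_cg - b)
    err_pcg = np.linalg.norm(H @ x_pcg - b)
    assert err_pcg < err_cg   # preconditioning sharply accelerates convergence
    ```

    The wildly different scales of kinetic parameters (e.g. K1 versus k2) play the role of the spread diagonal here, which is why a scale-matched diagonal preconditioner cut the iteration count from hundreds to tens.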

  8. Photographic Image Restoration

    NASA Technical Reports Server (NTRS)

    Hite, Gerald E.

    1991-01-01

    Deblurring capabilities would significantly improve the Flight Science Support Office's ability to monitor the effects of lift-off on the shuttle and landing on the orbiter. A deblurring program was written and implemented to extract information from blurred images containing a straight line or edge and to use that information to deblur the image. The program was successfully applied to an image blurred by improper focusing and to two images blurred by different amounts. In all cases, the reconstructed modulation transfer function not only had the same zero contours as the Fourier transform of the blurred image, but the associated point spread function also had structure not easily described by simple parameterizations. The difficulties posed by the presence of noise in the blurred image necessitated special consideration. An amplitude modification technique was developed for the zero contours of the modulation transfer function at low to moderate frequencies, and a smooth filter was used to suppress high-frequency noise.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagesh, S Setlur; Rana, R; Russ, M

    Purpose: CMOS-based aSe detectors compared to CsI-TFT-based flat panels have the advantages of higher spatial sampling due to smaller pixel size and decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone beam computed tomography, clinical fluoroscopy and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected by focal-spot blur, first a set of high-resolution images of a stent (FRED from Microvention, Inc.) were acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. Then the averaged image was blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based, inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selection of too wide a function would cause severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pin hole with a high resolution detector. This spread function can be used to deblur the input images that are acquired at corresponding magnifications to correct for the focal spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions then corrected for focal-spot blurring to improve resolution. 
Similarly, if object magnification can be determined, such correction may be applied in fluoroscopy and angiography.
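
    The threshold-based inverse-filtering step can be sketched in 1-D: divide out the known Gaussian OTF only where its magnitude is safely above a threshold, and zero the unstable bins instead of amplifying them. A 1-D stand-in for the 2-D image case, with illustrative names:

    ```python
    import numpy as np

    def gaussian_otf(n, sigma):
        """FFT of a centered, normalized Gaussian PSF on an n-sample grid."""
        x = np.arange(n) - n // 2
        psf = np.exp(-0.5 * (x / sigma) ** 2)
        psf /= psf.sum()
        return np.fft.fft(np.fft.ifftshift(psf))

    def inverse_filter(blurred, otf, thresh=1e-3):
        """Threshold-based inverse filtering: divide out the OTF only where its
        magnitude exceeds thresh; unstable bins are zeroed instead of amplified."""
        B = np.fft.fft(blurred)
        X = np.zeros_like(B)
        good = np.abs(otf) > thresh
        X[good] = B[good] / otf[good]
        return np.real(np.fft.ifft(X))

    rng = np.random.default_rng(3)
    signal = np.convolve(rng.standard_normal(256), np.ones(9) / 9, mode="same")
    otf = gaussian_otf(256, sigma=2.0)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * otf))
    restored = inverse_filter(blurred, otf)
    # Deblurring recovers the signal far better than leaving the blur in place.
    assert np.linalg.norm(restored - signal) < np.linalg.norm(blurred - signal)
    ```

    Choosing too wide a deconvolution Gaussian corresponds to dividing by an OTF that decays faster than the true one, which over-amplifies mid frequencies — the severe-artifact failure mode noted in the abstract.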

  10. SNR-weighted sinogram smoothing with improved noise-resolution properties for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong

    2004-05-01

    To treat the noise in low-dose x-ray CT projection data more accurately, analysis of the noise properties of the data and development of a corresponding efficient noise treatment method are two major problems to be addressed. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles with a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfies the Radon transform. From the analysis of these experimental data, a nonlinear relation between the mean and variance of each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for SNR (signal-to-noise ratio) adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and the noise-resolution tradeoff, in comparison with other noise smoothing methods such as the Hanning filter and Butterworth filter at different cutoff frequencies. Significant improvement in the noise-resolution tradeoff and noise uniformity was demonstrated.
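
    The SNR-adaptive idea can be sketched as penalized weighted least-squares smoothing of one sinogram row, with data weights taken from an assumed exponential mean-variance relation. The exact relation in the paper was fit to measurements; the functional form and constants below are illustrative only:

    ```python
    import numpy as np

    def pwls_smooth(y, gamma=10.0, beta=5.0):
        """PWLS smoothing of one sinogram row: data weights follow an assumed
        exponential mean-variance relation var_i ~ exp(y_i / gamma), and a
        quadratic first-difference penalty couples neighboring bins."""
        n = len(y)
        w = np.exp(-y / gamma)                  # inverse-variance (SNR) weights
        D = np.diff(np.eye(n), axis=0)          # first-difference operator
        H = np.diag(w) + beta * D.T @ D
        return np.linalg.solve(H, w * y)        # minimizer of the PWLS objective

    rng = np.random.default_rng(4)
    truth = 10.0 + 3.0 * np.sin(np.linspace(0, np.pi, 100))
    noisy = truth + 0.1 * np.exp(truth / 10.0) * rng.standard_normal(100)
    smoothed = pwls_smooth(noisy)
    assert np.linalg.norm(smoothed - truth) < np.linalg.norm(noisy - truth)
    ```

    Because the weights fall with increasing attenuation, the low-SNR (high-attenuation) bins are smoothed more aggressively than the well-measured ones, which is what distinguishes this from a fixed Hanning or Butterworth filter.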

  11. Restoration of motion blurred image with Lucy-Richardson algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jing; Liu, Zhao Hui; Zhou, Liang

    2015-10-01

    Images will be blurred by relative motion between the camera and the object of interest. In this paper, we analyzed the process of motion blurring and demonstrated a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by the Radon transform and the auto-correlation function, respectively, and then the point spread function (PSF) of the motion-blurred image can be obtained. With the help of the obtained PSF, the Lucy-Richardson restoration algorithm was applied to motion-blurred images with different blur extents, spatial resolutions and signal-to-noise ratios (SNRs), and its effectiveness was evaluated by structural similarity (SSIM). Further studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels; that is, the method compensates the low-frequency information of the image while attenuating the high-frequency information. Second, we found that the method is more effective when the product of the blur extent and spatial frequency is smaller than 3.75. Finally, by calculating the MTF of the restored image, the Lucy-Richardson algorithm was found to be insensitive to Gaussian noise with a variance no larger than 0.1.
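
    A minimal 1-D Richardson-Lucy (Lucy-Richardson) loop shows the multiplicative update at the core of the method. Here a symmetric box PSF stands in for an estimated motion PSF, on noiseless toy data:

    ```python
    import numpy as np

    def richardson_lucy(blurred, psf, n_iter=200):
        """1-D Richardson-Lucy deconvolution: multiplicative EM updates that
        preserve positivity. The PSF is assumed symmetric (mirror = itself)."""
        x = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            est = np.convolve(x, psf, mode="same")
            ratio = blurred / np.maximum(est, 1e-12)
            x = x * np.convolve(ratio, psf, mode="same")
        return x

    # Motion blur modeled as a 5-sample box PSF on a positive sparse signal.
    psf = np.ones(5) / 5.0
    truth = np.ones(64)            # positive background (RL assumes positivity)
    truth[[20, 40]] += 10.0
    blurred = np.convolve(truth, psf, mode="same")
    restored = richardson_lucy(blurred, psf)
    assert np.linalg.norm(restored - truth) < np.linalg.norm(blurred - truth)
    ```

    For a non-symmetric PSF the second convolution must use the mirrored PSF; with noisy data the iteration count itself acts as the regularizer, since RL eventually amplifies noise.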

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thiyagarajan, Rajesh; Karrthick, KP; Kataria, Tejinder

    Purpose: Performing DQA for bilateral (B-L) breast tomotherapy is a challenging task due to the size limitations of commercially available detector arrays and film. The aim of this study is to perform DQA for a B-L breast tomotherapy plan using the MLC fluence sinogram. Methods: A treatment plan was generated on the Tomotherapy system for a B-L breast tumour. The B-L breast targets were prescribed 50.4 Gy over 28 fractions. The plan was generated with a 6 MV photon beam and the pitch was set to 0.3. The width of the total target (left and right) is 39 cm and the length is 20 cm. The DQA plan was delivered without any phantom on the megavoltage computed tomography (MVCT) detector system. The pulses recorded by the MVCT system were exported to the delivery analysis software (Tomotherapy Inc.) for reconstruction. The detector signals were reconstructed to a sinogram and converted to an MLC fluence sinogram, which was compared with the planned fluence sinogram. A point dose was also measured with a cheese phantom and ionization chamber to verify the absolute dose component. Results: The planned fluence sinogram and the reconstructed MLC fluence sinogram were compared using the gamma metric, with MLC positional difference and beamlet intensity as the evaluation parameters. A 3 mm positional difference and 3% beamlet intensity difference were set for the gamma calculation. A total of 26784 non-zero beamlets were included in the analysis, of which 161 beamlets had a gamma greater than 1, giving a passing rate of 99.4%. Point dose measurements were within 1.3% of the calculated dose. Conclusion: MLC fluence sinogram based delivery quality assurance was performed for bilateral breast irradiation. This would be a suitable alternative for large-volume targets such as bilateral breast and total body irradiation. However, the conventional DQA method should be used to validate this method periodically.
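
    The 3%/3 mm gamma evaluation can be sketched in 1-D: each reference point passes if some measured point is simultaneously close in dose and position. A simplified illustration of the gamma metric, not the vendor analysis software:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
        """1-D gamma analysis: each reference point takes the minimum over all
        measured points of sqrt((dDose/dose_tol)^2 + (dPos/dist_tol)^2), with the
        dose tolerance relative to the reference maximum; gamma <= 1 passes."""
        dd = (ref[:, None] - meas[None, :]) / (dose_tol * ref.max())
        dx = (positions[:, None] - positions[None, :]) / dist_tol
        gamma = np.sqrt(dd ** 2 + dx ** 2).min(axis=1)
        return float((gamma <= 1.0).mean())

    pos = np.arange(0.0, 50.0, 1.0)                  # beamlet positions [mm]
    ref = np.exp(-0.5 * ((pos - 25.0) / 8.0) ** 2)   # reference fluence profile
    meas_ok = ref * 1.02                             # 2% scaling: within tolerance
    meas_bad = np.roll(ref, 5)                       # 5 mm shift: fails on gradients
    assert gamma_pass_rate(ref, meas_ok, pos) == 1.0
    assert gamma_pass_rate(ref, meas_bad, pos) < 1.0
    ```

    The combined dose/distance criterion is what lets high-gradient regions pass despite large pointwise dose differences, provided a matching dose exists within the distance tolerance.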

  13. Development of a high-performance noise-reduction filter for tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Kao, Chien-Min; Pan, Xiaochuan

    2001-07-01

    We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
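The closed-form frequency-space filter described above can be sketched in a few lines of numpy. This is only a minimal illustration, not the paper's derivation: the signal and noise power spectra and the regularization parameter `beta` below are assumed toy quantities.

```python
import numpy as np

def wiener_like_filter(sinogram, signal_psd, noise_psd, beta=1.0):
    """Apply a Wiener-like filter row-wise (per projection angle).

    signal_psd, noise_psd: assumed 1-D power spectra along the detector
    axis; beta is a single user-adjustable regularization parameter,
    loosely mirroring the closed-form frequency-space filter."""
    H = signal_psd / (signal_psd + beta * noise_psd)  # Wiener gain in [0, 1]
    F = np.fft.fft(sinogram, axis=1)                  # FFT along detector axis
    return np.real(np.fft.ifft(F * H, axis=1))

# Toy sinogram: a smooth signal per view plus white (Poisson-like) noise
rng = np.random.default_rng(0)
n = 64
clean = np.sin(2 * np.pi * np.arange(n) / n)[None, :] * np.ones((8, 1))
noisy = clean + 0.5 * rng.standard_normal((8, n))
freqs = np.fft.fftfreq(n)
signal_psd = 1.0 / (1.0 + (freqs / 0.05) ** 2)  # assumed low-pass signal spectrum
noise_psd = np.ones(n)                          # assumed white noise floor
filtered = wiener_like_filter(noisy, signal_psd, noise_psd, beta=0.5)
```

In the full method, FBP with a ramp filter would then be applied to `filtered`; the sketch stops at the sinogram-domain filtering step.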

  14. A metal artifact reduction algorithm in CT using multiple prior images by recursive active contour segmentation

    PubMed Central

    Nam, Haewon

    2017-01-01

    We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
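The MAR-with-linear-interpolation pre-correction that the proposed algorithm refines can be sketched as follows. The function name, the toy sinogram, and the trace mask are illustrative assumptions; the paper's contribution (prior-image sinograms and residual compensation) is not reproduced here.

```python
import numpy as np

def inpaint_metal_trace(sinogram, trace_mask):
    """Fill metal-trace bins by 1-D linear interpolation along each
    projection (detector row), the classic MAR-LI pre-correction step."""
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = trace_mask[i]
        if bad.any() and not bad.all():
            # Interpolate corrupted bins from the surrounding clean bins
            out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return out

# Toy sinogram: a smooth ramp per view, with a corrupted band in the middle
sino = np.tile(np.linspace(0.0, 1.0, 32), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 12:18] = True
corrupted = sino.copy()
corrupted[mask] = 10.0                       # metal trace: wildly wrong values
fixed = inpaint_metal_trace(corrupted, mask)
```

Because the toy sinogram is linear across the detector, interpolation recovers it exactly; on real data this step leaves residual errors, which is what motivates the prior-image refinement described in the abstract.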

  15. Blurred image restoration using knife-edge function and optimal window Wiener filtering.

    PubMed

    Wang, Min; Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects.
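The final Wiener filtering step can be sketched once a degradation function is in hand. The knife-edge PSF estimation itself is not reproduced here; the motion-blur kernel below is simply given, which is an illustrative simplification, and the 1-D circular-convolution setup is an assumption for brevity.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Classical frequency-domain Wiener filter: given the degradation
    function H (here from a known PSF, standing in for a knife-edge
    estimate), restore with F_hat = conj(H) / (|H|^2 + K) * G, where K
    is a noise-to-signal regularization constant."""
    n = blurred.shape[0]
    H = np.fft.fft(psf, n)                    # zero-padded transfer function
    G = np.fft.fft(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft(F_hat))

# A square wave (sharp edges) blurred by a length-5 uniform motion kernel
n = 128
signal = (np.arange(n) % 16 < 8).astype(float)
kernel = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n)))
restored = wiener_deconvolve(blurred, kernel, K=1e-3)
```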

  16. Blurred image restoration using knife-edge function and optimal window Wiener filtering

    PubMed Central

    Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects. PMID:29377950

  17. Restoration of non-uniform exposure motion blurred image

    NASA Astrophysics Data System (ADS)

    Luo, Yuanhong; Xu, Tingfa; Wang, Ningming; Liu, Feng

    2014-11-01

    Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared sensors mounted on motion platforms move quickly together with their high-speed platforms, and as a result the images become blurred. This image degradation causes great trouble for subsequent tasks such as object detection, target recognition and tracking, so motion-blurred images must be restored before motion targets can be detected in subsequent frames. Driven by the demands of real weapon tasks and the need to handle targets in complex backgrounds, this dissertation applies new theories from the fields of image processing and computer vision to the problems of motion deblurring and motion detection. The principal content is as follows: 1) When prior knowledge about the degradation function is unknown, uniformly motion-blurred images are restored. First, the blur parameters of the PSF (point spread function), namely the motion blur extent and direction, are estimated individually in the logarithmic frequency domain. The direction of the PSF is calculated by extracting the central light line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detecting function. Moreover, to remove striping in the deblurred image, a windowing technique is employed in the algorithm, which makes the deblurred image clear. 2) According to the principle of infrared image non-uniform exposure, a new restoration model for infrared blurred images is developed. The non-uniform exposure curve of the infrared image is fitted from experimental data, and the blurred images are restored using the fitted curve.

  18. Chromatic blur perception in the presence of luminance contrast.

    PubMed

    Jennings, Ben J; Kingdom, Frederick A A

    2017-06-01

    Hel-Or showed that blurring the chromatic but not the luminance layer of an image of a natural scene failed to elicit any impression of blur. Subsequent studies have suggested that this effect is due either to chromatic blur being masked by spatially contiguous luminance edges in the scene (Journal of Vision 13 (2013) 14), or to a relatively compressed transducer function for chromatic blur (Journal of Vision 15 (2015) 6). To test between the two explanations we conducted experiments using as stimuli both images of natural scenes as well as simple edges. First, we found that in color-and-luminance images of natural scenes more chromatic blur was needed to perceptually match a given level of blur in an isoluminant, i.e. colour-only scene. However, when the luminance layer in the scene was rotated relative to the chromatic layer, thus removing the colour-luminance edge correlations, the matched blur levels were near equal. Both results are consistent with Sharman et al.'s explanation. Second, when observers matched the blurs of luminance-only with isoluminant scenes, the matched blurs were equal, against Kingdom et al.'s prediction. Third, we measured the perceived blur in a square-wave as a function of (i) contrast (ii) number of luminance edges and (iii) the relative spatial phase between the colour and luminance edges. We found that the perceived chromatic blur was dependent on both relative phase and the number of luminance edges, or dependent on the luminance contrast if only a single edge is present. We conclude that this Hel-Or effect is largely due to masking of chromatic blur by spatially contiguous luminance edges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Radiation dose reduction using 100-kVp and a sinogram-affirmed iterative reconstruction algorithm in adolescent head CT: Impact on grey-white matter contrast and image noise.

    PubMed

    Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki

    2017-07-01

    To retrospectively evaluate the image quality and radiation dose of 100-kVp scans with sinogram-affirmed iterative reconstruction (IR) for unenhanced head CT in adolescents. Sixty-nine patients aged 12-17 years underwent head CT under 120-kVp (n = 34) or 100-kVp (n = 35) protocols. The 120-kVp images were reconstructed with filtered back-projection (FBP), and the 100-kVp images with FBP (100-kVp-F) and sinogram-affirmed IR (100-kVp-S). We compared the effective dose (ED), grey-white matter (GM-WM) contrast, image noise, and contrast-to-noise ratio (CNR) between protocols in the supratentorial region (ST) and posterior fossa (PS). We also assessed GM-WM contrast, image noise, sharpness, artifacts, and overall image quality on a four-point scale. ED was 46% lower with 100-kVp than with 120-kVp (p < 0.001). GM-WM contrast was higher, and image noise lower, on 100-kVp-S than on 120-kVp at ST (p < 0.001). The CNR of 100-kVp-S was higher than that of 120-kVp (p < 0.001). GM-WM contrast of 100-kVp-S was subjectively rated as better than that of 120-kVp (p < 0.001). There were no significant differences in the other criteria between 100-kVp-S and 120-kVp (p = 0.072-0.966). The 100-kVp protocol with sinogram-affirmed IR facilitated a dramatic radiation dose reduction and better GM-WM contrast without increasing image noise in adolescent head CT. • 100-kVp head CT provides a 46% radiation dose reduction compared with 120-kVp. • 100-kVp scanning improves subjective and objective GM-WM contrast. • Sinogram-affirmed IR decreases head CT image noise, especially in the supratentorial region. • The 100-kVp protocol with sinogram-affirmed IR is suited for adolescent head CT.

  20. Identification of Piecewise Linear Uniform Motion Blur

    NASA Astrophysics Data System (ADS)

    Patanukhom, Karn; Nishihara, Akinori

    A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models, which consist of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator and a motion combination selector. To identify the motion directions, the scheme relies on trial restorations using directional forward ramp motion blurs along different directions and on an analysis of directional information in the frequency domain using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed to estimate the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restored results. Experimental examples with simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.

  1. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    PubMed

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views per rotation around the body (sparse-view) has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed that takes the statistical properties of the sinogram data into consideration; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results demonstrate that the ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
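The TV-POCS stage can be sketched in miniature. The sketch below substitutes the identity operator for the CT projector and denoises a 1-D signal, so it only illustrates the alternation of a total-variation descent step with a projection onto a data-consistency set; the test signal, step size and tolerance `eps` are all assumed.

```python
import numpy as np

def tv_pocs_denoise(y, eps, iters=300, step=0.02):
    """Alternate a total-variation subgradient descent step with a
    projection onto the data-consistency set {x : ||x - y||_2 <= eps}.
    The identity stands in for the CT projection operator here."""
    x = y.copy()
    for _ in range(iters):
        g = np.sign(np.diff(x))          # subgradient of sum_i |x_{i+1} - x_i|
        grad = np.zeros_like(x)
        grad[:-1] -= g                   # d/dx_i contribution of term i
        grad[1:] += g                    # d/dx_{i+1} contribution of term i
        x = x - step * grad              # TV descent step
        r = x - y                        # POCS: project back onto the eps-ball
        nrm = np.linalg.norm(r)
        if nrm > eps:
            x = y + r * (eps / nrm)
    return x

rng = np.random.default_rng(2)
clean = np.repeat(np.array([0.0, 1.0, 0.0]), 30)       # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
eps = 0.9 * 0.2 * np.sqrt(clean.size)                  # assumed noise budget
denoised = tv_pocs_denoise(noisy, eps)
```

The result stays within the data-consistency ball by construction while its total variation drops well below that of the noisy input, which is the behavior the two-step strategy relies on.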

  2. There is more to accommodation of the eye than simply minimizing retinal blur

    PubMed Central

    Marín-Franch, I.; Del Águila-Carrasco, A. J.; Bernal-Molina, P.; Esteve-Taboada, J. J.; López-Gil, N.; Montés-Micó, R.; Kruger, P. B.

    2017-01-01

    Eyes of children and young adults change their optical power to focus nearby objects at the retina. But does accommodation function by trial and error to minimize blur and maximize contrast as is generally accepted? Three experiments in monocular and monochromatic vision were performed under two conditions while aberrations were being corrected. In the first condition, feedback was available to the eye from both optical vergence and optical blur. In the second, feedback was only available from target blur. Accommodation was less precise for the second condition, suggesting that it is more than a trial-and-error function. Optical vergence itself seems to be an important cue for accommodation. PMID:29082097

  3. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, the paper presents how the blurred star image can be corrected to reconstruct the clear scene with a thinning motion blur model that describes the camera's path. Building the blur kernel from a thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and thus the blur kernel of the motion-blurred star image. The paper then details how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, demonstrating its overall effectiveness. In addition, compared with conventional blur kernel estimation, experimental results show that using the thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of the motion-blurred star images.
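The Richardson-Lucy iteration itself is standard and can be sketched as follows. Here the blur kernel is given directly rather than extracted from a thinned star trajectory, and the problem is 1-D with circular convolution; both are illustrative simplifications.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """Standard Richardson-Lucy iteration (1-D, circular convolution):
    x_{k+1} = x_k * H^T( y / (H x_k) ), with H applied via FFT and H^T
    (correlation) via the conjugate transfer function."""
    n = blurred.shape[0]
    H = np.fft.fft(psf, n)

    def conv(v, Hf):
        return np.real(np.fft.ifft(np.fft.fft(v) * Hf))

    x = np.full(n, blurred.mean())               # flat positive initial guess
    for _ in range(iters):
        denom = np.maximum(conv(x, H), 1e-12)    # H x_k, guarded against 0
        x = x * conv(blurred / denom, np.conj(H))
    return x

# A single "star point" blurred by a short uniform motion kernel
n = 64
scene = np.zeros(n); scene[20] = 1.0
psf = np.ones(7) / 7.0
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf, n)))
restored = richardson_lucy(blurred, psf)
```

With a normalized kernel, each RL iteration conserves total flux, and the iterate re-concentrates the smeared star energy back toward the original point.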

  4. Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity

    PubMed Central

    McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.

    2011-01-01

    Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756

  5. Blur adaptation: contrast sensitivity changes and stimulus extent.

    PubMed

    Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda

    2015-05-01

    A prolonged exposure to foveal defocus is well known to affect visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in the contrast sensitivity function from baseline following blur adaptation to small as well as laterally extended stimuli in four subjects. The small-field stimulus (7.5° visual field) was a 30 min video of forest scenery projected on a screen, and the large-field stimulus consisted of 7 tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation is dependent on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

    The robust algorithm OPED for the reconstruction of images from Radon data has been developed recently. It reconstructs an image from parallel data within a special scanning geometry that requires no rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy of the resulting images was measured by the normalized mean square error (NMSE), the Hilbert angle, and the mean relative error. The spatial resolution was measured by the modulation transfer function (MTF). Cubic splines were confirmed to be the most recommendable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).

  7. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and on the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  8. Variance analysis of x-ray CT sinograms in the presence of electronic noise background.

    PubMed

    Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing

    2012-07-01

    Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of electronic noise background. The authors first studied the statistical moment properties of detected signals in the CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon an electronic noise background. Then the authors derived, via the Taylor expansion, a new formula for the mean-variance relationship of the detected signals in the CT sinogram domain, wherein the image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To gain insight into the newly derived formula through experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. The results demonstrated that the electronic noise background is significant when a low-mAs (or low-dose) scan is performed. The influence of the electronic noise background should be considered in low-dose CT imaging.
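The qualitative claim, that the electronic noise background inflates sinogram variance noticeably at low mAs but is negligible at high mAs, can be checked with a small Monte-Carlo sketch. The incident photon counts I0, the line integral, and sigma_e below are assumed illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinogram_variance(I0, mu_line=4.0, sigma_e=10.0, n=200_000):
    """Variance of p = -ln(N/I0) when N is Poisson counts on an
    additive Gaussian electronic noise background. Returns the measured
    variance and the Poisson-only (no electronic noise) prediction
    var(p) ~ 1 / Nbar."""
    counts = rng.poisson(I0 * np.exp(-mu_line), n).astype(float)
    counts += sigma_e * rng.standard_normal(n)       # electronic background
    counts = np.maximum(counts, 1.0)                 # guard against log(<=0)
    p = -np.log(counts / I0)
    return p.var(), 1.0 / (I0 * np.exp(-mu_line))

var_low, pred_low = sinogram_variance(I0=5_000)      # low-mAs scan
var_high, pred_high = sinogram_variance(I0=500_000)  # high-mAs scan
```

With these numbers, the low-count variance is roughly double the Poisson-only prediction, while the high-count variance essentially matches it, mirroring the abstract's conclusion.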

  9. Variance analysis of x-ray CT sinograms in the presence of electronic noise background

    PubMed Central

    Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing

    2012-01-01

    Purpose: Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of electronic noise background. Methods: The authors first studied the statistical moment properties of detected signals in the CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon an electronic noise background. Then the authors derived, via the Taylor expansion, a new formula for the mean–variance relationship of the detected signals in the CT sinogram domain, wherein the image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To gain insight into the newly derived formula through experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. Results: The results demonstrated that the electronic noise background is significant when a low-mAs (or low-dose) scan is performed. Conclusions: The influence of the electronic noise background should be considered in low-dose CT imaging. PMID:22830738

  10. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol

    PubMed Central

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views per rotation around the body (sparse-view) has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed that takes the statistical properties of the sinogram data into consideration; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed “ASR-TV-POCS.” To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results demonstrate that the ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation. PMID:24977611

  11. Blind restoration of retinal images degraded by space-variant blur with adaptive blur estimation

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Šroubek, Filip

    2013-11-01

    Retinal images are often degraded by a blur that varies across the field of view. Because traditional deblurring algorithms assume the blur to be space-invariant, they typically fail in the presence of space-variant blur. In this work we consider the blur to be both unknown and space-variant. To carry out the restoration, we assume that in small regions the space-variant blur can be approximated by a space-invariant point-spread function (PSF). However, instead of deblurring the image on a per-patch basis, we extend the individual PSFs by linear interpolation and perform a global restoration. Because the blind estimation of local PSFs may fail, we propose a strategy for identifying valid local PSFs and interpolate them to obtain the space-variant PSF. The method was tested on artificially degraded and real degraded retinal images. Results show significant improvement in the visibility of subtle details like small blood vessels.
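The idea of extending local PSFs by linear interpolation, rather than treating each patch independently, can be illustrated in 1-D with the forward (blurring) model. The two end-point kernels below are assumed to be already-estimated local PSFs; real use would interpolate between many patch-wise estimates in 2-D.

```python
import numpy as np

def space_variant_blur(image_row, psf_left, psf_right):
    """Blur a 1-D signal with a PSF that varies linearly across the
    field of view: at position i the kernel is the linear interpolation
    of the two (assumed, already-estimated) end-point kernels."""
    n = len(image_row)
    k = len(psf_left)
    half = k // 2
    padded = np.pad(image_row, half, mode='edge')
    out = np.empty(n)
    for i in range(n):
        w = i / (n - 1)                          # interpolation weight in [0,1]
        psf = (1 - w) * psf_left + w * psf_right
        psf = psf / psf.sum()                    # keep the kernel normalized
        out[i] = padded[i:i + k] @ psf[::-1]     # local convolution
    return out

sig = np.zeros(101); sig[25] = 1.0; sig[75] = 1.0       # two point sources
sharp = np.array([0.0, 0.0, 1.0, 0.0, 0.0])             # near-delta at left edge
wide = np.ones(5) / 5.0                                 # heavy blur at right edge
blurred = space_variant_blur(sig, sharp, wide)
```

The left source stays concentrated while the right one is spread out, with total intensity conserved: exactly the space-variant behavior the patch-interpolated PSF is meant to capture.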

  12. Respiratory-gated CT as a tool for the simulation of breathing artifacts in PET and PET/CT.

    PubMed

    Hamill, J J; Bosmans, G; Dekker, A

    2008-02-01

    Respiratory motion in PET and PET/CT blurs the images and can cause attenuation-related errors in quantitative parameters such as standard uptake values. In rare instances, this problem even causes localization errors and the disappearance of tumors that should be detectable. Attenuation errors are severe near the diaphragm and can be enhanced when the attenuation correction is based on a CT series acquired during a breath-hold. To quantify the errors and identify the parameters associated with them, the authors performed a simulated PET scan based on respiratory-gated CT studies of five lung cancer patients. Diaphragmatic motion ranged from 8 to 25 mm in the five patients. The CT series were converted to 511-keV attenuation maps which were forward-projected and exponentiated to form sinograms of PET attenuation factors at each phase of respiration. The CT images were also segmented to form a PET object, moving with the same motion as the CT series. In the moving PET object, spherical 20 mm mobile tumors were created in the vicinity of the dome of the liver and immobile 20 mm tumors in the midchest region. The moving PET objects were forward-projected and attenuated, then reconstructed in several ways: phase-matched PET and CT, gated PET with ungated CT, ungated PET with gated CT, and conventional PET. Spatial resolution and statistical noise were not modeled. In each case, tumor uptake recovery factor was defined by comparing the maximum reconstructed pixel value with the known correct value. Mobile 10 and 30 mm tumors were also simulated in the case of a patient with 11 mm of breathing motion. Phase-matched gated PET and CT gave essentially perfect PET reconstructions in the simulation. Gated PET with ungated CT gave tumors of the correct shape, but recovery was too large by an amount that depended on the extent of the motion, as much as 90% for mobile tumors and 60% for immobile tumors. 
Gated CT with ungated PET resulted in blurred tumors and caused recovery errors between -50% and +75%. Recovery in clinical scans would be 0%-20% lower than stated because spatial resolution was not included in the simulation. Mobile tumors near the dome of the liver were subject to the largest errors in either case. Conventional PET for 20 mm tumors was quantitative in cases of motion less than 15 mm because of canceling errors in blurring and attenuation, but the recovery factors were too low by as much as 30% in cases of motion greater than 15 mm. The 10 mm tumors were blurred by motion to a greater extent, causing a greater SUV underestimation than in the case of 20 mm tumors, and the 30 mm tumors were blurred less. Quantitative PET imaging near the diaphragm requires proper matching of attenuation information to the emission information. The problem of missed tumors near the diaphragm can be reduced by acquiring attenuation-correction information near end expiration. A simple PET/CT protocol requiring no gating equipment also addresses this problem.
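The attenuation-factor step described in the abstract (converting a CT-derived 511-keV attenuation map into per-ray PET attenuation factors by forward projection and exponentiation) can be sketched for horizontal parallel rays. The attenuation map, pixel size, and water coefficient (~0.0096/mm at 511 keV) are illustrative assumptions.

```python
import numpy as np

mu_water = 0.0096          # 1/mm, approximate water attenuation at 511 keV
pixel_mm = 4.0             # assumed pixel size
mu_map = np.zeros((64, 64))
mu_map[16:48, 16:48] = mu_water          # a 128 mm x 128 mm water block

# Forward projection along horizontal parallel rays: one line integral per row
line_integrals = mu_map.sum(axis=1) * pixel_mm
# Exponentiate to obtain per-ray PET attenuation factors
atten_factors = np.exp(-line_integrals)
```

Rays missing the object keep an attenuation factor of 1, while rays through the 128 mm water block are attenuated to exp(-0.0096 * 128), about 0.29; mismatched gating would evaluate these integrals on a mu-map displaced relative to the emission object.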

  13. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. The first utilizes an MRF (Markov random field) Gibbs functional to model spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. The second employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In all three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have a computational advantage for high-resolution dynamic low-dose CT imaging. PMID:17024831
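The first implementation (sinogram-space PWLS with a quadratic Gibbs penalty, minimized by Gauss-Seidel) can be sketched in 1-D. The weights, penalty strength, and test signal below are assumed; the real method operates on 2-D sinograms with neighborhoods over both detector bins and views.

```python
import numpy as np

def pwls_gauss_seidel(y, w, beta=20.0, iters=200):
    """Toy 1-D PWLS smoothing with a quadratic (Gibbs-type) penalty,
    minimized by Gauss-Seidel sweeps. Cost:
        sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_i - x_{i-1})^2,
    where the weights w_i play the role of inverse noise variances."""
    x = y.copy()
    n = len(y)
    for _ in range(iters):
        for i in range(n):
            nb = (x[i - 1] if i > 0 else 0.0) + (x[i + 1] if i < n - 1 else 0.0)
            k = 2.0 if 0 < i < n - 1 else 1.0     # number of neighbors
            # Exact coordinate-wise minimizer of the quadratic cost
            x[i] = (w[i] * y[i] + beta * nb) / (w[i] + k * beta)
    return x

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0.0, np.pi, 60))       # smooth "sinogram profile"
noise_sd = 0.3
y = clean + noise_sd * rng.standard_normal(60)
w = np.full(60, 1.0 / noise_sd ** 2)              # inverse-variance weights
x = pwls_gauss_seidel(y, w, beta=20.0)
```

Because each Gauss-Seidel update minimizes the strictly convex cost in one coordinate, the cost decreases monotonically from the noisy starting point.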

  14. Blurred image recognition by Legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
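
A minimal sketch of the discrete orthogonal Legendre moments on which such blur invariants are built. This assumes the standard mapping of the image onto [-1,1]² with the usual (2p+1)(2q+1)/4 normalization; the invariant combinations themselves are constructed in the paper and are not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(img, order):
    """Discrete approximation of the orthogonal Legendre moments L_pq:

        L_pq = (2p+1)(2q+1)/4 * integral of P_p(y) P_q(x) f(x, y)

    over the image mapped onto [-1, 1] x [-1, 1].
    """
    n, m = img.shape
    x = np.linspace(-1, 1, m)
    y = np.linspace(-1, 1, n)
    dx, dy = 2.0 / (m - 1), 2.0 / (n - 1)
    # P_k evaluated at the sample points (coefficient vector selects P_k)
    Px = np.array([legval(x, [0] * k + [1]) for k in range(order + 1)])
    Py = np.array([legval(y, [0] * k + [1]) for k in range(order + 1)])
    L = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            L[p, q] = norm * (Py[p][:, None] * Px[q][None, :] * img).sum() * dx * dy
    return L
```

For a constant image, L_00 is approximately 1 and the odd-order moments vanish by symmetry, which is a quick sanity check on the discretization.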

  15. Robust x-ray based material identification using multi-energy sinogram decomposition

    NASA Astrophysics Data System (ADS)

    Yuan, Yaoshen; Tracey, Brian; Miller, Eric

    2016-05-01

There is growing interest in developing X-ray computed tomography (CT) imaging systems with improved ability to discriminate material types, going beyond the attenuation imaging provided by most current systems. Dual-energy CT (DECT) systems can partially address this problem by estimating Compton and photoelectric (PE) coefficients of the materials being imaged, but DECT is greatly degraded by the presence of metal or other materials with high attenuation. Here we explore the advantages of multi-energy CT (MECT) systems based on photon-counting detectors. The utility of MECT has been demonstrated in medical applications, where photon-counting detectors allow for the resolution of absorption K-edges. Our primary concern is aviation security applications, where K-edges are rare. We simulate phantoms with differing amounts of metal (high, medium and low attenuation), both for switched-source DECT and for MECT systems, and include a realistic model of detector energy resolution. We extend the DECT sinogram decomposition method of Ying et al. to MECT, allowing estimation of separate Compton and photoelectric sinograms. We furthermore introduce a weighting based on a quadratic approximation to the Poisson likelihood function that deemphasizes energy bins with low signal. Simulation results show that the proposed approach succeeds in estimating material properties even in high-attenuation scenarios where the DECT method fails, improving the signal-to-noise ratio of reconstructions by over 20 dB for the high-attenuation phantom. Our work demonstrates the potential of using photon-counting detectors for stably recovering material properties even when high attenuation is present, thus enabling the development of improved scanning systems.
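
The per-ray decomposition step can be illustrated with the simplified sketch below. This is not the method of Ying et al. as published: it linearizes a single-ray multi-energy forward model and weights bins by their counts, which mimics the low-signal deemphasis described above. The energies and Klein-Nishina values in the test are placeholders, not calibrated detector data.

```python
import numpy as np

def decompose_ray(counts, I0, energies, klein_nishina):
    """Estimate Compton (a_c) and photoelectric (a_p) line integrals for
    one ray from multi-energy photon counts.

    Linearized model per energy bin b:
        -ln(counts_b / I0_b) = a_c * KN(E_b) + a_p * E_b**-3
    Rows are weighted by sqrt(counts), i.e. bins enter the normal
    equations with weight ~ counts, deemphasizing low-signal bins.
    """
    y = -np.log(np.maximum(counts, 1.0) / I0)
    A = np.column_stack([klein_nishina, energies ** -3.0])
    w = np.sqrt(counts)
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef  # (a_c, a_p)
```

Running this independently for every ray and energy bin set yields the separate Compton and photoelectric sinograms referred to in the abstract.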

  16. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

This paper addresses post-processing for resolution enhancement of sequences of short-exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur PSFs are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur PSF in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur PSF and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.

  17. Restoration of retinal images with space-variant blur.

    PubMed

    Marrugo, Andrés G; Millán, María S; Sorel, Michal; Sroubek, Filip

    2014-01-01

Retinal images are essential clinical resources for the diagnosis of retinopathy and many other ocular diseases. Because of improper acquisition conditions or inherent optical aberrations in the eye, the images are often degraded by blur. In many common cases, the blur varies across the field of view. Most image deblurring algorithms assume a space-invariant blur, which fails in the presence of space-variant (SV) blur. In this work, we propose an innovative strategy for the restoration of retinal images in which we consider the blur to be both unknown and SV. We model the blur by a linear operation interpreted as a convolution with a point-spread function (PSF) that changes with position in the image. To achieve an artifact-free restoration, we propose a framework for robust estimation of the SV PSF based on an eye-domain knowledge strategy. The restoration method was tested on artificially and naturally degraded retinal images. The results show a significant enhancement, sufficient to support the images' clinical use.

  18. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.

    PubMed

    Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D

    2002-08-07

Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization on our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors, and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.

  19. Single neural code for blur in subjects with different interocular optical blur orientation

    PubMed Central

    Radhakrishnan, Aiswaryah; Sawides, Lucie; Dorronsoro, Carlos; Peli, Eli; Marcos, Susana

    2015-01-01

    The ability of the visual system to compensate for differences in blur orientation between eyes is not well understood. We measured the orientation of the internal blur code in both eyes of the same subject monocularly by presenting pairs of images blurred with real ocular point spread functions (PSFs) of similar blur magnitude but varying in orientations. Subjects assigned a level of confidence to their selection of the best perceived image in each pair. Using a classification-images–inspired paradigm and applying a reverse correlation technique, a classification map was obtained from the weighted averages of the PSFs, representing the internal blur code. Positive and negative neural PSFs were obtained from the classification map, representing the neural blur for best and worse perceived blur, respectively. The neural PSF was found to be highly correlated in both eyes, even for eyes with different ocular PSF orientations (rPos = 0.95; rNeg = 0.99; p < 0.001). We found that in subjects with similar and with different ocular PSF orientations between eyes, the orientation of the positive neural PSF was closer to the orientation of the ocular PSF of the eye with the better optical quality (average difference was ∼10°), while the orientation of the positive and negative neural PSFs tended to be orthogonal. These results suggest a single internal code for blur with orientation driven by the orientation of the optical blur of the eye with better optical quality. PMID:26114678

  20. Diagnostic features of Alzheimer's disease extracted from PET sinograms

    NASA Astrophysics Data System (ADS)

    Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.

    2002-01-01

Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task due to the poor signal-to-noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD patients and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with a sensitivity of 94% and a specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992) using regional metabolic activity.

  1. Intelligent estimation of noise and blur variances using ANN for the restoration of ultrasound images.

    PubMed

    Uddin, Muhammad Shahin; Halder, Kalyan Kumar; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-11-01

Ultrasound (US) imaging is a widely used clinical diagnostic tool among medical imaging techniques. It is a comparatively safe, economical, painless, portable, and noninvasive real-time tool compared with other imaging modalities. However, the image quality of US imaging is severely affected by the presence of speckle noise and blur introduced during the acquisition process. In order to ensure a high-quality clinical diagnosis, US images must be restored by reducing their speckle noise and blur. In general, speckle noise is modeled as a multiplicative noise following a Rayleigh distribution, and blur as a Gaussian function. To this end, we propose an intelligent estimator based on artificial neural networks (ANNs) to estimate the variances of noise and blur, which, in turn, are used to obtain an image without discernible distortions. A set of statistical features computed from the image and its complex wavelet sub-bands is used as input to the ANN. In the proposed method, we solve the inverse Rayleigh function numerically for speckle reduction and use the Richardson-Lucy algorithm for de-blurring. The performance of this method is compared with that of traditional methods by applying them to synthetic, physical-phantom, and clinical data, which confirms the better restoration results of the proposed method.
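
The Richardson-Lucy de-blurring step mentioned above can be sketched in one dimension. This is the generic textbook form of the algorithm (assuming a known, odd-length symmetric PSF); in the paper, the ANN-estimated blur variance would supply the PSF, which is not modeled here.

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, n_iter=50):
    """Richardson-Lucy deconvolution in 1D.

    Multiplicative update  x <- x * H^T( y / (H x) ), where H is
    convolution by the PSF; the iterates stay non-negative.
    """
    x = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]  # adjoint of convolution = correlation
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode='same')
        ratio = blurred / np.maximum(est, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode='same')
    return x
```

On a blurred spike, the iterations progressively re-concentrate the energy at the spike location, which is the behavior exploited for de-blurring US images.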

  2. Optical security verification for blurred fingerprints

    NASA Astrophysics Data System (ADS)

    Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.

    1998-12-01

Optical fingerprint security verification is gaining popularity, as it has the potential to perform correlation at the speed of light. With advancements in optical security verification techniques, the authentication process can be made almost foolproof and reliable for financial transactions, banking, etc. In law enforcement, when a fingerprint is obtained from a crime scene, it may be blurred and can be an unhealthy candidate for correlation purposes. Therefore, the blurred fingerprint needs to be clarified before it is used for the correlation process. There are several different types of blur, such as linear motion blur and defocus blur, induced by aberrations of the imaging system. In addition, we may or may not know the blur function. In this paper, we propose non-singularity inverse filtering in the frequency/power domain for deblurring known motion-induced blur in fingerprints. This filtering process is incorporated with the power spectrum subtraction technique, a uniqueness comparison scheme, and the separated target and reference planes method in the joint transform correlator. The proposed hardware implementation is a hybrid electronic-optical correlator system. The performance of the proposed system is verified with computer simulation for both cases: with and without additive random noise corruption.
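
The idea of non-singularity inverse filtering of a known blur can be illustrated digitally as follows. This is a generic frequency-domain sketch that clamps near-zero values of the blur transfer function, not the authors' joint-transform-correlator processing chain.

```python
import numpy as np

def nonsingular_inverse_filter(blurred, psf, eps=1e-2):
    """Frequency-domain inverse filtering of a known blur.

    Divides by the blur transfer function H only where |H| exceeds eps;
    near-zero values are clamped so the inverse filter stays non-singular.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    H_safe = np.where(np.abs(H) > eps, H, eps)  # clamp near-singular bins
    return np.real(np.fft.ifft2(G / H_safe))
```

When the blur is circular (periodic) and its transfer function has no zeros, this recovers the original image exactly; for a real motion blur with spectral nulls, the clamp limits the noise amplification at those frequencies.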

  3. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), of the original latent channel, or of the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme: for each degraded channel, the scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur PSF effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work: a new selection of salient edges through adequate thresholding of the cumulative distribution of their gradient magnitudes, along with quasi-automatic and spatially adaptive tuning of the involved regularization parameters. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation error. The tests are performed on a synthetic hyperspectral image, built from samples of classified areas of a real-life hyperspectral image in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each with a different support size. Conclusions, practical recommendations and perspectives are drawn from the results obtained.

  4. Evaluation of the spline reconstruction technique for PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra; Gaitanis, Anastasios

    2014-04-15

Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated.
Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of increased COV in the reconstructed images. Finally, in the real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT-reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, at the cost of slightly increased noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.

  5. Photographic image enhancement

    NASA Technical Reports Server (NTRS)

    Hite, Gerald E.

    1990-01-01

Deblurring capabilities would significantly improve the scientific return from Space Shuttle crew-acquired images of the Earth and the safety of Space Shuttle missions. Deblurring techniques were developed and demonstrated on two digitized images that were blurred in different ways. The first was blurred by a Gaussian blurring function analogous to that caused by atmospheric turbulence, while the second was blurred by improper focussing. It was demonstrated, in both cases, that the nature of the blurring (Gaussian and Airy) and the appropriate parameters could be obtained from the Fourier transforms of the images. The difficulties posed by the presence of noise necessitated special consideration. It was demonstrated that a modified Wiener frequency filter, judiciously constructed to avoid overemphasis of frequency regions dominated by noise, resulted in substantially improved images. Several important areas of future research were identified. Two areas of particular promise are the extraction of blurring information directly from the spatial images, and improved noise abatement from investigations of select spatial regions and the elimination of spike noise.

  6. Quantifying how the combination of blur and disparity affects the perceived depth

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick

    2011-03-01

The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue. But it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep an unaltered apparent depth. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
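
Recovering a point of subjective equality (PSE) from two-alternative forced-choice proportions, as described above, might look like the following sketch. It uses a simple logit-transform least-squares fit rather than the authors' exact fitting procedure, and the clipping constant is an illustrative choice.

```python
import numpy as np

def point_of_subjective_equality(disparity, prop_chosen):
    """Estimate the PSE by fitting a logistic psychometric function.

    Fits  logit(p) = a + b * disparity  by linear least squares (a simple
    stand-in for a maximum-likelihood psychometric fit); the PSE is the
    disparity at which p = 0.5, i.e. -a / b.
    """
    p = np.clip(prop_chosen, 0.01, 0.99)  # keep the logit finite
    z = np.log(p / (1.0 - p))
    b, a = np.polyfit(disparity, z, 1)
    return -a / b
```

With simulated choice proportions generated from a logistic curve, the fit returns the disparity of the 50% point, which is how the PSE values in the abstract would be read off.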

  7. An iterative algorithm for L1-TV constrained regularization in image restoration

    NASA Astrophysics Data System (ADS)

    Chen, K.; Loli Piccolomini, E.; Zama, F.

    2015-11-01

    We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
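
A channel-coupled extension of Total Variation of the kind referred to above can be sketched as follows. This uses forward differences and a channels-last layout, and couples the channels under a single per-pixel gradient magnitude; the exact vectorial TV chosen by the authors may differ.

```python
import numpy as np

def vector_total_variation(img):
    """Total Variation of a multi-channel image (shape: rows x cols x channels).

    Forward differences in x and y are combined across all channels into a
    single per-pixel gradient magnitude before summing, so edges shared by
    the channels are penalized once rather than per channel.
    """
    dx = np.diff(img, axis=1, append=img[:, -1:, :])  # replicate last column
    dy = np.diff(img, axis=0, append=img[-1:, :, :])  # replicate last row
    return np.sqrt((dx ** 2 + dy ** 2).sum(axis=2)).sum()
```

A constant image has zero TV, and a single step edge contributes its height once per edge pixel, which is the behavior a TV constraint exploits when restoring blurred, impulsive-noise-corrupted images.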

  8. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, each involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  9. Preprocessing of SAR interferometric data using anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Sartor, Kenneth; Allen, Josef De Vaughn; Ganthier, Emile; Tenali, Gnana Bhaskar

    2007-04-01

The most commonly used smoothing algorithms for complex data processing are blurring functions (e.g., Hanning, Taylor weighting, Gaussian, etc.). Unfortunately, the filters so designed blur the edges in a Synthetic Aperture Radar (SAR) scene, reduce the accuracy of features, and blur the fringe lines in an interferogram. For Digital Surface Map (DSM) extraction, the blurring of these fringe lines causes inaccuracies in the height of the unwrapped terrain surface. Our goal here is to perform spatially non-uniform smoothing to overcome the above-mentioned disadvantages. This is achieved by using a Complex Anisotropic Non-Linear Diffuser (CANDI) filter that is spatially varying. In particular, an appropriate choice of the convection function in the CANDI filter is able to accomplish the non-uniform smoothing. This boundary-sharpening, intra-region smoothing filter acts on interferometric SAR (IFSAR) data with noise to produce an interferogram with significantly reduced noise content and desirable local smoothing. Results of CANDI filtering are discussed and compared with those obtained by using standard filters on simulated data.
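
A scalar Perona-Malik-style anisotropic diffusion step illustrates the principle behind such intra-region smoothing. This sketch is real-valued and generic; the authors' CANDI filter operates on complex IFSAR data with a tailored convection term, which is not modeled here.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving (Perona-Malik style) diffusion.

    The conduction coefficient g falls off with the local gradient
    magnitude, so smoothing is strong inside homogeneous regions and
    suppressed across edges (or fringe lines).
    """
    def g(d):
        return np.exp(-(d / kappa) ** 2)

    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours; zero flux at the borders
        dn = np.roll(u, 1, 0) - u;  dn[0] = 0.0
        ds = np.roll(u, -1, 0) - u; ds[-1] = 0.0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0.0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image, the noise within each flat region is smoothed away while the step itself (a stand-in for a fringe line) keeps its contrast, which is exactly the non-uniform behavior uniform blurring windows cannot provide.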

  10. Visual uncertainty influences the extent of an especial skill.

    PubMed

    Czyż, S H; Kwon, O-S; Marzec, J; Styrkowiec, P; Breslin, G

    2015-12-01

    An especial skill in basketball emerges through highly repetitive practice at the 15 ft free throw line. The extent of the role vision plays in the emergence of an especial skill is unknown. We examined the especial skills of ten skilled basketball players in normal and blurred vision conditions where participants wore corrective lenses. As such, we selectively manipulated visual information without affecting the participants' explicit knowledge that they were shooting free throws. We found that shot efficiency was significantly lower in blurred vision conditions as expected, and that the concave shape of shot proficiency function in normal vision conditions became approximately linear in blurred vision conditions. By applying a recently proposed generalization model of especial skills, we suggest that the linearity of shot proficiency function reflects the participants' lesser dependence on especial skill in blurred vision conditions. The findings further characterize the role of visual context in the emergence of an especial skill. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Eye growth and myopia development: Unifying theory and Matlab model.

    PubMed

    Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal

    2016-03-01

    The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. Cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in 5 simulations: (1 - unregulated eye growth): blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2 - regulated eye growth): blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3 - repeated near-far viewing): simulation of large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4 - neurochemical bulk flow and diffusion): release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of neurochemical cascade show that a decrease in dopamine results in a reduction of proteoglycan synthesis rate, which leads to myopia; (5 - Simulink model): model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. 
These model simulation programs (available upon request) can provide a useful tutorial for the general scientist and serve as a quantitative tool for researchers in eye growth and myopia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Influence of Sinogram-Affirmed Iterative Reconstruction on Computed Tomography-Based Lung Volumetry and Quantification of Pulmonary Emphysema.

    PubMed

    Baumueller, Stephan; Hilty, Regina; Nguyen, Thi Dan Linh; Weder, Walter; Alkadhi, Hatem; Frauenfelder, Thomas

    2016-01-01

    The purpose of this study was to evaluate the influence of sinogram-affirmed iterative reconstruction (SAFIRE) on quantification of lung volume and pulmonary emphysema in low-dose chest computed tomography compared with filtered back projection (FBP). Enhanced or nonenhanced low-dose chest computed tomography was performed in 20 patients with chronic obstructive pulmonary disease (group A) and in 20 patients without lung disease (group B). Data sets were reconstructed with FBP and SAFIRE strength levels 3 to 5. Two readers semiautomatically evaluated lung volumes and automatically quantified pulmonary emphysema, and another assessed image quality. Radiation dose parameters were recorded. Lung volume between FBP and SAFIRE 3 to 5 was not significantly different among both groups (all P > 0.05). When compared with those of FBP, total emphysema volume was significantly lower among reconstructions with SAFIRE 4 and 5 (mean difference, 0.56 and 0.79 L; all P < 0.001). There was no nondiagnostic image quality. Sinogram-affirmed iterative reconstruction does not alter lung volume measurements, although quantification of lung emphysema is affected at higher strength levels.

  13. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  14. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, using the properties of color images, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.

  15. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.

    PubMed

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-07-24

Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.
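As a point of reference for the plane-dependent extension described above, the conventional tail-restricted fit can be written as a per-plane least-squares scale. The sketch below is an illustration of that baseline under assumed array shapes (function names and the plane-wise loop are mine, not the authors' code):

```python
import numpy as np

def tail_fit_scale(measured, simulated, body_mask):
    # Least-squares scale fitted only on 'scatter-tail' bins, i.e. LORs
    # that do not intersect the body: argmin_s sum_tail (m - s * sim)^2.
    tail = ~body_mask
    return np.sum(measured[tail] * simulated[tail]) / np.sum(simulated[tail] ** 2)

def plane_dependent_scales(measured, simulated, body_mask):
    # One scale per sinogram plane (axis 0). The paper instead estimates
    # these scales as extra unknowns inside the ML reconstruction itself.
    return np.array([tail_fit_scale(m, s, b)
                     for m, s, b in zip(measured, simulated, body_mask)])
```

With noise-free data the fit recovers the true per-plane scale exactly; the paper's point is that real tails contain few counts, which motivates using the whole sinogram within the ML framework.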

  16. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach that estimates the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared with the Gauss method and the Lucy-Richardson method, the proposed approach gives the best image restoration results. The method has been evaluated using a Hopkinson bar loading system: in comparison to the blurry input, it successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.
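The Lucy-Richardson baseline the authors compare against has a compact multiplicative form. A generic 1D sketch with circular boundaries (not the paper's GPU implementation, and the iteration count is an assumption):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=300):
    # Richardson-Lucy deconvolution (1D, circular boundary for brevity).
    # Multiplicative updates keep the estimate non-negative.
    psf = psf / psf.sum()
    H = np.fft.fft(psf)
    est = np.full_like(blurred, float(blurred.mean()))
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * H))   # forward blur
        ratio = blurred / np.maximum(conv, 1e-12)
        # correlate the ratio with the PSF (adjoint of the blur)
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))
    return est
```

The same update extends to 2D by replacing fft with fft2; the per-iteration cost is a handful of FFTs, which is what makes a GPU implementation attractive.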

  17. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760

  18. Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.

    PubMed

    Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying

    2016-03-21

    Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

  19. A noncoherent optical analog image processor.

    PubMed

    Swindell, W

    1970-11-01

    The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.

  20. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Influence of Blurred Ways on Pattern Recognition of a Scale-Free Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Li

    2010-01-01

We investigate the influence of blurred ways on pattern recognition in a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small number of errors. Pattern recognition is an important function of information processing in the brain. Due to the heterogeneous degree distribution of a scale-free network, different blurred ways have different influences on pattern recognition for the same number of errors. Simulations show that, for partial recognition, the larger the loading ratio (the number of patterns to the average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large) way is largest and that of the directed (small) way is smallest, while the random way lies between them. When the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random, and directed (large) blurred ways of patterns, and these curves are not associated with the size of the network or the number of patterns. This phenomenon only occurs in the SFHN. These conclusions are beneficial for understanding the relation between neural network structure and brain function.
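The recall dynamics underlying this study can be illustrated with a classic fully connected Hopfield network (the paper's scale-free topology is not reproduced here); a "blurred" probe is a stored pattern with a fraction of units flipped:

```python
import numpy as np

def hopfield_weights(patterns):
    # Hebbian learning over +/-1 patterns; zero self-coupling.
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_iter=10):
    # Synchronous sign updates for at most n_iter sweeps.
    s = np.asarray(state, dtype=float).copy()
    for _ in range(n_iter):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

def overlap(a, b):
    # Normalized overlap in [-1, 1]; 1 means perfect recall.
    return float(np.mean(a * b))
```

At low loading ratio the network converges back to the stored pattern even when 10% of the units are corrupted, which is the overlap quantity the abstract reports.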

  1. Visible Motion Blur

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)

    2014-01-01

A method of measuring motion blur is disclosed comprising obtaining a moving edge temporal profile r(sub 1)(k) of an image of a high-contrast moving edge, calculating the masked local contrast m(sub 1)(k) for r(sub 1)(k) and the masked local contrast m(sub 2)(k) for an ideal step edge waveform r(sub 2)(k) with the same amplitude as r(sub 1)(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.

  2. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

This paper undertakes the study of restoring Gaussian-blurred images using four deblurring techniques, viz., the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given information about the Point Spread Function (PSF) of the corrupted blurred image. These are applied to a scanned image of a seven-month-old baby in the womb and compared with one another, so as to choose the best technique for restoring or deblurring the image. The paper also studies restoration of the blurred image using a Regular Filter (RF) with no information about the PSF, applying the same four techniques after estimating a guess of the PSF. The number of iterations and the weight threshold used to choose the best guesses for the restored or deblurred image are determined for these techniques.
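Of the four compared techniques, the Wiener filter has a closed form in the frequency domain. A minimal sketch, assuming a known PSF on the full image grid and a scalar noise-to-signal ratio (the parameter value is illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    # Frequency-domain Wiener filter with a scalar noise-to-signal
    # ratio; the PSF is sampled on the image grid (circular blur model).
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

Larger nsr values suppress noise amplification at frequencies where |H| is small, at the cost of residual blur; that trade-off is what the paper's comparison across methods explores.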

  3. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for numerical implementation of the proposed approach. The algorithm is tested using a dataset containing 9 kernels extracted from real photographs by Adobe, where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
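The ART starting point mentioned above is the Kaczmarz row-action scheme: cycle through the rows of the system matrix and project the current estimate onto each row's hyperplane. A minimal unregularized sketch (the paper adds regularization and a conjugate-gradient implementation on top of this):

```python
import numpy as np

def art_solve(A, b, n_sweeps=500, relax=1.0):
    # Algebraic Reconstruction Technique (Kaczmarz iterations):
    # for each row a_i, move the estimate onto the hyperplane a_i.x = b_i.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            a = A[i]
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x
```

For a consistent system the iterates converge to a solution; the relaxation parameter trades convergence speed against noise sensitivity, which is where regularization becomes important for PSF recovery.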

  4. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2017-01-01

Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time. PMID:29270539

  5. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time.

  6. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-08-01

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.

  7. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

We aimed to develop a gap-filling algorithm, and in particular the algorithm's filter mask design method, which optimizes the filter for the imaging object through an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanned objects and shows results comparable to those of the manually optimized DCT2 algorithm without perfect or full information about the imaged object.
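The iterate/filter/re-impose loop at the heart of such gap-filling schemes can be sketched in 1D. As an illustrative stand-in, the sketch below uses a fixed FFT low-pass mask where the paper uses an object-adapted DCT-domain mask, so it shows the iteration structure rather than the paper's mask design:

```python
import numpy as np

def gap_fill(signal, known_mask, cutoff=0.1, n_iter=1000):
    # Iterative gap filling (Papoulis-Gerchberg style): keep only low
    # frequencies, transform back, then re-impose the measured samples.
    # The unknown (gap) samples converge toward band-consistent values.
    lowpass = np.abs(np.fft.fftfreq(len(signal))) <= cutoff
    est = np.where(known_mask, signal, 0.0)
    for _ in range(n_iter):
        smooth = np.real(np.fft.ifft(np.fft.fft(est) * lowpass))
        est = np.where(known_mask, signal, smooth)
    return est
```

Replacing the fixed low-pass mask with a mask re-estimated from the current iterate is the object-dedicated, iterative aspect the abstract describes.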

  8. Capturing the plenoptic function in a swipe

    NASA Astrophysics Data System (ADS)

    Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi

    2016-09-01

    Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.

  9. Effects of blur and repeated testing on sensitivity estimates with frequency doubling perimetry.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; McCormick, Terry A; LeBlanc, Raymond P; Chauhan, Balwantray C

    2003-02-01

    To investigate the effect of blur and repeated testing on sensitivity with frequency doubling technology (FDT) perimetry. One eye of 12 patients with glaucoma (mean deviation [MD] mean, -2.5 dB, range +0.5 to -4.3 dB) and 11 normal control subjects underwent six consecutive tests with the FDT N30 threshold program in each of two sessions. In session 1, blur was induced by trial lenses (-6.00, -3.00, 0.00, +3.00, and +6.00 D, in random order). In session 2, only the effects of repeated testing were evaluated. The MD and pattern standard deviation (PSD) indices were evaluated as functions of blur and of test order. By correcting the data of session 1 for the reduction of sensitivity with repeated testing (session 2), the effect of blur on FDT sensitivities was established, and its clinical consequences evaluated on total- and pattern-deviation probability maps. FDT sensitivities decreased with blur (by <0.5 dB/D) and with repeated testing (by approximately 2 dB between the first and sixth tests). Blur and repeated testing independently led to larger numbers of locations with significant total and pattern deviation. Sensitivity reductions were similar in normal control subjects and patients with glaucoma, at central and peripheral test locations and at locations with high and low sensitivities. However, patients with glaucoma showed larger deterioration in the total-deviation-probability maps. To optimize the performance of the device, refractive errors should be corrected and immediate retesting avoided. Further research is needed to establish the cause of sensitivity loss with repeated FDT testing.

  10. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration. Identifying the motion blur direction and length accurately is crucial for obtaining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the severe noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the results relatively inaccurate. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
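The spectral stripes exploited above come from the sinc-shaped transfer function of a box motion blur: its nulls are spaced by n/L, where L is the blur length. A simplified 1D illustration of length estimation from the first null (the paper works on 2D spectra with the Radon transform; the null-detection threshold here is an assumption, and the test assumes L divides the signal length):

```python
import numpy as np

def blur_length_from_nulls(blurred_row, null_frac=0.05):
    # A motion blur of length L (box PSF) multiplies the spectrum by
    # |sin(pi k L / n)| / (L |sin(pi k / n)|), with nulls at k = n/L,
    # 2n/L, ...  Estimate L from the first spectral null.
    n = len(blurred_row)
    mag = np.abs(np.fft.rfft(blurred_row - blurred_row.mean()))
    nulls = np.where(mag[1:] < null_frac * mag.max())[0] + 1
    return n / nulls[0] if nulls.size else None
```

In noisy images the nulls are partially filled in, which is exactly why the paper resorts to segmentation of the spectrum and whole-column statistics instead of a simple threshold.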

  11. Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods

    DTIC Science & Technology

    2008-04-01

physical measurements of impulse response analysis, modulation transfer function (MTF) and noise power spectrum (NPS). (Months 5-12). This task has...and 2 impulse-added: projection images with simulated impulse and the 1/r2 shading difference. Other system blur and noise issues are not...blur, and suppressed high frequency noise. Point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring

  12. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    PubMed

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  13. Image Restoration by Spline Functions

    DTIC Science & Technology

    1976-08-31

motion degradation, over-determined model. Figure 4-7. Singular values for motion blur. Figure 5-1. Models for film-grain noise and filtering...Figure 5-2. Filtering of signal dependent noisy images. Figure 5-3. Filtering of image lines degraded by film-grain noise. Figure 5-4...phenomena. These phenomena include such imperfect imaging circumstances as defocus, motion blur, optical aberrations, and noise. The pioneers

  14. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  15. Combined invariants to similarity transformation and to blur using orthogonal Zernike moments

    PubMed Central

    Beijing, Chen; Shu, Huazhong; Zhang, Hui; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2011-01-01

The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with the existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall. PMID:20679028
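The Zernike moments underlying these invariants are projections of the image onto an orthogonal basis on the unit disk. A discretized sketch (the sampling grid and normalization are standard choices, not taken from the paper; the invariant construction itself is not reproduced):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    # Radial polynomial R_n^|m|(rho) of the Zernike basis.
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    # A_nm = (n+1)/pi * integral over the unit disk of f * conj(V_nm),
    # approximated on a pixel grid mapped to [-1, 1]^2.
    N = img.shape[0]
    c = (np.arange(N) + 0.5) * 2.0 / N - 1.0   # pixel centers
    x, y = np.meshgrid(c, c)
    rho = np.hypot(x, y); theta = np.arctan2(y, x)
    inside = rho <= 1.0
    V = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[inside] * V[inside]) * (2.0 / N) ** 2
```

For a uniform disk, orthogonality gives A_00 = 1 and A_nm = 0 for higher orders, which provides a quick sanity check of the discretization.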

  16. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images

    PubMed Central

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-01-01

Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, as blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named the small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods in the presence of blurred images while adding little computational cost to the original VO algorithms. PMID:27399704
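The abstract does not give the exact SIGD formula, but the underlying idea, that blur concentrates the image gradient distribution near zero, can be illustrated with a simplified stand-in measure (threshold value and blur kernel are assumptions, not the paper's definition):

```python
import numpy as np

def small_gradient_ratio(img, thresh=0.05):
    # Fraction of pixels whose gradient magnitude is below `thresh`:
    # blurred images have more small gradients, so a larger ratio
    # indicates a blurrier image.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy) < thresh))

def box_blur(img, k=5):
    # Simple separable moving-average blur for the demonstration.
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, out)
```

Thresholding such a score is the essence of the blurred-image classification step; the key-frame selection then skips frames whose score exceeds the threshold.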

  17. A simulation of orientation dependent, global changes in camera sensitivity in ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieszk, J.A.; Hawman, E.G.; Malmin, R.E.

    1984-01-01

ECT promises the ability to: 1) observe radioisotope distributions in a patient without the contrast-reducing summation of overlying activity, and 2) measure these distributions quantitatively to assess organ function further and more accurately. Ideally, camera-based ECT systems should have a performance that is independent of camera orientation or gantry angle. This study is concerned with ECT quantitation errors that can arise from angle-dependent variations of camera sensitivity. Using simulated phantoms representative of heart and liver sections, the effects of sensitivity changes on reconstructed images were assessed both visually and quantitatively based on ROI sums. The sinogram for each test image was simulated with 128 linear samples and 180 angular views. The global orientation-dependent sensitivity was modelled by applying an angular sensitivity dependence to the sinograms of the test images. Four sensitivity variations were studied: amplitudes of 0% (as a reference), 5%, 10%, and 25% with a cos θ dependence, as well as a cos 2θ dependence with a 5% amplitude. Simulations were done with and without Poisson noise to: 1) determine trends in the quantitative effects as a function of the magnitude of the variation, and 2) see how these effects are manifested in studies having statistics comparable to clinical cases. For the most realistic sensitivity variation (cos θ, 5% amplitude), the ROIs chosen in the present work indicated changes of <0.5% in the noiseless case and <5% for the case with Poisson noise. The effects of statistics appear to dominate any effects due to global, sinusoidal, orientation-dependent sensitivity changes in the cases studied.
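The sensitivity model described above amounts to multiplying each angular view of the sinogram by a global gain. A minimal sketch, assuming views evenly spaced over 360 degrees (the angular coverage is an assumption):

```python
import numpy as np

def apply_angular_sensitivity(sino, amplitude=0.05, harmonic=1):
    # Multiply each angular view (axis 0) by 1 + A * cos(h * theta),
    # modelling a global orientation-dependent camera sensitivity.
    n_views = sino.shape[0]
    theta = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    gain = 1.0 + amplitude * np.cos(harmonic * theta)
    return sino * gain[:, None]
```

Because the cosine gain averages to one over a full rotation, the total counts are nearly preserved, which is consistent with the study's finding that ROI changes stay small in the noiseless case.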

  18. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravelli displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  19. Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring

    PubMed Central

    Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao

    2015-01-01

On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate the particle overlaps commonly found in static images, for instance those acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in the lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. First, blurred particles were separated from the static background using a background subtraction method. Second, the point spread function was estimated using the power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm was adopted to perform image restoration to improve the image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method, and the performance of the approach was also evaluated. This study provides a new practical approach to acquire clear images for on-line wear monitoring. PMID:25856328
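
The restoration step above is a Wiener filter applied with the cepstrum-estimated motion PSF; a 1-D frequency-domain sketch (the paper works in 2-D, and the function names and the noise-to-signal constant are mine):

```python
import numpy as np

def motion_psf(length, n):
    """Linear-motion blur kernel of `length` samples, zero-padded to length n."""
    h = np.zeros(n)
    h[:length] = 1.0 / length
    return h

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: F = G * conj(H) / (|H|^2 + nsr),
    where nsr approximates the noise-to-signal power ratio."""
    H = np.fft.fft(psf)
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(F))
```

In practice the nsr constant trades ringing against residual blur; the paper estimates the blur direction and length from the power cepstrum before building the PSF.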

  20. Study of blur discrimination for 3D stereo viewing

    NASA Astrophysics Data System (ADS)

    Subedar, Mahesh; Karam, Lina J.

    2014-03-01

    Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination was studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on the blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case where both the eyes will observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as we vary the disparity value. This further indicates that binocular disparity does not affect blur discrimination thresholds and the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We have presented fitting of the Weber model to the 3D blur discrimination thresholds measured from the subjective experiments.
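
The record reports fitting "the Weber model" to the measured thresholds. As a sketch, assume the common affine parameterization threshold = w·(reference blur) + intercept (an assumption; the paper's exact form may differ) and fit it by ordinary least squares:

```python
def fit_weber(ref_blurs, thresholds):
    """Least-squares fit of an affine Weber model
        threshold = w * ref + intercept,
    one common parameterization of Weber-law blur discrimination data.
    Returns (w, intercept)."""
    n = len(ref_blurs)
    mx = sum(ref_blurs) / n
    my = sum(thresholds) / n
    sxx = sum((x - mx) ** 2 for x in ref_blurs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref_blurs, thresholds))
    w = sxy / sxx
    return w, my - w * mx
```

A flat fitted slope across disparity conditions would express the paper's finding that disparity leaves the thresholds unchanged.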

  1. Blur Detection is Unaffected by Cognitive Load.

    PubMed

    Loschky, Lester C; Ringer, Ryan V; Johnson, Aaron P; Larson, Adam M; Neider, Mark; Kramer, Arthur F

    2014-03-01

Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, blur detection in real-world scene images appears to be unaffected by attentional resources, as manipulated by the cognitive load of the N-back task.
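
The thresholds above were measured with adaptive threshold estimation. A minimal 1-up/2-down staircase sketch (the study's actual adaptive procedure is not specified in this abstract; the function, parameters, and the deterministic simulated observer are all illustrative):

```python
def staircase_threshold(detects, start, step, n_trials=60):
    """1-up/2-down staircase: two consecutive detections lower the blur
    level, one miss raises it.  Returns the mean of the reversal levels
    as the threshold estimate.  Assumes `start` is above threshold so at
    least one reversal occurs."""
    level = start
    consecutive = 0
    last_dir = 0          # -1 descending, +1 ascending
    reversals = []
    for _ in range(n_trials):
        if detects(level):
            consecutive += 1
            if consecutive == 2:
                consecutive = 0
                if last_dir == +1:
                    reversals.append(level)
                level -= step
                last_dir = -1
        else:
            consecutive = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    return sum(reversals) / len(reversals)
```

For a stochastic observer this rule converges near the 70.7%-correct point; with the deterministic observer below it simply brackets the true threshold within one step.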

  2. Blind image deblurring based on trained dictionary and curvelet using sparse representation

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao

    2015-04-01

Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and many factors can produce it. If objects in the scene move quickly, or the camera moves during the exposure interval, the image blurs along the direction of relative motion between the camera and the scene, as with camera shake or atmospheric turbulence. Recently, the sparse representation model has been widely used in signal and image processing as an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary, learned from training image samples via the K-SVD algorithm, is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise-smooth function in the image domain whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system are highly sparse, which improves robustness to noise and better satisfies the observer's visual demands. With these two priors, we construct a restoration model for blurred images and solve the optimization problem with an alternating minimization technique. The experimental results show that the method preserves the texture of the original images and effectively suppresses ringing artifacts.

  3. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and dark channel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indices.

  4. Effect of visual target blurring on accommodation under distance viewing

    NASA Astrophysics Data System (ADS)

    Iwata, Yo; Handa, Tomoya; Ishikawa, Hitoshi

    2018-04-01

Purpose To examine the effect of visual target blurring on accommodation. Methods We evaluated objective refraction values when the visual target (an asterisk; 8°) was changed from a state without Gaussian blur (15 s) to a state with Gaussian blur [0 (without blur) → 10, 0 → 50, 0 → 100; 15 s each]. Results With Gaussian blur 10, the refraction value did not change significantly when the target blurred. With Gaussian blur 50 and 100, the refraction value became significantly myopic when the target blurred. Conclusion Blurring of a distant visual target induces an accommodative response.

  5. MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner

    PubMed Central

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory

    2011-01-01

Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. Methods To account for motion, the PET prompt and random coincidences as well as the sensitivity data are processed in line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram-space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high-temporal-resolution MR-based motion tracking techniques. Results After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. 
Conclusion A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High temporal resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility which could benefit a large number of neurological applications. PMID:21189415
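
The core of the LOR-space correction above is applying a rigid-body motion estimate to both endpoints of each line of response. A sketch for the special case of a rotation about the z axis plus a translation (a full implementation would build the 3×3 rotation matrix from the MR-derived 6-parameter estimate; the function name is mine):

```python
import math

def transform_lor(p1, p2, angle_z, translation):
    """Apply a rigid-body motion estimate (rotation about z, then
    translation) to the two endpoints of a PET line of response.
    p1, p2: (x, y, z) tuples; translation: (tx, ty, tz)."""
    c, s = math.cos(angle_z), math.sin(angle_z)
    def move(p):
        x, y, z = p
        return (c * x - s * y + translation[0],
                s * x + c * y + translation[1],
                z + translation[2])
    return move(p1), move(p2)
```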

  6. Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.

    2015-11-01

The integration of physiological (PET) and anatomical (CT) images in cancer delineation requires an accurate spatial registration technique. Although a hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motion. This paper proposes a hybrid feature-intensity registration technique for the hybrid PET/CT scanner. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the segmented structures (lung, heart, and tumor) in both images. The registration was based on a modified multi-modality demon registration with a multiresolution scheme. Apart from visually observable improvements, the proposed technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets show greater improvements in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
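
The abstract does not define its "hybrid mean-median" pre-filter exactly. One plausible reading (an assumption on my part) outputs, at each pixel, the median of the local mean, the local median, and the centre value; a 2-D sketch of that reading:

```python
def hybrid_mean_median(img, radius=1):
    """Hybrid mean-median filter, 2-D sketch of one plausible reading of
    the paper's '3D hybrid mean-median' pre-filter: at each pixel output
    the median of {window mean, window median, centre value}."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = sorted(img[j][i]
                         for j in range(max(0, y - radius), min(h, y + radius + 1))
                         for i in range(max(0, x - radius), min(w, x + radius + 1)))
            mean = sum(win) / len(win)
            median = win[len(win) // 2]
            out[y][x] = sorted([mean, median, img[y][x]])[1]
    return out
```

Like a plain median it suppresses impulse noise, while the mean term retains some smoothing of the centre value.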

  7. Blur Clarified: A Review and Synthesis of Blur Discrimination

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J.

    2011-01-01

Blur is an important attribute of human spatial vision, and sensitivity to blur has been the subject of considerable experimental research and theoretical modeling. Often these models have invoked specialized concepts or mechanisms, such as intrinsic blur, multiple channels, or blur estimation units. In this paper we review several experimental studies of blur discrimination and find that they are in broad empirical agreement. Contrary to previous modeling efforts, however, we find that the essential features of blur discrimination are fully accounted for by a visible contrast energy model (ViCE), in which two spatial patterns are distinguished when the integrated difference between their masked local contrast energy responses reaches a threshold value.

  8. State-space estimation of the input stimulus function using the Kalman filter: a communication system model for fMRI experiments.

    PubMed

    Ward, B Douglas; Mazaheri, Yousef

    2006-12-15

    The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
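
The estimation engine above is a Kalman filter. A minimal scalar Kalman filter sketch (random-walk state, direct noisy observation); the paper's state-space model additionally convolves the input stimulus with the voxel IRF, which this sketch deliberately omits:

```python
def kalman_filter(observations, q=1e-4, r=0.25):
    """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k
    observed as z_k = x_k + v_k, with process variance q and observation
    variance r.  Returns the filtered state estimates."""
    x, p = observations[0], 1.0
    estimates = [x]
    for z in observations[1:]:
        p = p + q                 # predict: variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With q small relative to r the filter behaves like a running average, progressively discounting the observation noise.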

  9. The natural statistics of blur

    PubMed Central

    Sprague, William W.; Cooper, Emily A.; Reissier, Sylvain; Yellapragada, Baladitya; Banks, Martin S.

    2016-01-01

    Blur from defocus can be both useful and detrimental for visual perception: It can be useful as a source of depth information and detrimental because it degrades image quality. We examined these aspects of blur by measuring the natural statistics of defocus blur across the visual field. Participants wore an eye-and-scene tracker that measured gaze direction, pupil diameter, and scene distances as they performed everyday tasks. We found that blur magnitude increases with increasing eccentricity. There is a vertical gradient in the distances that generate defocus blur: Blur below the fovea is generally due to scene points nearer than fixation; blur above the fovea is mostly due to points farther than fixation. There is no systematic horizontal gradient. Large blurs are generally caused by points farther rather than nearer than fixation. Consistent with the statistics, participants in a perceptual experiment perceived vertical blur gradients as slanted top-back whereas horizontal gradients were perceived equally as left-back and right-back. The tendency for people to see sharp as near and blurred as far is also consistent with the observed statistics. We calculated how many observations will be perceived as unsharp and found that perceptible blur is rare. Finally, we found that eye shape in ground-dwelling animals conforms to that required to put likely distances in best focus. PMID:27580043
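
The defocus blur magnitudes discussed above follow thin-lens geometry: in the small-angle approximation, the angular blur-circle diameter is the pupil diameter times the dioptric difference between fixation and the scene point. A sketch of that standard formula (not taken verbatim from the paper):

```python
def defocus_blur_diameter(pupil_mm, fixation_m, point_m):
    """Angular diameter (radians) of the defocus blur circle for a scene
    point at `point_m` metres when the eye is focused at `fixation_m`
    metres, using the thin-lens small-angle approximation:
        beta = A * |1/z_fix - 1/z_point|
    with pupil diameter A converted to metres."""
    A = pupil_mm / 1000.0
    return A * abs(1.0 / fixation_m - 1.0 / point_m)
```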

  10. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. 
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
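
The view-interpolation step evaluated above fills in the projection views missing from a sparse acquisition. A minimal linear view-interpolation sketch (the helper and its wrap-around convention are mine; it assumes views are periodic over a full rotation, as in the simulated protocols):

```python
def interpolate_views(sparse_views, factor):
    """Linearly interpolate `factor - 1` synthetic views between each
    pair of adjacent measured views, wrapping around the rotation.
    Each view is a list of detector samples; the result has
    len(sparse_views) * factor views."""
    n = len(sparse_views)
    dense = []
    for i in range(n):
        a, b = sparse_views[i], sparse_views[(i + 1) % n]
        for j in range(factor):
            t = j / factor
            dense.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    return dense
```

This is the kind of estimate that helped FDK in the study; MBIR, which models the sparse data directly, did not need it.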

  11. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    PubMed Central

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-01-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. Methods We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. 
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Conclusions Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode. PMID:26352168

  12. An Aggregated Method for Determining Railway Defects and Obstacle Parameters

    NASA Astrophysics Data System (ADS)

    Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat

    2018-03-01

A method combining image blur analysis and stereo vision algorithms is proposed to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles. To estimate the distance deviation as a function of blur, a statistical approach and standard logarithmic, exponential, and linear functions are used; the statistical approach includes the least-squares and least-modules estimation methods. The accuracy of determining the distance to the object, its speed, and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. This method is based on the physical dependence of the blur in the obtained image on the distance to the object, the focal length, and the aperture of the lens. In calculating the blur spot diameter it is assumed that blur spreads from each point equally in all directions. With the proposed approach, the distance to the studied object and its blur can be determined by analyzing a series of images obtained from the video detector with different settings. The article proposes and scientifically substantiates new and improved methods for detecting the parameters of static and moving objects of control, compares the results of the various methods, and reports experimental results. It is shown that the aggregated method gives the best approximation to the real distances.
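
One simple way to aggregate the two distance cues described above is inverse-variance weighting of the blur-based and stereo-based estimates. This fusion rule is my assumption, not necessarily the paper's (which also considers least-squares and least-modules fits):

```python
def fuse_distances(estimates):
    """Inverse-variance weighted fusion of distance estimates from
    different cues, e.g. [(z_blur, var_blur), (z_stereo, var_stereo)].
    Returns the fused distance and its (reduced) variance."""
    weights = [1.0 / var for _, var in estimates]
    z = sum(w * d for w, (d, _) in zip(weights, estimates)) / sum(weights)
    return z, 1.0 / sum(weights)
```

The fused variance is always smaller than that of either cue alone, which is the usual motivation for aggregating blur and stereo measurements.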

  13. Effects of Optical Blur Reduction on Equivalent Intrinsic Blur

    PubMed Central

    Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz

    2015-01-01

Purpose To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods Twelve visually normal individuals (age: 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was only marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538
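
Equivalent intrinsic blur is commonly estimated by assuming that blur sources add in quadrature, σ_obs² = σ_int² + σ_ext², and fitting σ_int from performance at several external blur levels. A sketch of that quadrature fit (the paper cites a previously described model, which this only approximates):

```python
def estimate_intrinsic_blur(external_blurs, observed_blurs):
    """Estimate sigma_int under the quadrature model
        sigma_obs^2 = sigma_int^2 + sigma_ext^2,
    taking sigma_int^2 as the mean of (sigma_obs^2 - sigma_ext^2)
    across the tested external blur levels, clipped at zero."""
    diffs = [o * o - e * e for e, o in zip(external_blurs, observed_blurs)]
    s2 = sum(diffs) / len(diffs)
    return max(0.0, s2) ** 0.5
```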

  14. Effects of optical blur reduction on equivalent intrinsic blur.

    PubMed

    Kord Valeshabad, Ali; Wanek, Justin; McAnany, J Jason; Shahidi, Mahnaz

    2015-04-01

    To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Twelve visually normal subjects (mean [±SD] age, 31 [±12] years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) caused by high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. σopt and σint were significantly reduced and visual acuity was significantly improved after AO correction (p ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, p ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although it was marginally significant (p = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, p < 0.001), and the two parameters were related linearly with a slope of 0.46. Reduction in equivalent intrinsic blur was greater than the reduction in optical blur after AO correction of wavefront error. This finding implies that visual acuity in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.

  15. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

Motion due to digital camera movement during the image capture process is a major factor that degrades the quality of images, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A very popular technique to estimate the PSF relies on using a pair of gyroscopic sensors to measure the hand motion. However, the errors caused either by the loss of the translational component of the movement or by the lack of precision in gyro-sensor measurements impede the achievement of a good-quality restored image. In order to compensate for this, we propose a method that begins with an estimation of the PSF obtained from 2 gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimation of the PSF is generated from the output signal of the 2 gyro sensors. The PSF coefficients are updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation. The quality of the restored image is also improved compared to the 2-gyro-only approach or to blind image deconvolution results.
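
The LMS refinement step above can be illustrated in one dimension: adapt blur-kernel coefficients so that the (luminance-equalized) sharp signal, convolved with the kernel, predicts the blurred signal. The paper uses a 2-D LMS with a coarse-to-fine grid; this 1-D core and its names are mine:

```python
def lms_identify_psf(sharp, blurred, taps, mu=0.05, passes=1000):
    """1-D LMS identification of a blur kernel: predict each blurred
    sample from a window of the sharp signal and nudge the kernel
    coefficients along the instantaneous error gradient."""
    w = [0.0] * taps
    for _ in range(passes):
        for n in range(taps - 1, len(sharp)):
            window = sharp[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ...
            y = sum(wi * xi for wi, xi in zip(w, window))
            e = blurred[n] - y                          # prediction error
            w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w
```

In the paper this adaptation starts from the gyro-derived PSF rather than from zero, so far fewer iterations are needed.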

  16. Fast restoration approach for motion blurred image based on deconvolution under the blurring paths

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Song, Jie; Hua, Xia

    2015-12-01

For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed at about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that rotates the blurred image along the blurring path. The computational time is thereby reduced sharply by using the one-dimensional Fast Fourier Transform within a one-dimensional Richardson-Lucy method. In order to obtain accurate transformation results, an interpolation method is incorporated to obtain the gray values. Experimental results demonstrate that the proposed approach is efficient and effective at removing motion blur along the blur paths.
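
The workhorse above is one-dimensional Richardson-Lucy deconvolution along the blur path. A 1-D sketch with circular convolution, written FFT-free for clarity (the paper's speedup comes precisely from doing these convolutions with 1-D FFTs):

```python
def conv_circ(x, h):
    """Circular convolution of signal x with kernel h (same length)."""
    n = len(x)
    return [sum(h[k] * x[(i - k) % n] for k in range(n)) for i in range(n)]

def corr_circ(x, h):
    """Circular correlation with h (the adjoint of conv_circ)."""
    n = len(x)
    return [sum(h[k] * x[(i + k) % n] for k in range(n)) for i in range(n)]

def richardson_lucy(blurred, psf, iterations=30):
    """1-D Richardson-Lucy iteration  f <- f * corr(g / conv(f, h), h),
    assuming the PSF sums to 1 and the data are strictly positive."""
    f = list(blurred)
    for _ in range(iterations):
        est = conv_circ(f, psf)
        ratio = [g / max(e, 1e-12) for g, e in zip(blurred, est)]
        f = [fi * c for fi, c in zip(f, corr_circ(ratio, psf))]
    return f
```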

  17. Response normalization and blur adaptation: Data and multi-scale model

    PubMed Central

    Elliott, Sarah L.; Georgeson, Mark A.; Webster, Michael A.

    2011-01-01

    Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log–log) slopes from −2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels. PMID:21307174

  18. Adapting to blur produced by ocular high-order aberrations

    PubMed Central

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-01-01

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer’s HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image. PMID:21712375

  19. Adapting to blur produced by ocular high-order aberrations.

    PubMed

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-06-28

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer's HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image.

  20. Blurred digital mammography images: an analysis of technical recall and observer detection performance.

    PubMed

    Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-03-01

    Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. Technical recall rate for both monitors and angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between 2.3- and 5-MP monitors. The technical recall rate for 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2 (1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
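
    The study's χ2 comparison between monitors can be illustrated with the standard Pearson statistic for a 2×2 contingency table. A minimal sketch; the counts in the test are invented for illustration and are not the study's data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table.

    Rows: monitor (2.3 MP vs 5 MP); columns: blur detected / missed.
    For a 2x2 table the statistic reduces to the closed form
    N*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), with 1 degree of freedom.
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

    The resulting statistic is compared against the χ2 distribution with 1 degree of freedom to obtain the p-value reported in the abstract.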

  1. Blurred digital mammography images: an analysis of technical recall and observer detection performance

    PubMed Central

    Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-01-01

    Objective: Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. Methods: 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. Technical recall rate for both monitors and angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between 2.3- and 5-MP monitors. Results: The technical recall rate for 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2 (1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. Conclusion: According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring. PMID:28134567

  2. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.
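
    The Gaussian defocus degradation model used above to simulate blur amounts to convolution with a normalized Gaussian PSF. A minimal pure-NumPy sketch; the separable implementation, edge padding, and truncation radius are implementation choices assumed here, not specified by the paper:

```python
import numpy as np

def gaussian_defocus(img, sigma, radius=None):
    """Simulate defocus blur by convolving with a normalized Gaussian PSF.

    Uses the separability of the 2-D Gaussian: one 1-D pass over rows,
    one over columns. Edge padding avoids dark borders.
    """
    if radius is None:
        radius = int(3 * sigma)        # truncate the kernel at ~3 sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                       # normalize so flat regions are preserved
    pad = radius
    out = np.pad(img.astype(float), pad, mode='edge')
    out = np.apply_along_axis(lambda r: np.convolve(r, g, 'same'), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, g, 'same'), 0, out)
    return out[pad:-pad, pad:-pad]
```

    Varying `sigma` produces the "different degrees of blurring" under which the structure layer's stability is examined.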

  3. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. 
    A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. Copyright © 1998 Elsevier Inc. All rights reserved.

  4. The Effect of Dioptric Blur on Reading Performance

    PubMed Central

    Chung, Susana T.L.; Jarvis, Samuel H.; Cheung, Sing-Hang

    2013-01-01

    Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, for five levels of optical blur (no blur, 0.5, 1, 2 and 3D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using 4-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2D, but was ~23% slower for 3D of blur. When the amount of blur increased from 0 (no blur) to 3D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity and visual acuity indicate that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed. PMID:17442363

  5. A blur-invariant local feature for motion blurred image matching

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Aoki, Terumasa

    2017-07-01

    Image matching between a blurred (caused by camera motion, out of focus, etc.) image and a non-blurred image is a critical task for many image/video applications. However, most existing local feature schemes fail at this task. This paper presents a blur-invariant descriptor and a novel local feature scheme, including the descriptor and an interest point detector based on moment symmetry (the authors' previous work). The descriptor is based on a new concept, the center peak moment-like element (CPME), which is robust to blur and boundary effects. By constructing CPMEs, the descriptor is also distinctive and thus suitable for image matching. Experimental results show our scheme outperforms state-of-the-art methods for blurred image matching.

  6. Role of parafovea in blur perception.

    PubMed

    Venkataraman, Abinaya Priya; Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Lundström, Linda; Marcos, Susana

    2017-09-01

    The blur experienced by our visual system is not uniform across the visual field. Additionally, lens designs with variable power profile such as contact lenses used in presbyopia correction and to control myopia progression create variable blur from the fovea to the periphery. The perceptual changes associated with varying blur profile across the visual field are unclear. We therefore measured the perceived neutral focus with images of different angular subtense (from 4° to 20°) and found that the amount of blur, for which focus is perceived as neutral, increases when the stimulus was extended to cover the parafovea. We also studied the changes in central perceived neutral focus after adaptation to images with similar magnitude of optical blur across the image or varying blur from center to the periphery. Altering the blur in the periphery had little or no effect on the shift of perceived neutral focus following adaptation to normal/blurred central images. These perceptual outcomes should be considered while designing bifocal optical solutions for myopia or presbyopia. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Can sinogram-affirmed iterative (SAFIRE) reconstruction improve imaging quality on low-dose lung CT screening compared with traditional filtered back projection (FBP) reconstruction?

    PubMed

    Yang, Wen Jie; Yan, Fu Hua; Liu, Bo; Pang, Li Fang; Hou, Liang; Zhang, Huan; Pan, Zi Lai; Chen, Ke Min

    2013-01-01

    To evaluate the performance of sinogram-affirmed iterative (SAFIRE) reconstruction on the image quality of low-dose lung computed tomographic (CT) screening compared with filtered back projection (FBP). Three hundred four patients undergoing annual low-dose lung CT screening were examined by a dual-source CT system at 120 kilovolt (peak) with a reference tube current of 40 mA·s. Six image series were reconstructed, including one data set with FBP and 5 data sets with SAFIRE at reconstruction strengths from 1 to 5. Image noise was recorded, and subjective scores of image noise, image artifacts, and overall image quality were also assessed by 2 radiologists. The mean ± SD weight for all patients was 66.3 ± 12.8 kg, and the body mass index was 23.4 ± 3.2. The mean ± SD dose-length product was 95.2 ± 30.6 mGy·cm, and the mean ± SD effective dose was 1.6 ± 0.5 mSv. The observer agreements for image noise grade, artifact grade, and overall image quality were 0.785, 0.595 and 0.512, respectively. Among the 6 data sets, both the measured mean objective image noise and the subjective image noise of FBP were the highest, and image noise decreased with increasing SAFIRE reconstruction strength. The data set reconstructed at strength 3 (S3) obtained the best image quality scores. Sinogram-affirmed iterative reconstruction can significantly improve the image quality of low-dose lung CT screening compared with FBP, and SAFIRE with reconstruction strength 3 was a pertinent choice for low-dose lung CT.

  8. Image-Based 2D Re-Projection for Attenuation Substitution in PET Neuroimaging.

    PubMed

    Laymon, Charles M; Minhas, Davneet S; Becker, Carl R; Matan, Cristy; Oborski, Matthew J; Price, Julie C; Mountz, James M

    2018-02-27

    In dual modality positron emission tomography (PET)/magnetic resonance imaging (MRI), attenuation correction (AC) methods are continually improving. Although a new AC can sometimes be generated from existing MR data, its application requires a new reconstruction. We evaluate an approximate 2D projection method that allows offline image-based reprocessing. 2-Deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) brain scans were acquired (Siemens HR+) for six subjects. Attenuation data were obtained using the scanner's transmission source (SAC). Additional scanning was performed on a Siemens mMR including production of a Dixon-based MR AC (MRAC). The MRAC was imported to the HR+ and the PET data were reconstructed twice: once using native SAC (ground truth); once using the imported MRAC (imperfect AC). The re-projection method was implemented as follows. The MRAC PET was forward projected to approximately reproduce attenuation-corrected sinograms. The SAC and MRAC images were forward projected and converted to attenuation-correction factors (ACFs). The MRAC ACFs were removed from the MRAC PET sinograms by division; the SAC ACFs were applied by multiplication. The regenerated sinograms were reconstructed by filtered back projection to produce images (SUBAC PET) in which SAC has been substituted for MRAC. Ideally SUBAC PET should match SAC PET. Via coregistered T1 images, FreeSurfer (FS; MGH, Boston) was used to define a set of cortical gray matter regions of interest. Regional activity concentrations were extracted for SAC PET, MRAC PET, and SUBAC PET. SUBAC PET showed substantially smaller root mean square error than MRAC PET with averaged values of 1.5 % versus 8.1 %. Re-projection is a viable image-based method for the application of an alternate attenuation correction in neuroimaging.
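
    The sinogram-domain substitution described above is algebraically simple: ACFs are exponentials of the line integrals of the attenuation map, the old correction is divided out, and the new one is multiplied in. A toy sketch with a crude rotate-and-sum parallel-beam projector; the real method uses the scanner geometry and ends with an FBP reconstruction, both omitted here, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles):
    """Toy parallel-beam projector: rotate the image and sum columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def substitute_attenuation(pet_corr_sino, mu_old, mu_new, angles):
    """Swap attenuation corrections in sinogram space.

    pet_corr_sino: PET sinogram already corrected with mu_old.
    ACF = exp(line integrals of mu); divide out the old correction,
    multiply in the new one, as in the re-projection method above.
    """
    acf_old = np.exp(forward_project(mu_old, angles))
    acf_new = np.exp(forward_project(mu_new, angles))
    return pet_corr_sino / acf_old * acf_new
```

    In the paper the substituted sinograms are then reconstructed with filtered back projection to yield the SUBAC PET images.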

  9. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient-edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization, in which we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent-image reconstruction, an improved adaptive deconvolution algorithm based on a TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
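
    The paper's final step recovers the latent image with an adaptive TV-l2 deconvolution. As a hedged stand-in, the simpler l2 (Tikhonov) gradient prior admits a closed-form Fourier-domain solution; the paper's adaptive, spatially varying regularization weight is not reproduced here, and the helper names are illustrative:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a PSF to `shape` and circularly centre it at the origin."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def l2_deconv(y, psf, lam=1e-2):
    """Non-blind deconvolution with an l2 gradient prior.

    Solves argmin_x ||k*x - y||^2 + lam*||grad x||^2 in closed form:
    X = conj(K)Y / (|K|^2 + lam*(|Dx|^2 + |Dy|^2)), a linear
    simplification of the TV-l2 model.
    """
    K = psf2otf(psf, y.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), y.shape)   # horizontal difference operator
    Dy = psf2otf(np.array([[1.0], [-1.0]]), y.shape) # vertical difference operator
    num = np.conj(K) * np.fft.fft2(y)
    den = np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```

    Replacing the l2 gradient penalty with a true TV norm removes the closed form and requires iterative solvers (e.g. half-quadratic splitting), which is where the paper's adaptive weighting enters.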

  10. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.

  11. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  12. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation by combining block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
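
    The SAD cost function used above for boundary disparities can be sketched as winner-take-all block matching. This is a dense version for clarity; the paper applies the cost only at segment boundaries, and the function name and matching convention are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp, block=5):
    """Winner-take-all block matching with a SAD cost.

    For each left-image pixel, pick the disparity d minimising the
    block-averaged absolute difference between the left patch and the
    right patch shifted d pixels (left[x] is matched to right[x-d]).
    """
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d])
        cost[d, :, d:] = uniform_filter(diff, size=block)  # block-averaged SAD
    return cost.argmin(axis=0)                             # winner-take-all
```

    Normalized cross correlation can be substituted for the SAD cost at higher computational expense, which is the comparison reported in the experiments.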

  13. Non-Parametric Blur Map Regression for Depth of Field Extension.

    PubMed

    D'Andres, Laurent; Salvador, Jordi; Kochale, Axel; Susstrunk, Sabine

    2016-04-01

    Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded by visible misfocus or an overly shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.

  14. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem of CFs under motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional time cost. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs advantageously compared with top-ranked trackers.
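
    KCF builds on the correlation-filter idea of learning a filter whose response map peaks at the target centre. The following MOSSE-style linear sketch illustrates only that underlying idea; it omits KCF's kernel trick and circulant-matrix training as well as the STC fusion described above, and all names are illustrative:

```python
import numpy as np

def train_mosse(patches, desired_response, lam=1e-2):
    """Train a MOSSE-style correlation filter in the Fourier domain.

    H = sum(G * conj(F_i)) / (sum(F_i * conj(F_i)) + lam), where G is
    the spectrum of the desired response (a narrow Gaussian peak) and
    lam regularizes against division by near-zero energy.
    """
    num = 0.0
    den = lam
    G = np.fft.fft2(desired_response)
    for p in patches:
        F = np.fft.fft2(p)
        num = num + G * np.conj(F)
        den = den + F * np.conj(F)
    return num / den

def respond(H, patch):
    """Correlation response map for a new patch; the peak gives the target shift."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```

    KCF replaces this linear regression with kernel ridge regression over all circular shifts of the patch, which is what gives it its accuracy at similar speed.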

  15. A novel rotational invariants target recognition method for rotating motion blurred images

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    The image formed by the sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. The traditional approach of restoring the image first and then identifying the target can improve the recognition rate, but it is too slow. To solve this problem, a rotational-blur-invariant extraction model was constructed that recognizes the target directly. The model comprises three metric layers containing, respectively, a gray-value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants; their descriptive power ranges from low to high, and the layer with the lowest descriptive power serves as the input stage, gradually eliminating non-target pixels from the degraded image. Experimental results show that the proposed model improves the correct recognition rate for blurred images while balancing computational complexity against regional descriptive power.

  16. Identification of handheld objects for electro-optic/FLIR applications

    NASA Astrophysics Data System (ADS)

    Moyer, Steve K.; Flug, Eric; Edwards, Timothy C.; Krapels, Keith A.; Scarbrough, John

    2004-08-01

    This paper describes research on the determination of the fifty-percent probability of identification cycle criterion (N50) for two sets of handheld objects. The first set consists of 12 objects which are commonly held in a single hand. The second set consists of 10 objects commonly held in both hands. These sets consist of not only typical civilian handheld objects but also objects that are potentially lethal. A pistol, a cell phone, a rocket propelled grenade (RPG) launcher, and a broom are examples of the objects in these sets. The discrimination of these objects is an inherent part of homeland security, force protection, and also general population security. Objects were imaged from each set in the visible and mid-wave infrared (MWIR) spectrum. Various levels of blur are then applied to these images. These blurred images were then used in a forced choice perception experiment. Results were analyzed as a function of blur level and target size to give identification probability as a function of resolvable cycles on target. These results are applicable to handheld object target acquisition estimates for visible imaging systems and MWIR systems. This research provides guidance in the design and analysis of electro-optical systems and forward-looking infrared (FLIR) systems for use in homeland security, force protection, and also general population security.

  17. Adaptation to interocular differences in blur

    PubMed Central

    Kompaniez, Elysse; Sawides, Lucie; Marcos, Susana; Webster, Michael A.

    2013-01-01

    Adaptation to a blurred image causes a physically focused image to appear too sharp, and shifts the point of subjective focus toward the adapting blur, consistent with a renormalization of perceived focus. We examined whether and how this adaptation normalizes to differences in blur between the two eyes, which can routinely arise from differences in refractive errors. Observers adapted to images filtered to simulate optical defocus or different axes of astigmatism, as well as to images that were isotropically blurred or sharpened by varying the slope of the amplitude spectrum. Adaptation to the different types of blur produced strong aftereffects that showed strong transfer across the eyes, as assessed both in a monocular adaptation task and in a contingent adaptation task in which the two eyes were simultaneously exposed to different blur levels. Selectivity for the adapting eye was thus generally weak. When one eye was exposed to a sharper image than the other, the aftereffects also tended to be dominated by the sharper image. Our results suggest that while short-term adaptation can rapidly recalibrate the perception of blur, it cannot do so independently for the two eyes, and that the binocular adaptation of blur is biased by the sharper of the two eyes' retinal images. PMID:23729770

  18. Metallic artifact mitigation and organ-constrained tissue assignment for Monte Carlo calculations of permanent implant lung brachytherapy.

    PubMed

    Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M

    2014-01-01

    To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. 
Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra with the largest differences for (103)Pd seeds and smallest but still considerable differences for (131)Cs seeds. Despite producing differences in CT images, dose metrics calculated using the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
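    The simple threshold replacement (STR) step described above lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: voxel values above a CT-number threshold within a window around each known seed position are replaced by the median of the surrounding below-threshold voxels (the window size, threshold, and function name are illustrative assumptions).

```python
import numpy as np

def simple_threshold_replacement(ct, seeds, radius=5, hu_thresh=300):
    """Hypothetical STR sketch: near each seed, replace high CT numbers
    (likely metal artifact) with the median of the surrounding
    non-artifact voxels."""
    out = ct.astype(float).copy()
    for sy, sx in seeds:
        y0, y1 = max(0, sy - radius), min(ct.shape[0], sy + radius + 1)
        x0, x1 = max(0, sx - radius), min(ct.shape[1], sx + radius + 1)
        patch = out[y0:y1, x0:x1]          # view into `out`
        bad = patch > hu_thresh
        if bad.any() and (~bad).any():
            patch[bad] = np.median(patch[~bad])
    return out
```

Voxels far from any seed are left untouched, which is the point of restricting the replacement to a seed neighborhood.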

  19. Comparison of ring artifact removal methods using flat panel detector based CT images

    PubMed Central

    2011-01-01

    Background Ring artifacts are the concentric rings superimposed on the tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques so far reported in the literature can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques; the other category performs processing on the 2-D reconstructed images and is recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class adaptive correction schemes is also included in this comparative study. The first sinogram domain correction method uses a wavelet based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domain. On the other hand, the two post-processing based correction techniques actually operate on the polar transform domain of the reconstructed CT images. 
The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. On the other hand, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured objects and also in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in retaining the image information (e.g., a small object at the iso-center) accurately in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality of the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411

  20. Generation of hybrid sinograms for the recovery of kV-CT images with metal artifacts for helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Hosang; Park, Dahl; Kim, Wontaek

    Purpose: The overall goal of this study is to restore kilovoltage computed tomography (kV-CT) images which are disfigured by patients’ metal prostheses. By generating a hybrid sinogram that is a combination of kV and megavoltage (MV) projection data, the authors suggest a novel metal artifact-reduction (MAR) method that retains the image quality to match that of kV-CT and simultaneously restores the information of metal prostheses lost due to photon starvation. Methods: CT projection data contain information about attenuation coefficients and the total length of the attenuation. By normalizing raw kV projections with their own total lengths of attenuation, mean attenuation projections were obtained. In the same manner, mean density projections of MV-CT were obtained by the normalization of MV projections resulting from the forward projection of density-calibrated MV-CT images with the geometric parameters of the kV-CT device. To generate the hybrid sinogram, metal-affected signals of the kV sinogram were identified and replaced by the corresponding signals of the MV sinogram following a density calibration step with kV data. Filtered backprojection was implemented to reconstruct the hybrid CT image. To validate the authors’ approach, they simulated four different scenarios for three heads and one pelvis using metallic rod inserts within a cylindrical phantom. Five inserts describing human body elements were also included in the phantom. The authors compared the image qualities among the kV, MV, and hybrid CT images by measuring the contrast-to-noise ratio (CNR), the signal-to-noise ratio (SNR), the densities of all inserts, and the spatial resolution. In addition, the MAR performance was compared among three existing MAR methods and the authors’ hybrid method. 
Finally, for clinical trials, the authors produced hybrid images of three patients having dental metal prostheses to compare their MAR performances with those of the kV, MV, and three existing MAR methods. Results: The authors compared the image quality and MAR performance of the hybrid method with those of other imaging modalities and the three MAR methods, respectively. The total measured mean of the CNR (SNR) values for the nonmetal inserts was determined to be 14.3 (35.3), 15.3 (37.8), and 25.5 (64.3) for the kV, MV, and hybrid images, respectively, and the spatial resolutions of the hybrid images were similar to those of the kV images. The measured densities of the metal and nonmetal inserts in the hybrid images were in good agreement with their true densities, except in cases of extremely low densities, such as air and lung. Using the hybrid method, major streak artifacts were suitably removed and no secondary artifacts were introduced in the resultant image. In clinical trials, the authors verified that kV and MV projections were successfully combined and turned into the resultant hybrid image with high image contrast, accurate metal information, and few metal artifacts. The hybrid method also outperformed the three existing MAR methods with regard to metal information restoration and secondary artifact prevention. Conclusions: The authors have shown that the hybrid method can restore the overall image quality of kV-CT disfigured by severe metal artifacts and restore the information of metal prostheses lost due to photon starvation. The hybrid images may allow for the improved delineation of structures of interest and accurate dose calculations for radiation treatment planning for patients with metal prostheses.
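    The core substitution step, replacing metal-affected kV sinogram bins with density-calibrated MV bins, can be sketched as follows. This is a simplified stand-in for the authors' pipeline: it assumes a given metal mask in sinogram space and a simple linear kV/MV calibration fitted on artifact-free bins (the function name and the linear model are illustrative assumptions).

```python
import numpy as np

def hybrid_sinogram(kv, mv, metal_mask):
    """Sketch of the hybrid-sinogram idea: calibrate MV projection values
    to the kV scale using the artifact-free bins, then substitute the
    metal-affected kV bins with calibrated MV values."""
    good = ~metal_mask
    # Linear calibration mv -> kv, fitted on unaffected bins only.
    a, b = np.polyfit(mv[good], kv[good], 1)
    hybrid = kv.copy()
    hybrid[metal_mask] = a * mv[metal_mask] + b
    return hybrid
```

In the paper the calibration is a density calibration with kV data; a first-order fit is the simplest placeholder for that step.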

  1. Recognition of blurred images by the method of moments.

    PubMed

    Flusser, J; Suk, T; Saic, S

    1996-01-01

    The article is devoted to the feature-based recognition of blurred images acquired by a linear shift-invariant imaging system against an image database. The proposed approach consists of describing images by features that are invariant with respect to blur and recognizing images in the feature space. The PSF identification and image restoration are not required. A set of symmetric blur invariants based on image moments is introduced. A numerical experiment is presented to illustrate the utilization of the invariants for blurred image recognition. Robustness of the features is also briefly discussed.
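    As a small numerical illustration of moment-based blur invariance (a simplified property, not Flusser's full invariant set): convolving an image with a centrally symmetric, normalized PSF leaves the third-order central moments unchanged, so such moments can serve as blur-invariant features without PSF identification.

```python
import numpy as np

def conv2_full(f, h):
    """Exact 'full' 2-D convolution, so the moment identity holds
    without boundary truncation."""
    out = np.zeros((f.shape[0] + h.shape[0] - 1, f.shape[1] + h.shape[1] - 1))
    for i in range(h.shape[0]):
        for j in range(h.shape[1]):
            out[i:i + f.shape[0], j:j + f.shape[1]] += h[i, j] * f
    return out

def central_moment(img, p, q):
    """Geometric central moment mu_pq about the image centroid."""
    m00 = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00
    return (((ys - cy) ** p) * ((xs - cx) ** q) * img).sum()
```

For a normalized, centrally symmetric kernel h, every odd-order central moment of h vanishes, so the binomial expansion of the convolution's central moments reduces to mu_30(f*h) = mu_30(f), and likewise for mu_21, mu_12, mu_03.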

  2. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

    Blur is common in digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images. The method uses sparse representation to identify the blur kernel: by analyzing image coordinates at coarse and fine scales, we estimate the kernel and, from it, the motion angle of the blurred image. We then calculate the length of the motion kernel using the Radon and Fourier transforms, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) method, to produce a cleaner, less noisy output. All operations are performed in the MATLAB IDE.
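    The final Lucy-Richardson step can be sketched as the classic Richardson-Lucy multiplicative update. This is a generic sketch in Python rather than the authors' MATLAB code, assuming the PSF is already estimated, non-negative, normalized, and that boundaries are periodic (FFT-based convolution):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy non-blind deconvolution with a known PSF.
    Assumes periodic boundaries; psf is image-sized and centered."""
    k_ft = np.fft.fft2(np.fft.ifftshift(psf))   # move PSF centre to origin
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(est) * k_ft))
        ratio = observed / (reblurred + eps)
        # Correlate the ratio with the PSF (conjugate in Fourier domain).
        est = est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(k_ft)))
    return est
```

The update multiplies the current estimate by the back-projected data/model ratio, so it preserves non-negativity and fits the observed blur progressively better.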

  3. Compensation for Blur Requires Increase in Field of View and Viewing Time

    PubMed Central

    Kwon, MiYoung; Liu, Rong; Chien, Lillian

    2016-01-01

    Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of “views” (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered for developing low vision rehabilitation or assistive aids. PMID:27622710

  4. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. The restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point-spread-function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, the blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs in Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, as revealed by the increased visibility of small details such as small blood vessels and by the lack of restoration artifacts.
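    A toy version of such a validity check might look like the following (my own simplification, not the paper's method: a least-squares fit of the estimated PSF onto a handful of low-order Zernike-style polynomials on the unit disk, flagging PSFs whose energy mostly falls outside that basis; the basis choice and threshold are assumptions):

```python
import numpy as np

def zernike_residual(psf):
    """Fit a PSF onto six low-order Zernike-style polynomials on the unit
    disk; return the fraction of its energy the basis cannot explain.
    A high fraction suggests an implausible (invalid) PSF estimate."""
    n = psf.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x ** 2 + y ** 2
    disk = r2 <= 1.0
    basis = np.stack([np.ones_like(x), x, y, 2 * r2 - 1,
                      x ** 2 - y ** 2, 2 * x * y], axis=-1)
    A, b = basis[disk], psf[disk]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    return np.linalg.norm(resid) / (np.linalg.norm(b) + 1e-12)
```

A PSF built from the basis itself yields a near-zero residual fraction, while an unstructured (noise-like) estimate leaves most of its energy unexplained.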

  5. Using Blur to Affect Perceived Distance and Size

    PubMed Central

    HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.

    2011-01-01

    We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429

  6. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  7. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem of CFs under motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle special scenarios. Finally, extensive experimental results on VOT benchmark datasets show that our algorithm performs favorably compared with the top-ranked trackers. PMID:27618046

  8. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system can still be defocus-blurred due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible. But it is difficult to identify the analytic model of the PSF precisely due to the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the probability and statistics fields, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of the defocus blurred image. Different from the conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled NOT quantum gate to control the output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets from historical images. Test results show that the method achieves high precision and strong generalization ability.

  9. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and of estimating and predicting those edge parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find the parameters that produce the global minimum error. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend, and this trend is similar across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model using experiments on the root mean squared error of fitting each edge model to the observed edge profiles. The comparison results suggest that the proposed edge model outperforms the one-blur-parameter edge model in most cases where edges have varying brightness combinations.
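    A miniature of the two-parameter edge model and its brute-force fit can be sketched as follows. The functional form below, a two-sided exponential ramp with separate dark-side and light-side blur scales, is an illustrative stand-in for the paper's model, and the grid search mirrors the brute-force estimation it describes:

```python
import numpy as np

def edge_model(x, b_dark, b_light, s_dark, s_light):
    """Edge profile with separate blur scales on the dark (x<0) and
    light (x>=0) sides; continuous at the edge location x=0."""
    mid = 0.5 * (b_light - b_dark)
    return np.where(x < 0,
                    b_dark + mid * np.exp(x / s_dark),
                    b_light - mid * np.exp(-x / s_light))

def fit_blur_params(x, profile, b_dark, b_light, grid):
    """Brute-force search over a parameter grid for the global SSE minimum."""
    best, best_err = None, np.inf
    for s_d in grid:
        for s_l in grid:
            err = np.sum((edge_model(x, b_dark, b_light, s_d, s_l) - profile) ** 2)
            if err < best_err:
                best, best_err = (s_d, s_l), err
    return best
```

With the brightness levels known, the search recovers the pair of blur parameters that reproduces an observed profile.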

  10. The influence of structure depth on image blurring of micrometres-thick specimens in MeV transmission electron imaging.

    PubMed

    Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji

    2016-04-01

    This study investigates the influence of structure depth on image blurring of micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometer gold particles embedded in thick epoxy-resin films were acquired in the experiment and compared with simulated images. Then, variations of image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that with a decrease in depth, image blurring increased. This depth-related property was more apparent for thicker specimens. Fortunately, larger particle depth involves less image blurring, even for a 10-μm-thick epoxy-resin film. The quality dependence on depth of a 3D reconstruction of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing electron energy to 2 MeV can reduce blurring and produce an acceptable image quality for thick specimens in the TEM.

  11. Addressing the third gamma problem in PET

    NASA Astrophysics Data System (ADS)

    Schueller, M. J.; Mulnix, T. L.; Christian, B. T.; Jensen, M.; Holm, S.; Oakes, T. R.; Roberts, A. D.; Dick, D. W.; Martin, C. C.; Nickles, R. J.

    2003-02-01

    PET brings the promise of quantitative imaging of the in-vivo distribution of any positron emitting nuclide, a list with hundreds of candidates. All but a few of these, the "pure positron" emitters, have isotropic coincident gamma rays that give rise to misrepresented events in the sinogram and in the resulting reconstructed image. Of particular interest are (10)C, (14)O, (38)K, (52m)Mn, (60)Cu, (61)Cu, (94m)Tc, and (124)I, each having high-energy gammas that are Compton-scattered down into the 511 keV window. The problems arising from the "third gamma," and its accommodation by standard scatter correction algorithms, were studied empirically, employing three scanner models (CTI 933/04, CTI HR+ and GE Advance), imaging three phantoms (line source, NEMA scatter and contrast/detail), with (18)F or (38)K and (72)As mimicking (14)O and (10)C, respectively, in 2-D and 3-D modes. Five findings emerge directly from the image analysis. The third gamma: 1) does, obviously, tax the single event rate of the PET scanners, particularly in the absence of septa, from activity outside of the axial field of view; 2) does, therefore, tax the random rate, which is second order in singles, although the gamma is a prompt coincidence partner; 3) does enter the sinogram as an additional flat background, like randoms, but unlike scatter; 4) is not seriously misrepresented by the scatter algorithm which fits the correction to the wings of the sinogram; and 5) does introduce additional statistical noise from the subsequent subtraction, but does not seriously compromise the detectability of lesions as seen in the contrast/detail phantom. As a safeguard against the loss of accuracy in image quantitation, fiducial sources of known activity are included in the field of view alongside of the subject. With this precaution, a much wider selection of imaging agents can enjoy the advantages of positron emission tomography.

  12. SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, C; Qi, H; Chen, Z

    Purpose: In computed tomography (CT) systems, CT images with ring artifacts will be reconstructed when some adjacent bins of the detector don’t work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming at estimating the missing projection data accurately and thus removing the ring artifacts of CT images. Methods: The method consists of ten steps: 1) Identification of abnormal pixel line in projection sinogram; 2) Linear interpolation within the pixel line of projection sinogram; 3) FBP reconstruction using interpolated projection data; 4) Filtering FBP image using mean filter; 5) Forward projection of filtered FBP image; 6) Subtraction of forwarded projection from original projection; 7) Linear interpolation of abnormal pixel line area in the subtraction projection; 8) Adding the interpolated subtraction projection to the forwarded projection; 9) FBP reconstruction using corrected projection data; 10) Return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead bins of the CT detector on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore projection data and reconstruct ring artifact-free images when the dead bin rating is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the rating of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
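    Steps 1)-2), locating abnormal detector bins and bridging them in the sinogram, reduce to per-row linear interpolation across the dead columns. A minimal sketch (names and layout assumed: projection angles along rows, detector bins along columns):

```python
import numpy as np

def interpolate_dead_bins(sinogram, dead_bins):
    """Linearly interpolate dead detector columns along each projection
    row of the sinogram (angles x bins)."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    good = np.setdiff1d(bins, dead_bins)
    for row in out:                      # each row is one projection angle
        row[dead_bins] = np.interp(dead_bins, good, row[good])
    return out
```

This is only the initializer; the paper then iterates FBP, filtering, and re-projection to refine the estimate.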

  13. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. 
In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
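    The pre-reconstruction warping idea can be reduced to a toy sketch. This is grossly simplified for illustration: motion per respiratory gate is modeled as a known integer detector-bin shift, so warping each gated sinogram back to the reference gate and summing recovers the whole count statistics (real motion models are deformable and non-integer).

```python
import numpy as np

def combine_gated_sinograms(gated, bin_shifts):
    """Toy pre-reconstruction motion compensation: undo a known integer
    detector-bin shift for each gate, then sum counts across gates."""
    total = np.zeros_like(gated[0], dtype=float)
    for sino, shift in zip(gated, bin_shifts):
        total += np.roll(sino, -shift, axis=1)   # warp back to reference gate
    return total
```

The payoff is the one described in the record: a single sinogram with the full count statistics instead of several low-count gated ones.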

  14. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  15. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  16. Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Warren G.; Jirasek, Andrew, E-mail: jirasek@uvic.ca; Wells, Derek M.

    2014-11-01

Purpose: The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. Methods: A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm{sup 2} square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky–Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. Results: In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG. For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. Conclusions: This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.

  17. Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters.

    PubMed

    Campbell, Warren G; Wells, Derek M; Jirasek, Andrew

    2014-11-01

The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm(2) square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky-Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG.
For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
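The sinogram-space filtering step lends itself to a small sketch: a hypothetical 1-D projection with one spike-like rayline error is cleaned by an iterative Savitzky-Golay pass. The window, polynomial order, threshold, and iteration count are all assumptions here; `scipy.signal.savgol_filter` stands in for the paper's ISG routine:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical 1-D projection: a smooth profile plus one rayline-error spike
proj = np.sin(np.linspace(0, np.pi, 101))
proj[50] += 0.5

def iterative_sg(p, window=11, polyorder=3, thresh=0.15, max_iter=10):
    """Repeatedly replace samples that deviate from a Savitzky-Golay
    smoothed curve (all parameters here are illustrative assumptions)."""
    x = p.copy()
    for _ in range(max_iter):
        smooth = savgol_filter(x, window, polyorder)
        bad = np.abs(x - smooth) > thresh
        if not bad.any():
            break
        x[bad] = smooth[bad]
    return x

filtered = iterative_sg(proj)
```

Only samples flagged as outliers are touched, so smooth regions of the projection pass through unchanged; in this sketch the spike is pulled back toward the underlying profile while the rest of the projection is bit-identical to the input.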

  18. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.

    PubMed

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-07

Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans-each containing 1/8th of the total number of events-were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis.
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM.
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.

  19. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies

    PubMed Central

    Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong

    2017-01-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843

  20. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies

    NASA Astrophysics Data System (ADS)

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM.
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
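The indirect method's per-voxel kinetic fit can be sketched for a single TAC, assuming a synthetic arterial input function and ignoring the study's spillover terms and frame weighting:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 16)            # 16 frames over ~5 min, as in the study
dt = t[1] - t[0]
Ca = 10 * t * np.exp(-2 * t)         # hypothetical arterial input function

def one_tissue(t, K1, k2):
    """One-tissue compartment model: C_T(t) = K1 * (Ca conv exp(-k2 t))."""
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[:t.size] * dt

rng = np.random.default_rng(0)
tac = one_tissue(t, 0.9, 0.3) + 0.01 * rng.standard_normal(t.size)

# Indirect method: least-squares fit of (K1, k2) to the reconstructed TAC
(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, tac, p0=[0.5, 0.5])
```

The direct method instead folds this compartment model into the reconstruction objective, estimating (K1, k2) against the sinogram likelihood rather than against noisy reconstructed TACs.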

  1. Metallic artifact mitigation and organ-constrained tissue assignment for Monte Carlo calculations of permanent implant lung brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca

    2014-01-15

Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Results: Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for {sup 103}Pd seeds and smallest but still considerable differences for {sup 131}Cs seeds. Conclusions: Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
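A sketch of the simple-threshold-replacement idea on a toy volume; the threshold, neighborhood size, and CT numbers are illustrative assumptions, and `scipy.ndimage.median_filter` supplies the local estimate of the true value:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
ct = rng.normal(-700.0, 20.0, size=(9, 9, 9))   # lung-like CT numbers (HU)
ct[4, 4, 4] = 3000.0                            # bright metallic-seed artifact

def simple_threshold_replace(vol, thresh=300.0, size=3):
    """STR-like correction: voxels above thresh take the local median
    (threshold and neighborhood size are assumed values)."""
    med = median_filter(vol, size=size)
    out = vol.copy()
    mask = out > thresh
    out[mask] = med[mask]
    return out

corrected = simple_threshold_replace(ct)
```

In this sketch the artifact voxel is pulled back to a lung-like value while the surrounding volume is untouched; the paper's sinogram-based correction instead repairs the projections before reconstructing.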

  2. Tchebichef moment based restoration of Gaussian blurred images.

    PubMed

    Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C

    2016-11-10

    With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
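The reblur-difference feature can be sketched as follows; simple low-order geometric moments stand in for the Tchebichef moments of the paper, and the reblur sigma is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_order_moments(img, order=3):
    """Low-order geometric moments (a stand-in for Tchebichef moments)."""
    h, w = img.shape
    y = np.linspace(-1, 1, h)[:, None]
    x = np.linspace(-1, 1, w)[None, :]
    return np.array([(img * y**p * x**q).mean()
                     for p in range(order) for q in range(order)])

def reblur_feature(img, sigma=1.5):
    """Difference between moments of the image and of a reblurred copy;
    this is the kind of feature vector fed to the learning machine."""
    return low_order_moments(img) - low_order_moments(gaussian_filter(img, sigma))

f_flat = reblur_feature(np.full((32, 32), 5.0))   # blur-invariant input -> ~0
```

A constant image is unchanged by Gaussian reblurring, so its feature vector is essentially zero; structured images yield nonzero features whose magnitude depends on how much blur the image already carries, which is what the trained regressor exploits.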

  3. Processing of configural and componential information in face-selective cortical areas.

    PubMed

    Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G

    2014-01-01

    We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.

  4. Effective 3-D shape discrimination survives retinal blur.

    PubMed

    Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M

    2010-08-01

    A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.

  5. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
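For a single L1-regularized least-squares subproblem of the kind that appears in such kernel estimation, the ADMM machinery reduces to a quadratic solve plus a soft-thresholding update. A minimal sketch (the paper's actual model adds the squared-L2 derivative term and a TGV restoration stage, both omitted here):

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau*||.||_1, i.e. the ADMM z-update for an L1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_l1_ls(A, b, lam=0.1, rho=1.0, iters=300):
    """min 0.5*||Ax-b||^2 + lam*||x||_1 via ADMM (illustrative sketch)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic x-update
        z = soft_threshold(x + u, lam / rho)            # L1 prox z-update
        u = u + x - z                                   # dual ascent
    return z

# Sanity check against the closed-form solution when A is the identity
b = np.array([2.0, -0.05, 0.7])
x_hat = admm_l1_ls(np.eye(3), b, lam=0.1)
```

When A is the identity the minimizer is exactly `soft_threshold(b, lam)`, so small entries of b are driven to zero, which is the sparsifying behavior the kernel prior relies on.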

  6. Vergence accommodation and monocular closed loop blur accommodation have similar dynamic characteristics.

    PubMed

    Suryakumar, Rajaraman; Meyers, Jason P; Irving, Elizabeth L; Bobier, William R

    2007-02-01

    Retinal blur and disparity are two different sensory signals known to cause a change in accommodative response. These inputs have differing neurological correlates that feed into a final common pathway. The purpose of this study was to investigate the dynamic properties of monocular blur driven accommodation and binocular disparity driven vergence-accommodation (VA) in human subjects. The results show that when response amplitudes are matched, blur accommodation and VA share similar dynamic properties.

  7. Distinguishing dose, focus, and blur for lithography characterization and control

    NASA Astrophysics Data System (ADS)

    Ausschnitt, Christopher P.; Brunner, Timothy A.

    2007-03-01

    We derive a physical model to describe the dependence of pattern dimensions on dose, defocus and blur. The coefficients of our model are constants of a given lithographic process. Model inversion applied to dimensional measurements then determines effective dose, defocus and blur for wafers patterned with the same process. In practice, our approach entails the measurement of proximate grating targets of differing dose and focus sensitivity. In our embodiment, the measured attribute of one target is exclusively sensitive to dose, whereas the measured attributes of a second target are distinctly sensitive to defocus and blur. On step-and-scan exposure tools, z-blur is varied in a controlled manner by adjusting the across slit tilt of the image plane. The effects of z-blur and x,y-blur are shown to be equivalent. Furthermore, the exposure slit width is shown to determine the tilt response of the grating attributes. Thus, the response of the measured attributes can be characterized by a conventional focus-exposure matrix (FEM), over which the exposure tool settings are intentionally changed. The model coefficients are determined by a fit to the measured FEM response. The model then fully defines the response for wafers processed under "fixed" dose, focus and blur conditions. Model inversion applied to measurements from the same targets on all such wafers enables the simultaneous determination of effective dose and focus/tilt (DaFT) at each measurement site.
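The measure-then-invert scheme can be sketched with an assumed model form in which blur adds to defocus in quadrature (consistent with the z-/x,y-blur equivalence noted above); the coefficients below are hypothetical calibration values of the kind a real FEM fit would supply, not the paper's expressions:

```python
import numpy as np

# Assumed target responses (illustrative forms, hypothetical coefficients):
#   dose-only target:  w1 = a0 + a1*E
#   focus target:      w2 = b0 + b1*E + b2*(F**2 + B**2)
a0, a1 = 40.0, 2.5
b0, b1, b2 = 60.0, 1.5, -300.0
B = 0.02                                 # fixed process blur, assumed known

def measure(E, F):
    """Forward model: the two target attributes for dose E and defocus F."""
    return a0 + a1 * E, b0 + b1 * E + b2 * (F**2 + B**2)

def invert(w1, w2):
    """Model inversion: effective dose and defocus magnitude from readings."""
    E = (w1 - a0) / a1
    F2 = (w2 - b0 - b1 * E) / b2 - B**2
    return E, float(np.sqrt(max(F2, 0.0)))

E_hat, F_hat = invert(*measure(30.0, 0.05))
```

The dose-only target pins down E first; F then follows from the focus-sensitive target, mirroring how the paper separates dose from focus/tilt at each measurement site. Note that only |F| is recoverable from a quadratic focus response.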

  8. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernels, and can achieve favorable deblurring quality on synthetic and real blurry images.
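    The idea of restricting deconvolution to the reliable part of the kernel spectrum can be sketched as follows. This is a minimal illustration, not the authors' model: the hard binary map, the magnitude threshold `thresh`, and the Wiener-style regularizer `eps` are all simplifying assumptions.

```python
import numpy as np

def partial_deconvolve(blurred, kernel, thresh=0.05, eps=1e-3):
    """Deconvolve only at 'reliable' Fourier entries of the kernel.

    Entries where |K(f)| falls below `thresh` are treated as unreliable
    (likely dominated by kernel-estimation error) and left untouched.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    reliable = np.abs(K) > thresh          # the 'partial map'
    X = B.copy()
    # Wiener-style inverse applied only on the reliable support
    X[reliable] = B[reliable] * np.conj(K[reliable]) / (np.abs(K[reliable])**2 + eps)
    return np.real(np.fft.ifft2(X))
```

    Because unreliable frequencies are simply passed through, kernel errors concentrated there cannot be amplified into ringing, at the cost of leaving some blur unremoved.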

  9. Sub-Lexical Phonological and Semantic Processing of Semantic Radicals: A Primed Naming Study

    ERIC Educational Resources Information Center

    Zhou, Lin; Peng, Gang; Zheng, Hong-Ying; Su, I-Fan; Wang, William S.-Y.

    2013-01-01

    Most sinograms (i.e., Chinese characters) are phonograms (phonetic compounds). A phonogram is composed of a semantic radical and a phonetic radical, with the former usually implying the meaning of the phonogram, and the latter providing cues to its pronunciation. This study focused on the sub-lexical processing of semantic radicals which are…

  10. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
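    The deblurring step at the heart of MDD can be illustrated in one dimension: the correlation function behaves like a Green's function convolved with the interferometric point-spread function, which can be deconvolved from the data. This is a hedged sketch — a damped spectral division stands in for the full multidimensional inversion, and the `eps` stabilizer is an assumption.

```python
import numpy as np

def mdd_deblur(correlation, psf, eps=1e-6):
    """Recover a Green's-function estimate from a correlation function by
    deconvolving the interferometric point-spread function (1-D sketch)."""
    C = np.fft.fft(correlation)
    P = np.fft.fft(psf, n=len(correlation))
    # damped least-squares (Tikhonov) spectral division
    G = C * np.conj(P) / (np.abs(P)**2 + eps)
    return np.real(np.fft.ifft(G))
```

    In practice the point-spread function is itself estimated from the observed data, so no knowledge of the sources or the medium is required.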

  11. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer artifacts compared to state-of-the-art methods.
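    One simple way to realize such a weighting, sketched here under the assumption of a known PSF and a hard saturation threshold (the paper's actual weighted matrix and multi-frame scheme are more elaborate), is a Richardson-Lucy update that excludes saturated pixels from the data-fit term:

```python
import numpy as np
from scipy.signal import fftconvolve

def weighted_rl(blurred, psf, n_iter=30, sat_level=0.99):
    """Richardson-Lucy deconvolution that down-weights saturated pixels.

    Pixels at/above `sat_level` violate the linear blur model, so they get
    zero weight in the multiplicative update; the (1 - w) term keeps the
    update neutral there. `psf` is assumed normalized to sum to 1.
    """
    w = (blurred < sat_level).astype(float)   # 0 where saturated
    psf_flip = psf[::-1, ::-1]
    est = np.full_like(blurred, 0.5)
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        ratio = w * blurred / np.maximum(conv, 1e-8) + (1 - w)
        est *= fftconvolve(ratio, psf_flip, mode='same')
    return np.clip(est, 0, None)
```

    With no saturated pixels (`w` all ones) this reduces to standard Richardson-Lucy.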

  12. Simulated disparity and peripheral blur interact during binocular fusion.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2014-07-17

    We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.

  13. Simulated disparity and peripheral blur interact during binocular fusion

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2014-01-01

    We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. PMID:25034260

  14. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods in cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect of the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods: 1) inverse filtering; 2) Wiener; and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of the PSF and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better for wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
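    As a toy version of this setup (1-D, noise-free, with a hypothetical long-tail PSF and a constant noise-to-signal regularizer), Wiener deconvolution recovers a blurred scatter profile far better than using the blocked-region signal directly:

```python
import numpy as np

def wiener_deconv(signal, psf, nsr=1e-3):
    """1-D Wiener deconvolution with a constant noise-to-signal ratio."""
    S = np.fft.fft(signal)
    P = np.fft.fft(psf, n=len(signal))
    return np.real(np.fft.ifft(S * np.conj(P) / (np.abs(P)**2 + nsr)))
```

    Since scatter is spatially smooth, its energy sits at low frequencies where the PSF spectrum is well conditioned, which is why deconvolution works well here even though the PSF has long tails.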

  15. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation, and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation, the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
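    The anisotropic Gaussian kernel model is easy to write down. A sketch with hypothetical widths (the paper estimates the σ values rather than fixing them):

```python
import numpy as np

def aniso_gauss(shape, sigma_t, sigma_a):
    """Normalized anisotropic Gaussian blur kernel with one width for the
    transverse (x, y) plane and another for the axial (z) direction."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) - s // 2 for s in shape],
                             indexing='ij')
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma_t**2) - zz**2 / (2 * sigma_a**2))
    return k / k.sum()
```

    A larger `sigma_a` than `sigma_t` encodes the poorer axial resolution of the scanner.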

  16. MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, S; Clark, J; Mawlawi, O

    2014-06-15

    Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y=Hm+n, where m is the desired image, H is a blur kernel, n is Poisson noise, and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has many repeated patterns, edges, textures, and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4mm; Uniform 5×5, σ=2.9mm) with additive Poisson noise (blurred). Image recovery was performed once with the kernel type included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (Uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+ respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+ respectively. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8 respectively. Conclusion: PET image deblurring using dictionary learning seems to be a good approach to restore image resolution in the presence of Poisson noise. GE Health Care.

  17. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.

  18. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77+/-8.44% and 91+/-4.13% respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.

  19. Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.

    PubMed

    Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S

    2010-03-01

    This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  20. Evidence of collaboration, pooling of resources, learning and role blurring in interprofessional healthcare teams: a realist synthesis.

    PubMed

    Sims, Sarah; Hewitt, Gillian; Harris, Ruth

    2015-01-01

    Interprofessional teamwork has become an integral feature of healthcare delivery in a wide range of conditions and services in many countries. Many assumptions are made in healthcare literature and policy about how interprofessional teams function and about the outcomes of interprofessional teamwork. Realist synthesis is an approach to reviewing research evidence on complex interventions which seeks to explore these assumptions. It does this by unpacking the mechanisms of an intervention, exploring the contexts which trigger or deactivate them, and connecting these contexts and mechanisms to their subsequent outcomes. This is the second in a series of four papers reporting a realist synthesis of interprofessional teamworking. The paper discusses four of the 13 mechanisms identified in the synthesis: collaboration and coordination; pooling of resources; individual learning; and role blurring. These mechanisms together capture the day-to-day functioning of teams and the dependence of that on members' understanding each other's skills and knowledge and learning from them. This synthesis found empirical evidence to support all four mechanisms, which tentatively suggests that collaboration, pooling, learning, and role blurring are all underlying processes of interprofessional teamwork. However, the supporting evidence for individual learning was relatively weak, therefore there may be assumptions made about learning within healthcare literature and policy that are not founded upon strong empirical evidence. There is a need for more robust research on individual learning to further understand its relationship with interprofessional teamworking in healthcare.

  1. [Fuzzy logic in urology. How to reason in inaccurate terms].

    PubMed

    Vírseda Chamorro, Miguel; Salinas Casado, Jesus; Vázquez Alba, David

    2004-05-01

    Western thinking is basically binary, based on opposites, and classical logic constitutes a systematization of this thinking. The methods of pure sciences such as physics are based on systematic measurement, analysis, and synthesis; in this way, nature is described by deterministic differential equations. Medical knowledge does not adjust well to the deterministic equations of physics, so probabilistic methods are employed. However, this approach is not free of problems, both theoretical and practical, and it is often not even possible to know with certainty the probabilities of most events. On the other hand, the application of binary logic to medicine in general, and to urology in particular, runs into serious difficulties, such as the imprecise character of the definitions of most diseases and the uncertainty associated with most medical acts. These are responsible for the fact that many medical recommendations are made in literary language that is inaccurate, inconsistent, and incoherent. Fuzzy logic is a way of reasoning coherently with imprecise concepts. It was proposed by Lotfi Zadeh in 1965 and is based on two principles: the theory of fuzzy sets and the use of fuzzy rules. A fuzzy set is one whose elements have a degree of membership between 0 and 1; each fuzzy set is associated with an imprecise property or linguistic variable. Fuzzy rules use the principles of classical logic adapted to fuzzy sets, taking the degree of membership of each element in the reference fuzzy set as the truth value. Fuzzy logic makes it possible to formulate coherent urologic recommendations (e.g., in which patients is PSA testing indicated? what should be done when PSA is elevated?) and to perform diagnosis adapted to the uncertainty of diagnostic tests (e.g., data obtained from pressure-flow studies in females).
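    A fuzzy set in this sense is just a membership function taking values between 0 and 1. A minimal sketch — the PSA cut-off numbers below are purely illustrative, not clinical guidance:

```python
def elevated_psa(psa):
    """Degree of membership in the fuzzy set 'elevated PSA' (ng/mL):
    a ramp from 0 below 2.5 to full membership above 6.0, instead of a
    crisp yes/no cut-off at a single value."""
    lo, hi = 2.5, 6.0
    return min(1.0, max(0.0, (psa - lo) / (hi - lo)))
```

    A fuzzy rule ("if PSA is elevated, then recommend biopsy") would then propagate this degree of membership as its truth value rather than forcing a binary decision.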

  2. Edge roughness evaluation method for quantifying at-size beam blur in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Masaki; Moriya, Shigeru

    2000-07-01

    At-size beam blur at any given pattern size of an electron-beam (EB) direct writer, HL800D, was quantified using the new edge roughness evaluation (ERE) method to optimize the electron-optical system. We characterized the two-dimensional beam-blur dependence on the electron deflection length of the EB direct writer. The results indicate that the beam blur ranged from 45 nm to 56 nm in a 2520-micrometer-square deflection field. The new ERE method is based on the experimental finding that the line edge roughness of a resist pattern is inversely proportional to the slope of the Gaussian-distributed quasi-beam-profile (QBP) proposed in this paper. The QBP includes the effects of beam blur, electron forward scattering, acid diffusion in chemically amplified resist (CAR), the development process, and aperture mask quality. The application of the ERE method to investigating beam-blur fluctuation demonstrates its validity in characterizing the electron-optical column conditions of EB projection systems such as SCALPEL and PREVAIL.

  3. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
    Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast, and the returned values agree with visual inspection, making the algorithm applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application of the algorithm.
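    The principle of judging blur by comparison with a synthetically degraded copy of the same image can be sketched like this (a generic re-blur metric, not the published SIEDS formula; the Gaussian σ is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_score(img, sigma=2.0):
    """Compare edge energy before and after re-blurring the image itself.
    A sharp input loses much more edge energy than an already-blurred one,
    so lower scores indicate a blurrier input."""
    def edge_energy(a):
        gy, gx = np.gradient(a.astype(float))
        return np.sqrt(gx**2 + gy**2).std()
    return edge_energy(img) - edge_energy(gaussian_filter(img.astype(float), sigma))
```

    As in the paper, such a score is only meaningful relative to other scores from the same dataset, since scene content also affects edge energy.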

  4. Storying Literacies, Reimagining Classrooms: Teaching, Research, and Writing as Blurred Translating

    ERIC Educational Resources Information Center

    McManimon, Shannon K.

    2014-01-01

    I theorize teaching and researching as practices of "blurred translating" that center antioppressive education (Kumashiro, 2002) and storytelling (e.g., Frank, 2010; Zipes, 1995, 2004). Based in listening, research and teaching as blurred translating are relational, contextual, and ongoing processes oriented toward transformation and…

  5. Image quality affected by diffraction of aperture structure arrangement in transparent active-matrix organic light-emitting diode displays.

    PubMed

    Tsai, Yu-Hsiang; Huang, Mao-Hsiu; Jeng, Wei-de; Huang, Ting-Wei; Lo, Kuo-Lung; Ou-Yang, Mang

    2015-10-01

    Transparent displays are one of the main technologies in next-generation displays, especially for augmented reality applications. An aperture structure is attached to each display pixel to partition it into transparent and black regions. However, diffraction blur caused by the aperture structure typically degrades the transparent image when light from a background object passes through the finite aperture window. In this paper, the diffraction effect of an active-matrix organic light-emitting diode (AMOLED) display is studied. Several aperture structures have been proposed and implemented. Based on theoretical analysis and simulation, an appropriate aperture structure effectively reduces the blur. The analysis data are also consistent with the experimental results. Compared with the various transparent aperture structures on the AMOLED, the diffraction width (the zero-energy position of the diffraction pattern) of the optimized aperture structure can be reduced by 63% and 31% in the x and y directions in CASE 3. Combined with a lenticular lens on the aperture structure, the reduction reaches 77% and 54% of the diffraction width in the x and y directions. Modulation transfer function measurements and practical images are provided to evaluate the reduction of image blur.
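    The dependence of diffraction width on the aperture window can be checked with a 1-D Fraunhofer sketch: the far-field pattern is the FFT of the aperture, and a wider aperture gives a narrower central lobe. Sampling and scales here are arbitrary assumptions, not the paper's optical model.

```python
import numpy as np

def diffraction_width(aperture):
    """Far-field (Fraunhofer) intensity of a 1-D aperture via FFT; returns
    the distance (in frequency samples) from the central peak to the first
    zero of the pattern, a proxy for the paper's 'diffraction width'."""
    field = np.fft.fftshift(np.fft.fft(aperture, n=4096))
    inten = np.abs(field)**2
    c = len(inten) // 2
    # walk outward from the central peak to the first (near-)zero
    i = c
    while i < len(inten) - 1 and inten[i + 1] < inten[i]:
        i += 1
    return i - c
```

    This inverse relationship between aperture size and diffraction width is what drives the trade-off between transmittance and blur in the aperture designs.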

  6. Blur and the School Library Media Specialist.

    ERIC Educational Resources Information Center

    Barron, Daniel D.

    1999-01-01

    Discusses the concept of "Blur" (described in "Blur: The Speed of Change in the Connected Economy") and what the technology-based, expanded connectivity means for K-12 educators and information specialists. Reviews online and print resources that deal with the rapid development of technology and its effects on society. (AEF)

  7. Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.

    PubMed

    Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D

    2015-01-01

    Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.

  8. Image thumbnails that represent blur and noise.

    PubMed

    Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H

    2010-02-01

    The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise generating component improves the results for noisy images, but degrades the results for textured images. The blur generating component of the new thumbnails may always be used to advantage. The decision to use the noise generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
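    The noise-generating component can be mimicked with a simple sketch: estimate the noise level from the original, then re-render matched noise onto the standard thumbnail. The σ-estimation via a smoothed residual and the plain k×k averaging below are assumptions, not the paper's multirate pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_preserving_thumbnail(img, k=4, seed=0):
    """Standard k-by-k box thumbnail plus re-rendered noise whose standard
    deviation is estimated from the residual between the image and a
    smoothed copy of itself."""
    img = np.asarray(img, dtype=float)
    noise_std = (img - gaussian_filter(img, 1.5)).std()
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    thumb = img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return thumb + rng.normal(0.0, noise_std, thumb.shape)
```

    Plain averaging suppresses noise by roughly a factor of k, which is exactly why standard thumbnails make noisy originals look clean; re-rendering the noise restores the visual impression.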

  9. Management of thoracic empyema.

    PubMed

    Sherman, M M; Subramanian, V; Berger, R L

    1977-04-01

    Over a ten-year period, 102 patients with thoracic empyemata were treated at Boston City Hospital. Only three patients died from the pleural infection, while twenty-six succumbed to the associated diseases. Principles of management include: (1) thoracentesis; (2) antibiotics; (3) closed-tube thoracostomy; (4) sinogram; (5) open drainage; (6) empyemectomy and decortication in selected patients; and (7) bronchoscopy and barium swallow when the etiology is uncertain.

  10. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pita-Machado, Reinado; Perez-Diaz, Marlen, E-mail: mperez@uclv.edu.cu; Lorenzo-Ginori, Juan V., E-mail: mperez@uclv.edu.cu

    Wavelet transform based de-noising, like wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods which work in sinogram space do not have this problem, because there they always work over a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure. This can lead to some false positive evaluations. The purpose of our present work is to compare different wavelet shrinkage de-noising filters applied in sinogram space to reduce noise, particularly in images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
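    A minimal sketch of wavelet-shrinkage de-noising of the kind compared in this work, applied here to a synthetic 1-D signal standing in for a sinogram row; the wavelet family (db4) and the universal soft threshold are illustrative choices, not necessarily the filters the paper evaluates:

```python
# Sketch: wavelet shrinkage with a universal soft threshold (illustrative
# choices of wavelet and threshold rule; not the paper's exact filters).
import numpy as np
import pywt

def wavelet_shrink(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD rule)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
noisy = clean + 0.3 * rng.standard_normal(512)
denoised = wavelet_shrink(noisy)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

    In the sinogram-space setting described above, each projection row would be filtered this way before reconstruction, where the noise statistics are still well characterized.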

  11. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries helps to estimate the blur kernel, and thus assists in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to a high performance for tumor segmentation in PET.
    This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  12. Effects of Different Levels of Refractive Blur on Nighttime Pedestrian Visibility.

    PubMed

    Wood, Joanne M; Marszalek, Ralph; Carberry, Trent; Lacherez, Philippe; Collins, Michael J

    2015-07-01

    The aim of this study was to systematically investigate the effect of different levels of refractive blur and driver age on nighttime pedestrian recognition and determine whether clothing that has been shown to improve pedestrian conspicuity is robust to the effects of blur. Nighttime pedestrian recognition was measured for 24 visually normal participants (12 younger, mean age = 24.9 ± 4.5 years, and 12 older adults, mean age = 77.6 ± 5.7 years) for three levels of binocular blur (+0.50 diopter [D], +1.00 D, +2.00 D) compared with baseline (optimal refractive correction). Pedestrians walked in place on a closed road circuit and wore one of three clothing conditions: everyday clothing, a retro-reflective vest, and retro-reflective tape positioned on the extremities in a configuration that conveyed biological motion (known as "biomotion"); the order of conditions was randomized among participants. Pedestrian recognition distances were recorded for each blur and pedestrian clothing combination while participants drove an instrumented vehicle around a closed road course. The recognition distances for pedestrians were significantly reduced (P < 0.05) by all levels of blur compared with baseline. Pedestrians wearing biomotion clothing were recognized at significantly longer distances than for the other clothing configurations in all blur conditions. However, these effects were smaller for the older adults, who had much shorter recognition distances for all conditions tested. In summary, even small amounts of blur had a significant detrimental effect on nighttime pedestrian recognition. Biomotion retro-reflective clothing was effective, even under moderately degraded visibility conditions, for both young and older drivers.

  13. No-reference multiscale blur detection tool for content based image retrieval

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark

    2014-06-01

    In recent years, digital cameras have been widely used for image capturing. These devices are embedded in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference image; in that case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, decomposing it into an approximate image and three detail sub-images, namely horizontal, vertical, and diagonal images. We then focus on noise-measuring the detail images and blur-measuring the approximate image to assess the image quality, computing noise mean and noise ratio from the detail images, and blur mean and blur ratio from the approximate image. The Multi-scale Blur Detection (MBD) metric provides both an assessment of the noise and blur content. These values are weighted based on a linear regression against full-reference y values. From these statistics, we can judge image quality against those of normal, useful images without needing a reference image. We then test the validity of the obtained weights by R2 analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
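    The decomposition step can be sketched with a one-level 2-D DWT; the Haar wavelet and the simple band statistics below are illustrative stand-ins for the paper's noise and blur measures:

```python
# Sketch: one-level 2-D DWT splits an image into an approximation band and
# three detail bands (horizontal, vertical, diagonal). The band statistics
# here are illustrative, not the MBD metric itself.
import numpy as np
import pywt

rng = np.random.default_rng(2)
image = rng.random((128, 128))
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")

# Noise-like statistic from the detail bands, blur-like statistic from the
# approximation band (hypothetical definitions for illustration)
noise_mean = np.mean([np.abs(horiz).mean(),
                      np.abs(vert).mean(),
                      np.abs(diag).mean()])
blur_mean = np.abs(approx).mean()
assert approx.shape == (64, 64) and noise_mean > 0.0
```

    The detail bands carry the high-frequency content where noise dominates, while blur mostly manifests in the low-pass approximation band, which is why the two kinds of statistics are computed on different sub-images.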

  14. Kinematic model for the space-variant image motion of star sensors under dynamical conditions

    NASA Astrophysics Data System (ADS)

    Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun

    2015-06-01

    A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to slightly different orientations and extents of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error below 0.002 pixel over eight successive iterations is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating compensation algorithms for motion-blurred images.
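    A minimal sketch of Richardson-Lucy deconvolution with a simple convergence check, in the spirit of the termination criterion described above; the 1-D point source, the kernel, and the tolerance are illustrative assumptions, not the paper's space-variant PSF:

```python
# Sketch: Richardson-Lucy deblurring of a 1-D "star" profile with an
# early-stopping check. Kernel and tolerance are illustrative.
import numpy as np

def richardson_lucy(blurred, psf, iters=200, tol=1e-8):
    psf_flip = psf[::-1]
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        new = est * np.convolve(ratio, psf_flip, mode="same")
        if np.abs(new - est).max() < tol:   # termination criterion
            est = new
            break
        est = new
    return est

psf = np.array([0.25, 0.5, 0.25])
star = np.zeros(21); star[10] = 1.0               # ideal point source
blurred = np.convolve(star, psf, mode="same")
restored = richardson_lucy(blurred, psf)
assert restored[10] > blurred[10]                 # energy re-concentrated
```

    In the paper's setting the PSF varies over the focal plane, so the restoration would be applied with a locally estimated kernel rather than a single global one.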

  15. Modeling blur in various detector geometries for MeV radiography

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Watson, Scott A.; Hunter, James F.

    2017-03-01

    Monte Carlo transport codes have been used to model the detector blur and energy deposition in various detector geometries for applications in MeV radiography. Segmented scintillating detectors, in which low-Z scintillators are combined with a high-Z metal matrix, can be designed so that the resolution increases with increasing metal fraction. The combination of various types of metal intensification screens and storage phosphor imaging plates has also been studied. A storage phosphor coated directly onto a metal intensification screen has superior performance over a commercial plate. Stacks of storage phosphor plates and tantalum intensification screens show an increase in energy deposited and detective quantum efficiency with increasing plate number, at the expense of resolution. Select detector geometries were tested by comparing simulated and experimental modulation transfer functions to validate the approach.

  16. Preliminary Validation of the Work-Family Integration-Blurring Scale

    ERIC Educational Resources Information Center

    Desrochers, Stephan; Hilton, Jeanne M.; Larwood, Laurie

    2005-01-01

    Several studies of telecommuting and working at home have alluded to the blurring line between work and family that can result from such highly integrated work-family arrangements. However, little is known about working parents' perceptions of the integration and blurring of their work and family roles. In this study, the authors created and…

  17. [High resolution reconstruction of PET images using the iterative OSEM algorithm].

    PubMed

    Doll, J; Henze, M; Bublitz, O; Werling, A; Adam, L E; Haberkorn, U; Semmler, W; Brix, G

    2004-06-01

    Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. All measurements were performed on the whole-body PET system ECAT EXACT HR(+) in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the usage of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed with a cylinder phantom, the hotspot Jaszczack phantom, and the 3D Hoffmann brain phantom as well as different patient examinations were analyzed. Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or even the occurrence of image artifacts. The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
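    The idea of building the measured blurring into iterative reconstruction can be sketched with a toy 1-D ML-EM loop (the single-subset case of OSEM); the Gaussian system model standing in for the measured LSF, and all sizes, are illustrative assumptions:

```python
# Toy sketch of resolution modeling in ML-EM/OSEM reconstruction: the
# system matrix applies a Gaussian blur analogous to the measured LSF.
# A noiseless 1-D problem stands in for PET data (illustrative only).
import numpy as np

def gaussian_matrix(n, fwhm):
    sigma = fwhm / 2.355
    x = np.arange(n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
    return A / A.sum(axis=1, keepdims=True)   # each row sums to 1

def mlem(A, data, iters=100):
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image
    for _ in range(iters):
        x *= A.T @ (data / np.maximum(A @ x, 1e-12)) / sens
    return x

A = gaussian_matrix(64, fwhm=5.0)             # LSF-like system model
truth = np.zeros(64); truth[30] = 100.0       # point-like activity
data = A @ truth                              # noiseless "projection"
recon = mlem(A, data)
assert recon.argmax() == 30                   # source correctly localized
```

    Because the blur is inside the forward model, the iterations progressively undo it, which mirrors how incorporating the LSF into the OSEM iteration formula improves the reconstructed resolution.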

  18. Materiality matters: Blurred boundaries and the domestication of functional foods.

    PubMed

    Weiner, Kate; Will, Catherine

    2015-06-01

    Previous scholarship on novel foods, including functional foods, has suggested that they are difficult to categorise for both regulators and users. It is argued that they blur the boundary between 'food' and 'drug' and that uncertainties about the products create 'experimental' or 'restless' approaches to consumption. We investigate these uncertainties drawing on data about the use of functional foods containing phytosterols, which are licensed for sale in the EU for people wishing to reduce their cholesterol. We start from an interest in the products as material objects and their incorporation into everyday practices. We consider the scripts encoded in the physical form of the products through their regulation, production and packaging and find that these scripts shape but do not determine their use. The domestication of phytosterols involves bundling the products together with other objects (pills, supplements, foodstuffs). Considering their incorporation into different systems of objects offers new understandings of the products as foods or drugs. In their accounts of their practices, consumers appear to be relatively untroubled by uncertainties about the character of the products. We conclude that attending to materials and practices offers a productive way to open up and interrogate the idea of categorical uncertainties surrounding new food products.

  19. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g., depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
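    The recursive trick such an accelerator exploits can be sketched in a few lines: convolution with a two-sided exponential kernel reduces to one causal and one anti-causal first-order recursion, O(n) per row instead of O(n·k); the smoothing factor below is an illustrative assumption, not a parameter from the paper:

```python
# Sketch: recursive approximation of exponential smoothing via a causal
# pass followed by an anti-causal pass (illustrative smoothing factor).
import numpy as np

def exp_filter(signal, alpha):
    """O(n) two-pass recursive exponential filter."""
    fwd = np.empty(len(signal), dtype=float)
    bwd = np.empty(len(signal), dtype=float)
    fwd[0] = signal[0]
    for i in range(1, len(signal)):              # causal pass
        fwd[i] = alpha * signal[i] + (1 - alpha) * fwd[i - 1]
    bwd[-1] = fwd[-1]
    for i in range(len(signal) - 2, -1, -1):     # anti-causal pass
        bwd[i] = alpha * fwd[i] + (1 - alpha) * bwd[i + 1]
    return bwd

edge = np.r_[np.zeros(32), np.ones(32)]          # step edge
smoothed = exp_filter(edge, alpha=0.3)
assert smoothed[0] < 0.1 and smoothed[-1] > 0.9  # flat regions preserved
assert 0.0 < smoothed[32] < 1.0                  # edge softened
```

    Because each output sample depends only on one neighbor per pass, the recursion maps naturally onto a fixed-latency hardware pipeline, which is what makes the RTL implementation throughput- and power-efficient.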

  20. Orientation tuning of contrast masking caused by motion streaks.

    PubMed

    Apthorp, Deborah; Cass, John; Alais, David

    2010-08-01

    We investigated whether the oriented trails of blur left by fast-moving dots (i.e., "motion streaks") effectively mask grating targets. Using a classic overlay masking paradigm, we varied mask contrast and target orientation to reveal underlying tuning. Fast-moving Gaussian blob arrays elevated thresholds for detection of static gratings, both monoptically and dichoptically. Monoptic masking at high mask (i.e., streak) contrasts is tuned for orientation and exhibits a similar bandwidth to masking functions obtained with grating stimuli (∼30 degrees). Dichoptic masking fails to show reliable orientation-tuned masking, but dichoptic masks at very low contrast produce a narrowly tuned facilitation (∼17 degrees). For iso-oriented streak masks and grating targets, we also explored masking as a function of mask contrast. Interestingly, dichoptic masking shows a classic "dipper"-like TVC function, whereas monoptic masking shows no dip and a steeper "handle". There is a very strong unoriented component to the masking, which we attribute to transiently biased temporal frequency masking. Fourier analysis of "motion streak" images shows interesting differences between dichoptic and monoptic functions and the information in the stimulus. Our data add weight to the growing body of evidence that the oriented blur of motion streaks contributes to the processing of fast motion signals.

  1. Formula for the rms blur circle radius of Wolter telescope based on aberration theory

    NASA Technical Reports Server (NTRS)

    Shealy, David L.; Saha, Timo T.

    1990-01-01

    A formula for the rms blur circle for Wolter telescopes has been derived using the transverse ray aberration expressions of Saha (1985), Saha (1984), and Saha (1986). The resulting formula for the rms blur circle radius over an image plane and a formula for the surface of best focus based on third-, fifth-, and seventh-order aberration theory predict results in good agreement with exact ray tracing. It has also been shown that one of the two terms in the empirical formula of VanSpeybroeck and Chase (1972), for the rms blur circle radius of a Wolter I telescope can be justified by the aberration theory results. Numerical results are given comparing the rms blur radius and the surface of best focus vs the half-field angle computed by skew ray tracing and from analytical formulas for grazing incidence Wolter I-II telescopes and a normal incidence Cassegrain telescope.

  2. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large angular rate. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the blurred star map due to large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
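    The motion-blur degradation to be inverted can be sketched as convolution of the star spot with a linear streak kernel; the horizontal direction and streak length below are illustrative assumptions (in practice they would follow from the angular rate and exposure time):

```python
# Sketch: linear motion-blur model for a star spot. Direction and streak
# length are hypothetical; a real model derives them from the angular rate.
import numpy as np
from scipy import ndimage

def motion_blur_horizontal(image, length):
    kernel = np.zeros((1, length))
    kernel[:] = 1.0 / length                   # uniform streak kernel
    return ndimage.convolve(image, kernel, mode="constant")

star = np.zeros((15, 15)); star[7, 7] = 1.0    # ideal point source
blurred = motion_blur_horizontal(star, length=5)
assert np.isclose(blurred.sum(), 1.0)          # flux is conserved
assert (blurred[7] > 0).sum() == 5             # 5-pixel streak
```

    The restoration step described above amounts to estimating such a kernel from the angular rate and deconvolving it from the corrupted star map.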

  3. Multichannel blind deconvolution of spatially misaligned images.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2005-07-01

    Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.

  4. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    PubMed

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  5. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  6. A New Variational Approach for Multiplicative Noise and Blur Removal

    PubMed Central

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang

    2017-01-01

    This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to be able to reduce blocky effects by being aware of high-order smoothness) and a shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved by using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh). A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single- and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802

  7. Neuronal mechanisms underlying differences in spatial resolution between darks and lights in human vision.

    PubMed

    Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel

    2017-12-01

    Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.

  9. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
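    The third technique, Gaussian fitting for sub-pixel spot localization, can be sketched with scipy.optimize.curve_fit on a synthetic 1-D intensity profile; the profile parameters and initial guess are illustrative assumptions:

```python
# Sketch: sub-pixel spot localization by fitting a Gaussian to a sampled
# intensity profile (synthetic, noiseless data; illustrative parameters).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma**2))

x = np.arange(0, 20, 1.0)                     # pixel coordinates
true_mu = 9.3                                 # sub-pixel spot center
profile = gaussian(x, 1.0, true_mu, 1.8)      # PSF-like spot image
popt, _ = curve_fit(gaussian, x, profile, p0=[1.0, 10.0, 2.0])
assert abs(popt[1] - true_mu) < 0.01          # center recovered sub-pixel
```

    Even though the pixels sample the spot coarsely, the fitted center parameter recovers the location far below the pixel pitch, which is the principle behind point localization microscopy.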

  10. Slight Blurring in Newer Image from Mars Orbiter

    NASA Image and Video Library

    2018-02-09

    These two frames were taken of the same place on Mars by the same orbiting camera before (left) and after some images from the camera began showing unexpected blur. The images are from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. They show a patch of ground about 500 feet or 150 meters wide in Gusev Crater. The one on the left, from HiRISE observation ESP_045173_1645, was taken March 16, 2016. The one on the right was taken Jan. 9, 2018. Gusev Crater, at 15 degrees south latitude and 176 degrees east longitude, is the landing site of NASA's Spirit Mars rover in 2004 and a candidate landing site for a rover to be launched in 2020. HiRISE images provide important information for evaluating potential landing sites. The smallest boulders with measurable diameters in the left image are about 3 feet (90 centimeters) wide. In the blurred image, the smallest measurable boulders are about double that width. As of early 2018, most full-resolution images from HiRISE are not blurred, and the cause of the blur is still under investigation. Even before blurred images were first seen, in 2017, observations with HiRISE commonly used a technique that covers more ground area at half the resolution. This shows features smaller than can be distinguished with any other camera orbiting Mars, and little blurring has appeared in these images. https://photojournal.jpl.nasa.gov/catalog/PIA22215

  11. Blur and the perception of depth at occlusions.

    PubMed

    Zannoli, Marina; Love, Gordon D; Narain, Rahul; Banks, Martin S

    2016-01-01

    The depth ordering of two surfaces, one occluding the other, can in principle be determined from the correlation between the occlusion border's blur and the blur of the two surfaces. If the border is blurred, the blurrier surface is nearer; if the border is sharp, the sharper surface is nearer. Previous research has found that observers do not use this informative cue. We reexamined this finding. Using a multiplane display, we confirmed the previous finding: Our observers did not accurately judge depth order when the blur was rendered and the stimulus presented on one plane. We then presented the same simulated scenes on multiple planes, each at a different focal distance, so the blur was created by the optics of the eye. Performance was now much better, which shows that depth order can be reliably determined from blur information but only when the optical effects are similar to those in natural viewing. We asked what the critical differences were in the single- and multiplane cases. We found that chromatic aberration provides useful information but accommodative microfluctuations do not. In addition, we examined how image formation is affected by occlusions and observed some interesting phenomena that allow the eye to see around and through occluding objects and may allow observers to estimate depth in da Vinci stereopsis, where one eye's view is blocked. Finally, we evaluated how accurately different rendering and displaying techniques reproduce the retinal images that occur in real occlusions. We discuss implications for computer graphics.

  12. High-Impact Educational Practices: What We Can Learn from the Traditional Undergraduate Setting

    ERIC Educational Resources Information Center

    Sandeen, Cathy

    2012-01-01

    The higher education ecosystem is shifting. Lines are blurring. Continuing professional education--with its focus on nontraditional students, applied learning, support of workforce development, and use of innovative and technology-based pedagogy--was commonly perceived to function outside the core of the academy, which focused on a liberal-arts…

  13. Self-Monitoring of Gaze in High Functioning Autism

    ERIC Educational Resources Information Center

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  14. Hypertextual Ultrastructures: Movement and Containment in Texts and Hypertexts

    ERIC Educational Resources Information Center

    Coste, Rosemarie L.

    2009-01-01

    The surface-level experience of hypertextuality as formless and unbounded, blurring boundaries among texts and between readers and writers, is created by a deep structure which is not normally presented to readers and which, like the ultrastructure of living cells, defines and controls texts' nature and functions. Most readers, restricted to…

  15. Ocular-Motor Function and Information Processing: Implications for the Reading Process.

    ERIC Educational Resources Information Center

    Leisman, Gerald; Schwartz, Joddy

    This paper discusses the dichotomy between continually moving eyes and the lack of blurred visual experience. A discontinuous model of visual perception is proposed, with the discontinuities being phase and temporally related to saccadic eye movements. It is further proposed that deviant duration and angular velocity characteristics of saccades in…

  16. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.

  17. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
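    The quoted timings are easy to sanity-check; all numbers below are taken from the abstract:

```python
# Reported single-core reconstruction time per iteration and the
# speedup measured on 32 compute nodes.
single_core_hours = 12.5
speedup = 158

minutes_per_iteration = single_core_hours * 60 / speedup
print(round(minutes_per_iteration, 2))  # 4.75 -- under 5 minutes, as reported
```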

  18. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  19. Resolving Fast, Confined Diffusion in Bacteria with Image Correlation Spectroscopy.

    PubMed

    Rowland, David J; Tuson, Hannah H; Biteen, Julie S

    2016-05-24

    By following single fluorescent molecules in a microscope, single-particle tracking (SPT) can measure diffusion and binding on the nanometer and millisecond scales. Still, although SPT can at its limits characterize the fastest biomolecules as they interact with subcellular environments, this measurement may require advanced illumination techniques such as stroboscopic illumination. Here, we address the challenge of measuring fast subcellular motion by instead analyzing single-molecule data with spatiotemporal image correlation spectroscopy (STICS) with a focus on measurements of confined motion. Our SPT and STICS analysis of simulations of the fast diffusion of confined molecules shows that image blur affects both STICS and SPT, and we find biased diffusion rate measurements for STICS analysis in the limits of fast diffusion and tight confinement due to fitting STICS correlation functions to a Gaussian approximation. However, we determine that with STICS, it is possible to correctly interpret the motion that blurs single-molecule images without advanced illumination techniques or fast cameras. In particular, we present a method to overcome the bias due to image blur by properly estimating the width of the correlation function by directly calculating the correlation function variance instead of using the typical Gaussian fitting procedure. Our simulation results are validated by applying the STICS method to experimental measurements of fast, confined motion: we measure the diffusion of cytosolic mMaple3 in living Escherichia coli cells at 25 frames/s under continuous illumination to illustrate the utility of STICS in an experimental parameter regime for which in-frame motion prevents SPT and tight confinement of fast diffusion precludes stroboscopic illumination. 
Overall, our application of STICS to freely diffusing cytosolic protein in small cells extends the utility of single-molecule experiments to the regime of fast confined diffusion without requiring advanced microscopy techniques. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
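    The bias-correction idea above, estimating the correlation width from a direct second moment rather than a Gaussian fit, can be sketched on a toy 1-D correlation function. The box-blur model and all numbers here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy correlation function: a Gaussian core (sigma = 4 px) broadened by
# in-frame motion blur, modelled as convolution with an 11-px box, so a
# pure Gaussian fit would misestimate its width.
x = np.arange(-50, 51, dtype=float)
core = np.exp(-x**2 / (2 * 4.0**2))
box = np.ones(11) / 11
corr = np.convolve(core, box, mode="same")

# Width from the direct (weighted) second moment of the correlation
# function, instead of the usual Gaussian fitting procedure.
w = corr / corr.sum()
variance = np.sum(w * x**2) - np.sum(w * x) ** 2
print(variance > 4.0**2)  # True: blur inflates the measured variance
```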

  20. Visual information underpinning skilled anticipation: The effect of blur on a coupled and uncoupled in situ anticipatory response.

    PubMed

    Mann, David L; Abernethy, Bruce; Farrow, Damian

    2010-07-01

    Coupled interceptive actions are understood to be the result of neural processing-and visual information-which is distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four different visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were found to be better than uncoupled ones, with the blurring of vision found to result in different effects for the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence was found to suggest that low levels of blur may enhance the uncoupled verbal perception of movement.

  1. Image and Video Quality Assessment Using LCD: Comparisons with CRT Conditions

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Callet, Patrick Le; Barba, Dominique

    In this paper, the impact of the display on quality assessment is addressed. Subjective quality assessment experiments have been performed on both LCD and CRT displays. Two sets of still images and two sets of moving pictures have been assessed using either an ACR or a SAMVIQ protocol. Altogether, eight experiments have been conducted. Results are presented and discussed, and some differences are pointed out. Concerning moving pictures, these differences seem to be mainly due to LCD motion artefacts such as motion blur. LCD motion blur has been measured objectively and with psycho-physical experiments. A motion-blur metric based on the temporal characteristics of LCDs can be defined. A prediction model has then been designed to predict the differences in perceived quality between CRT and LCD. This motion-blur-based model enables the estimation of perceived quality on an LCD with respect to the perceived quality on a CRT. Technical solutions to LCD motion blur can thus be evaluated on natural contents by this means.

  2. Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Baek, Sangwook; Lee, Chulhee

    2015-03-01

    In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.

  3. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and...estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target

  4. Effects of Scene Modulation Image Blur and Noise Upon Human Target Acquisition Performance.

    DTIC Science & Technology

    1997-06-01

    AFRL-HE-WP-TR-1998-0012 UNITED STATES AIR FORCE RESEARCH LABORATORY EFFECTS OF SCENE MODULATION IMAGE BLUR AND NOISE UPON HUMAN TARGET...COVERED INTERIM (July 1996 - August 1996) TITLE AND SUBTITLE Effects of Scene Modulation Image Blur and Noise Upon Human Target Acquisition...dilemma in image transmission and display is that we must compromise between the conflicting constraints of dynamic range and noise. Three target

  5. Co-production in community mental health services: blurred boundaries or a game of pretend?

    PubMed

    Kirkegaard, Sine; Andersen, Ditte

    2018-06-01

    The concept of co-production suggests a collaborative production of public welfare services, across boundaries of participant categories, for example professionals, service users, peer-workers and volunteers. While co-production has been embraced in most European countries, the way in which it is translated into everyday practice remains understudied. Drawing on ethnographic data from Danish community mental health services, we attempt to fill this gap by critically investigating how participants interact in an organisational set-up with blurred boundaries between participant categories. In particular, we clarify under what circumstances the blurred boundaries emerge as believable. Theoretically, we combine Lamont and Molnár's (2002) distinction between symbolic boundaries and social boundaries with Goffman's (1974) microanalysis of "principles of convincingness". The article presents three findings: (1) co-production is employed as a symbolic resource for blurring social boundaries; (2) the believability of blurred boundaries is worked up through participants' access to resources of validation, knowledge and authority; and (3) incongruence between symbolic and social boundaries institutionalises practices where participants merely act 'as if' boundaries are blurred. Clarification of the principles of convincingness contributes to a general discussion of how co-production frames the everyday negotiation of symbolic and social boundaries in public welfare services. © 2018 Foundation for the Sociology of Health & Illness.

  6. Accommodation Responds to Optical Vergence and Not Defocus Blur Alone.

    PubMed

    Del Águila-Carrasco, Antonio J; Marín-Franch, Iván; Bernal-Molina, Paula; Esteve-Taboada, José J; Kruger, Philip B; Montés-Micó, Robert; López-Gil, Norberto

    2017-03-01

    To determine whether changes in wavefront spherical curvature (optical vergence) are a directional cue for accommodation. Nine subjects participated in this experiment. The accommodation response to a monochromatic target was measured continuously with a custom-made adaptive optics system while astigmatism and higher-order aberrations were corrected in real time. There were two experimental open-loop conditions: vergence-driven condition, where the deformable mirror provided sinusoidal changes in defocus at the retina between -1 and +1 diopters (D) at 0.2 Hz; and blur-driven condition, in which the level of defocus at the retina was always 0 D, but a sinusoidal defocus blur between -1 and +1 D at 0.2 Hz was simulated in the target. Right before the beginning of each trial, the target was moved to an accommodative demand of 2 D. Eight out of nine subjects showed sinusoidal responses for the vergence-driven condition but not for the blur-driven condition. Their average (±SD) gain for the vergence-driven condition was 0.50 (±0.28). For the blur-driven condition, average gain was much smaller at 0.07 (±0.03). The ninth subject showed little to no response for both conditions, with average gain <0.08. Vergence-driven condition gain was significantly different from blur-driven condition gain (P = 0.004). Accommodation responds to optical vergence, even without feedback, and not to changes in defocus blur alone. These results suggest the presence of a retinal mechanism that provides a directional cue for accommodation from optical vergence.
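    The gains reported above are amplitude ratios at the 0.2-Hz stimulus frequency; a minimal sketch with a simulated response (the response gain and phase here are made up for illustration, not the study's data):

```python
import numpy as np

fs, f0, T = 50.0, 0.2, 30.0          # sample rate (Hz), stimulus freq (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
stimulus = 1.0 * np.sin(2 * np.pi * f0 * t)        # +/-1 D sinusoidal defocus
response = 0.5 * np.sin(2 * np.pi * f0 * t - 0.6)  # simulated accommodation trace

def amplitude_at(signal, f, t):
    """Amplitude of the component at frequency f (a one-bin Fourier
    projection; exact here because T spans whole stimulus cycles)."""
    c = np.mean(signal * np.exp(-2j * np.pi * f * t))
    return 2 * np.abs(c)

gain = amplitude_at(response, f0, t) / amplitude_at(stimulus, f0, t)
print(round(gain, 2))  # 0.5, the simulated gain
```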

  7. Signal dependence of inter-pixel capacitance in hybridized HgCdTe H2RG arrays for use in James Webb space telescope's NIRcam

    NASA Astrophysics Data System (ADS)

    Donlon, Kevan; Ninkov, Zoran; Baum, Stefi

    2016-08-01

    Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRcam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity into the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRcam.
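    The signal-dependent coupling can be modelled by giving each pixel its own blur kernel. In this minimal sketch, the 3x3 nearest-neighbour kernel shape, the linear alpha(signal) interpolation between the ~1.0% and ~0.65% figures above, and the test frame are all assumptions for illustration:

```python
import numpy as np

def ipc_kernel(alpha):
    """3x3 nearest-neighbour IPC kernel with coupling fraction alpha."""
    return np.array([[0.0,   alpha,         0.0],
                     [alpha, 1 - 4 * alpha, alpha],
                     [0.0,   alpha,         0.0]])

def apply_signal_dependent_ipc(image, a_low=0.010, a_high=0.0065):
    """Redistribute each pixel's signal with an alpha that falls from
    a_low at low signal to a_high at high signal, i.e. a blur kernel
    defined locally as a function of signal intensity."""
    s = image / image.max()
    alpha = a_low + (a_high - a_low) * s
    out = np.zeros((image.shape[0] + 2, image.shape[1] + 2))
    for (i, j), val in np.ndenumerate(image):
        # Scatter this pixel's charge with its own, signal-dependent kernel.
        out[i:i + 3, j:j + 3] += val * ipc_kernel(alpha[i, j])
    return out[1:-1, 1:-1]

frame = np.zeros((32, 32))
frame[5, 5] = 1.0        # dim source
frame[20, 20] = 100.0    # bright source
ipc = apply_signal_dependent_ipc(frame)
# The dim pixel leaks a larger *fraction* of its signal to a neighbour:
print(ipc[5, 6] / frame[5, 5] > ipc[20, 21] / frame[20, 20])  # True
```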

  8. Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata

    2014-09-01

    Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are estimated with a maximum likelihood estimator using coincident, collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of the PSF parameters is derived by an iterative routine based on the 4-dimensional Powell's conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel level counts to restore the original quality of the entire MTSAT-1R record.
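    The parameter search can be sketched with SciPy's Powell (conjugate direction) minimizer. This is a 1-parameter Gaussian stand-in for the paper's 4-parameter PSF, with random images in place of the MTSAT pairs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

rng = np.random.default_rng(0)
truth = rng.random((48, 48))                    # stand-in for the MTSAT-2 "truth"
observed = gaussian_filter(truth, sigma=1.7)    # blurred MTSAT-1R analogue

def cost(params):
    """Mean-squared mismatch between the truth blurred by a trial
    Gaussian PSF and the observed image."""
    (sigma,) = np.ravel(params)
    return np.mean((gaussian_filter(truth, sigma=abs(sigma)) - observed) ** 2)

res = minimize(cost, x0=[1.0], method="Powell")  # Powell's conjugate direction search
sigma_hat = abs(np.ravel(res.x)[0])
print(round(sigma_hat, 1))  # 1.7: the blur width is recovered
```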

  9. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  10. Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models

    DTIC Science & Technology

    1998-03-01

    for phase distortions due to noise, which leads to less deblurring as noise increases [41]. In contrast, the vector Wiener filter incorporates some a...AFIT/DS/ENG/98-06 Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models DISSERTATION Stephen D. Ford Captain...Dissertation 4. TITLE AND SUBTITLE 5. FUNDING NUMBERS LINEAR RECONSTRUCTION OF NON-STATIONARY IMAGE ENSEMBLES INCORPORATING BLUR AND NOISE MODELS 6. AUTHOR(S

  11. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur due to several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, relative motion between camera or objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image in order to firstly estimate the degradation parameters, and then, to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of accuracy of image restoration given by an objective criterion.
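    The frequency-domain signature used for parameter estimation, and a simple restoration step, can be sketched as follows. The horizontal box-shaped motion PSF and the ad-hoc regularised inverse filter are the general textbook approach, not necessarily the authors' exact filter:

```python
import numpy as np

N, L = 256, 9                           # image size, motion-blur length (px)
rng = np.random.default_rng(1)
image = rng.random((N, N))

# Linear (horizontal) motion blur: a length-L box PSF.
psf = np.zeros((N, N))
psf[0, :L] = 1.0 / L
OTF = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * OTF))

# Along the motion direction the OTF is sinc-like, with near-zeros
# spaced about N/L apart -- the pattern a spectrum-based estimator of
# the blur length and direction looks for.

# Regularised (Wiener-like) inverse filter with an ad-hoc constant K.
K = 1e-3
W = np.conj(OTF) / (np.abs(OTF) ** 2 + K)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

print(np.mean((restored - image) ** 2) < np.mean((blurred - image) ** 2))  # True
```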

  12. Materiality matters: Blurred boundaries and the domestication of functional foods

    PubMed Central

    Weiner, Kate; Will, Catherine

    2015-01-01

    Previous scholarship on novel foods, including functional foods, has suggested that they are difficult to categorise for both regulators and users. It is argued that they blur the boundary between ‘food' and ‘drug' and that uncertainties about the products create ‘experimental' or ‘restless' approaches to consumption. We investigate these uncertainties drawing on data about the use of functional foods containing phytosterols, which are licensed for sale in the EU for people wishing to reduce their cholesterol. We start from an interest in the products as material objects and their incorporation into everyday practices. We consider the scripts encoded in the physical form of the products through their regulation, production and packaging and find that these scripts shape but do not determine their use. The domestication of phytosterols involves bundling the products together with other objects (pills, supplements, foodstuffs). Considering their incorporation into different systems of objects offers new understandings of the products as foods or drugs. In their accounts of their practices, consumers appear to be relatively untroubled by uncertainties about the character of the products. We conclude that attending to materials and practices offers a productive way to open up and interrogate the idea of categorical uncertainties surrounding new food products. PMID:26157471

  13. Imaging quality analysis of multi-channel scanning radiometer

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Xu, Wujun; Wang, Chengliang

    2008-03-01

    The multi-channel scanning radiometer on board the FY-2 geostationary meteorological satellite plays a key role in remote sensing because of its wide field of view and continuous acquisition of multi-spectral images. It is important to evaluate image quality once the performance parameters of the imaging system have been validated. Several methods of evaluating imaging quality are discussed. Of these methods, the most fundamental is the MTF. The MTF of a photoelectric scanning remote-sensing instrument, in the scanning direction, is the product of the optics transfer function (OTF), detector transfer function (DTF) and electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror should be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions; its values are used for alignment purposes and to determine the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring, among them the optical elements, the sampling of the detector itself, post-processing, or the earth's atmosphere for systems that image through it. Theoretical calculation and actual measurement show that the DTF and ETF are the main factors in the system MTF and that the imaging quality satisfies the requirements of the instrument design.
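    The scan-direction cascade, system MTF = OTF x DTF x ETF, can be sketched with illustrative component shapes. The Gaussian, sinc, and first-order roll-offs and their cut-off values below are assumptions for illustration, not FY-2 figures:

```python
import numpy as np

f = np.linspace(0.0, 0.5, 101)            # spatial frequency, cycles/pixel

otf = np.exp(-(f / 0.45) ** 2)            # optics: Gaussian roll-off
dtf = np.abs(np.sinc(f))                  # detector: aperture sinc
etf = 1.0 / np.sqrt(1 + (f / 0.4) ** 2)   # electronics: first-order low-pass

# In the scanning direction the system MTF is the product of the three,
# so it can never exceed the weakest component at any frequency.
system_mtf = otf * dtf * etf
print(np.all(system_mtf <= np.minimum.reduce([otf, dtf, etf])))  # True
```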

  14. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    NASA Astrophysics Data System (ADS)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.

  15. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming and a simple optical layout, among other advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective. Nevertheless, it was not sufficient for some images with very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.

  16. GrabBlur--a framework to facilitate the secure exchange of whole-exome and -genome SNV data using VCF files.

    PubMed

    Stade, Björn; Seelow, Dominik; Thomsen, Ingo; Krawczak, Michael; Franke, Andre

    2014-01-01

    Next Generation Sequencing (NGS) of whole exomes or genomes is increasingly being used in human genetic research and diagnostics. Sharing NGS data with third parties can help physicians and researchers to identify causative or predisposing mutations for a specific sample of interest more efficiently. In many cases, however, the exchange of such data may collide with data privacy regulations. GrabBlur is a newly developed tool to aggregate and share NGS-derived single nucleotide variant (SNV) data in a public database, keeping individual samples unidentifiable. In contrast to other currently existing SNV databases, GrabBlur includes phenotypic information and contact details of the submitter of a given database entry. By means of GrabBlur human geneticists can securely and easily share SNV data from resequencing projects. GrabBlur can ease the interpretation of SNV data by offering basic annotations, genotype frequencies and in particular phenotypic information - given that this information was shared - for the SNV of interest. GrabBlur facilitates the combination of phenotypic and NGS data (VCF files) via a local interface or command line operations. Data submissions may include HPO (Human Phenotype Ontology) terms, other trait descriptions, NGS technology information and the identity of the submitter. Most of this information is optional and its provision at the discretion of the submitter. Upon initial intake, GrabBlur merges and aggregates all sample-specific data. If a certain SNV is rare, the sample-specific information is replaced with the submitter identity. Generally, all data in GrabBlur are highly aggregated so that they can be shared with others while ensuring maximum privacy. Thus, it is impossible to reconstruct complete exomes or genomes from the database or to re-identify single individuals. 
After the individual information has been sufficiently "blurred", the data can be uploaded into a publicly accessible domain where aggregated genotypes are provided alongside phenotypic information. A web interface allows querying the database and extracting gene-wise SNV information. If an interesting SNV is found, the interrogator can contact the submitter to exchange further information on the carrier and clarify, for example, whether the carrier's phenotype matches that of their own patient.
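    The aggregation and "blurring" step described above can be sketched as follows. This is an illustrative reconstruction only: the record layout, `aggregate_snvs`, and `RARE_THRESHOLD` are hypothetical names, not GrabBlur's actual interface.

```python
# Hypothetical sketch of the aggregation step: genotypes are pooled per SNV,
# and for rare variants the per-genotype breakdown is suppressed so that only
# aggregate counts and submitter contacts remain.
RARE_THRESHOLD = 3  # assumed cutoff below which a variant counts as "rare"

def aggregate_snvs(submissions):
    """submissions: list of dicts with keys 'snv', 'genotype', 'sample_id', 'submitter'."""
    pooled = {}
    for rec in submissions:
        entry = pooled.setdefault(
            rec["snv"], {"count": 0, "genotypes": {}, "submitters": set()})
        entry["count"] += 1
        entry["genotypes"][rec["genotype"]] = \
            entry["genotypes"].get(rec["genotype"], 0) + 1
        entry["submitters"].add(rec["submitter"])
    # For rare SNVs, expose only the aggregate count plus submitter identities.
    for entry in pooled.values():
        if entry["count"] < RARE_THRESHOLD:
            entry["genotypes"] = None  # "blurred": no per-genotype breakdown
    return pooled

subs = [
    {"snv": "chr1:12345A>G", "genotype": "0/1", "sample_id": "s1", "submitter": "lab@x"},
    {"snv": "chr1:12345A>G", "genotype": "0/1", "sample_id": "s2", "submitter": "lab@y"},
]
pooled = aggregate_snvs(subs)
```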

  17. Effect of intensive insulin therapy on macular biometrics, plasma VEGF and its soluble receptor in newly diagnosed diabetic patients.

    PubMed

    Hernández, Cristina; Zapata, Miguel A; Losada, Eladio; Villarroel, Marta; García-Ramírez, Marta; García-Arumí, José; Simó, Rafael

    2010-07-01

To evaluate whether intensive insulin therapy leads to changes in macular biometrics (volume and thickness) in newly diagnosed diabetic patients with acute hyperglycaemia and its relationship with serum levels of vascular endothelial growth factor (VEGF) and its soluble receptor (sFlt-1). Twenty-six newly diagnosed diabetic patients admitted to our hospital to initiate intensive insulin treatment were prospectively recruited. Examinations were performed on admission (day 1) and during follow-up (days 3, 10 and 21) and included a questionnaire regarding the presence of blurred vision, standardized refraction measurements and optical coherence tomography. Plasma VEGF and sFlt-1 were assessed by ELISA at baseline and during follow-up. At study entry, seven patients (26.9%) complained of blurred vision and five (19.2%) developed blurred vision during follow-up. Macular volume and thickness increased significantly (p = 0.008 and p = 0.04, respectively) in the group with blurred vision at day 3 and returned to the baseline value at 10 days. This pattern was present in 18 out of the 24 eyes from patients with blurred vision. By contrast, macular biometrics remained unchanged in the group without blurred vision. We did not detect any significant changes in VEGF levels during follow-up. By contrast, a significant reduction of sFlt-1 was observed in those patients with blurred vision at day 3 (p = 0.03) with normalization by day 10. Diabetic patients with blurred vision after starting insulin therapy present a significant transient increase in macular biometrics which is associated with a decrease in circulating sFlt-1. Copyright (c) 2010 John Wiley & Sons, Ltd.

  18. Determining the relative contribution of retinal disparity and blur cues to ocular accommodation in Down syndrome.

    PubMed

    Doyle, Lesley; Saunders, Kathryn J; Little, Julie-Anne

    2017-01-10

    Individuals with Down syndrome (DS) often exhibit hypoaccommodation alongside accurate vergence. This study investigates the sensitivity of the two systems to retinal disparity and blur cues, establishing the relationship between the two in terms of accommodative-convergence to accommodation (AC/A) and convergence-accommodation to convergence (CA/C) ratios. An objective photorefraction system measured accommodation and vergence under binocular conditions and when retinal disparity and blur cues were removed. Participants were aged 6-16 years (DS n = 41, controls n = 76). Measures were obtained from 65.9% of participants with DS and 100% of controls. Accommodative and vergence responses were reduced with the removal of one or both cues in controls (p < 0.007). For participants with DS, removal of blur was less detrimental to accommodative responses than removal of disparity; accommodative responses being significantly better when all cues were available or when blur was removed in comparison to when proximity was the only available cue. AC/A ratios were larger and CA/C ratios smaller in participants with DS (p < 0.00001). This study demonstrates that retinal disparity is the main driver to both systems in DS and illustrates the diminished influence of retinal blur. High AC/A and low CA/C ratios in combination with disparity-driven responses suggest prioritisation of vergence over accurate accommodation.

  19. Determining the relative contribution of retinal disparity and blur cues to ocular accommodation in Down syndrome

    PubMed Central

    Doyle, Lesley; Saunders, Kathryn J.; Little, Julie-Anne

    2017-01-01

    Individuals with Down syndrome (DS) often exhibit hypoaccommodation alongside accurate vergence. This study investigates the sensitivity of the two systems to retinal disparity and blur cues, establishing the relationship between the two in terms of accommodative-convergence to accommodation (AC/A) and convergence-accommodation to convergence (CA/C) ratios. An objective photorefraction system measured accommodation and vergence under binocular conditions and when retinal disparity and blur cues were removed. Participants were aged 6–16 years (DS n = 41, controls n = 76). Measures were obtained from 65.9% of participants with DS and 100% of controls. Accommodative and vergence responses were reduced with the removal of one or both cues in controls (p < 0.007). For participants with DS, removal of blur was less detrimental to accommodative responses than removal of disparity; accommodative responses being significantly better when all cues were available or when blur was removed in comparison to when proximity was the only available cue. AC/A ratios were larger and CA/C ratios smaller in participants with DS (p < 0.00001). This study demonstrates that retinal disparity is the main driver to both systems in DS and illustrates the diminished influence of retinal blur. High AC/A and low CA/C ratios in combination with disparity-driven responses suggest prioritisation of vergence over accurate accommodation. PMID:28071728

  20. Luminance cues constrain chromatic blur discrimination in natural scene stimuli.

    PubMed

    Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W

    2013-03-22

    Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.

  1. Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.

    PubMed

    Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia

    2011-04-11

An optical analysis is developed to separate forward light scatter of the human eye from the conventional wavefront aberrations in a double-pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blurs from macro-aberrations and micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double-pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
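    The radial-variance metric and its additivity property can be illustrated numerically. The function below is a hedged sketch (the paper's exact definition may differ in normalization); the example checks that convolving two non-negative spot profiles, as in a double-pass system, adds their radial variances.

```python
import numpy as np
from scipy.signal import convolve2d

def radial_variance(img):
    """Second central moment of intensity about the spot centroid (pixel^2)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    y, x = np.indices(img.shape)
    xc = (img * x).sum() / total
    yc = (img * y).sum() / total
    return (img * ((x - xc) ** 2 + (y - yc) ** 2)).sum() / total

def gauss2d(n, sigma):
    """Symmetric Gaussian spot; its radial variance is ~2*sigma^2."""
    ax = np.arange(n) - (n - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

# Two blur stages (e.g. first- and second-pass blur) combine by convolution,
# and their radial variances add.
a, b = gauss2d(41, 2.0), gauss2d(41, 3.0)
c = convolve2d(a, b)  # combined spot blur from both stages
```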

  2. Multi-Stage Target Tracking with Drift Correction and Position Prediction

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Ren, Keyan; Hou, Yibin

    2018-04-01

Most existing tracking methods struggle to combine accuracy with performance, and do not consider the shifts between clarity and blur that often occur. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and corrective measure. We conduct tracking based on a correlation filter, with a corrective-measure module to increase both performance and accuracy. Specifically, a convolutional network is used to handle blur in realistic scenes, trained on a dataset augmented with blurred images generated by three blurring algorithms. We then propose a position-prediction module that reduces the computation cost and makes the tracker more capable of handling fast motion. Experimental results show that our tracking method is more robust than others and more accurate on the benchmark sequences.

  3. LCD motion blur reduction: a signal processing approach.

    PubMed

    Har-Noy, Shay; Nguyen, Truong Q

    2008-02-01

    Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.

  4. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    PubMed

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered-backprojection) and SAFIRE (sinogram affirmed iterative reconstruction) and determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter, and 30 mm height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Object Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare) and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. 
Also, fitted curves of detectability (averaged across contrast levels) as a function of dose were constructed for each reconstruction algorithm and background texture. FBP and SAFIRE were compared for each background type to determine the improvement in detectability at a given dose, and the reduced dose at which SAFIRE had equivalent performance compared to FBP at 100% dose. Detectability increased with increasing radiation dose (P = 2.7 × 10^-59) and contrast level (P = 2.2 × 10^-86) and was higher in the uniform phantom compared to the textured phantoms (P = 6.9 × 10^-51). Overall, SAFIRE had higher d' compared to FBP (P = 0.02). The estimated dose reduction potential of SAFIRE was found to be 8%, 10%, 27%, and 8% for the Texture-A, Texture-B, Texture-C, and uniform phantoms, respectively. In all background types, detectability was higher with SAFIRE compared to FBP. However, the relative improvement observed from SAFIRE was highly dependent on the complexity of the background texture. Iterative algorithms such as SAFIRE should be assessed in the most realistic context possible.

  5. Validation of CT dose-reduction simulation

    PubMed Central

    Massoumzadeh, Parinaz; Don, Steven; Hildebolt, Charles F.; Bae, Kyongtae T.; Whiting, Bruce R.

    2009-01-01

    The objective of this research was to develop and validate a custom computed tomography dose-reduction simulation technique for producing images that have an appearance consistent with the same scan performed at a lower mAs (with fixed kVp, rotation time, and collimation). Synthetic noise is added to projection (sinogram) data, incorporating a stochastic noise model that includes energy-integrating detectors, tube-current modulation, bowtie beam filtering, and electronic system noise. Experimental methods were developed to determine the parameters required for each component of the noise model. As a validation, the outputs of the simulations were compared to measurements with cadavers in the image domain and with phantoms in both the sinogram and image domain, using an unbiased root-mean-square relative error metric to quantify agreement in noise processes. Four-alternative forced-choice (4AFC) observer studies were conducted to confirm the realistic appearance of simulated noise, and the effects of various system model components on visual noise were studied. The “just noticeable difference (JND)” in noise levels was analyzed to determine the sensitivity of observers to changes in noise level. Individual detector measurements were shown to be normally distributed (p>0.54), justifying the use of a Gaussian random noise generator for simulations. Phantom tests showed the ability to match original and simulated noise variance in the sinogram domain to within 5.6%±1.6% (standard deviation), which was then propagated into the image domain with errors less than 4.1%±1.6%. Cadaver measurements indicated that image noise was matched to within 2.6%±2.0%. More importantly, the 4AFC observer studies indicated that the simulated images were realistic, i.e., no detectable difference between simulated and original images (p=0.86) was observed. 
JND studies indicated that observers’ sensitivity to change in noise levels corresponded to a 25% difference in dose, which is far larger than the noise accuracy achieved by simulation. In summary, the dose-reduction simulation tool demonstrated excellent accuracy in providing realistic images. The methodology promises to be a useful tool for researchers and radiologists to explore dose reduction protocols in an effort to produce diagnostic images with radiation dose “as low as reasonably achievable.” PMID:19235386
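    The core idea of injecting synthetic sinogram noise to emulate a lower-mAs scan can be sketched as below. This minimal version models quantum noise only, approximated as Gaussian in the log (attenuation) domain; the validated tool described above additionally models energy-integrating detectors, tube-current modulation, bowtie filtering, and electronic system noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dose_reduction(sinogram, incident_counts, dose_fraction):
    """Add synthetic Gaussian noise to an attenuation sinogram so it matches
    the noise level of the same scan at `dose_fraction` of the original mAs.
    Simplified quantum-noise-only sketch."""
    detected = incident_counts * np.exp(-sinogram)   # mean transmitted counts
    # Variance of log-transformed data is ~1/N; the extra variance needed to
    # go from full dose to dose_fraction f is (1/f - 1)/N.
    extra_var = (1.0 / dose_fraction - 1.0) / detected
    return sinogram + rng.normal(0.0, np.sqrt(extra_var))

sino = np.full((180, 256), 2.0)   # uniform attenuation line integrals
low_dose = simulate_dose_reduction(sino, incident_counts=1e5, dose_fraction=0.25)
```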

  6. Spasm of the near reflex associated with head injury.

    PubMed

    Knapp, Christopher; Sachdev, Arun; Gottlob, Irene

    2002-03-01

    Spasm of the near reflex is characterized by intermittent miosis, convergence spasm and pseudomyopia with blurred vision at distance. Usually, it is a functional disorder in young patients with underlying emotional problems. Only rarely is it caused by organic disorder. We report a patient who developed convergent spasm associated with miosis after head trauma at the age of 84 years.

  7. Normal, nearsightedness, and farsightedness (image)

    MedlinePlus

    ... Nearsightedness results in blurred vision when the visual image is focused in front of the retina, rather ... blurred. Farsightedness is the result of the visual image being focused behind the retina rather than directly ...

  8. Hazardous Continuation Backward in Time in Nonlinear Parabolic Equations, and an Experiment in Deblurring Nonlinearly Blurred Imagery

    PubMed Central

    Carasso, Alfred S

    2013-01-01

    Identifying sources of ground water pollution, and deblurring nanoscale imagery as well as astronomical galaxy images, are two important applications involving numerical computation of parabolic equations backward in time. Surprisingly, very little is known about backward continuation in nonlinear parabolic equations. In this paper, an iterative procedure originating in spectroscopy in the 1930’s, is adapted into a useful tool for solving a wide class of 2D nonlinear backward parabolic equations. In addition, previously unsuspected difficulties are uncovered that may preclude useful backward continuation in parabolic equations deviating too strongly from the linear, autonomous, self adjoint, canonical model. This paper explores backward continuation in selected 2D nonlinear equations, by creating fictitious blurred images obtained by using several sharp images as initial data in these equations, and capturing the corresponding solutions at some positive time T. Successful backward continuation from t=T to t = 0, would recover the original sharp image. Visual recognition provides meaningful evaluation of the degree of success or failure in the reconstructed solutions. Instructive examples are developed, illustrating the unexpected influence of certain types of nonlinearities. Visually and statistically indistinguishable blurred images are presented, with vastly different deblurring results. These examples indicate that how an image is nonlinearly blurred is critical, in addition to the amount of blur. The equations studied represent nonlinear generalizations of Brownian motion, and the blurred images may be interpreted as visually expressing the results of novel stochastic processes. PMID:26401430

  9. Hazardous Continuation Backward in Time in Nonlinear Parabolic Equations, and an Experiment in Deblurring Nonlinearly Blurred Imagery.

    PubMed

    Carasso, Alfred S

    2013-01-01

    Identifying sources of ground water pollution, and deblurring nanoscale imagery as well as astronomical galaxy images, are two important applications involving numerical computation of parabolic equations backward in time. Surprisingly, very little is known about backward continuation in nonlinear parabolic equations. In this paper, an iterative procedure originating in spectroscopy in the 1930's, is adapted into a useful tool for solving a wide class of 2D nonlinear backward parabolic equations. In addition, previously unsuspected difficulties are uncovered that may preclude useful backward continuation in parabolic equations deviating too strongly from the linear, autonomous, self adjoint, canonical model. This paper explores backward continuation in selected 2D nonlinear equations, by creating fictitious blurred images obtained by using several sharp images as initial data in these equations, and capturing the corresponding solutions at some positive time T. Successful backward continuation from t=T to t = 0, would recover the original sharp image. Visual recognition provides meaningful evaluation of the degree of success or failure in the reconstructed solutions. Instructive examples are developed, illustrating the unexpected influence of certain types of nonlinearities. Visually and statistically indistinguishable blurred images are presented, with vastly different deblurring results. These examples indicate that how an image is nonlinearly blurred is critical, in addition to the amount of blur. The equations studied represent nonlinear generalizations of Brownian motion, and the blurred images may be interpreted as visually expressing the results of novel stochastic processes.

  10. Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.

    PubMed

    Yang, Min; Zhang, Wen; Wu, Xiaojun; Wei, Dongtao; Zhao, Yixin; Zhao, Gang; Han, Xu; Zhang, Shunli

    2016-01-01

For the analysis of interior geometry and property changes of a large-sized analog model during a loading or other medium (water or oil) injection process in a non-destructive way, a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomographic imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, switching between positive and negative directions alternately along the translation path until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this method avoids the winding problems of the high-voltage cables and oil-cooling service pipes during rotation, and also simplifies the installation of the high-voltage generator and cooling system. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram that matches rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data which do not contribute to the image reconstruction. Finally, on the basis of the geometrical symmetry property of fan-beam CT scanning, a whole sinogram filling the full 360° range is produced and a standard filtered back-projection (FBP) algorithm is performed to reconstruct artifact-free images.
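    The sinogram-completion step relies on conjugate-ray symmetry. A minimal sketch for the simpler parallel-beam case, where p(theta + 180°, s) = p(theta, -s), is shown below; the paper works in fan-beam geometry, where the analogous symmetry involves the fan angle, so this is an illustration of the principle rather than the paper's method.

```python
import numpy as np

def fill_full_sinogram(short_sino, angles_deg):
    """Complete a 360-degree parallel-beam sinogram from a partial angular
    range using conjugate-ray symmetry: p(theta + 180, s) = p(theta, -s)."""
    full = np.full((360, short_sino.shape[1]), np.nan)
    for row, ang in zip(short_sino, angles_deg):
        full[ang % 360] = row                    # measured projection
        conj = (ang + 180) % 360
        if np.isnan(full[conj]).all():
            full[conj] = row[::-1]               # mirrored detector ordering
    return full

# Synthetic point-object sinogram over a symmetric detector coordinate.
s = np.linspace(-1.0, 1.0, 65)
def proj(theta_deg):
    t = np.radians(theta_deg)
    s0 = 0.4 * np.cos(t) + 0.2 * np.sin(t)       # point object at (0.4, 0.2)
    return np.exp(-((s - s0) ** 2) / 0.02)

angles = np.arange(270)                          # 270-degree short scan
short = np.array([proj(a) for a in angles])
full = fill_full_sinogram(short, angles)         # ready for standard FBP
```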

  11. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subsets expectation maximization (OSEM) algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
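    The reconstruction step can be sketched with the basic ML-EM update (OSEM with a single subset). This toy two-voxel example is illustrative only, not the authors' implementation.

```python
import numpy as np

def mlem(A, y, iterations=200):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).
    A: system matrix (rays x voxels); y: measured sinogram counts."""
    x = np.ones(A.shape[1])                   # nonnegative initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(iterations):
        proj = A @ x                          # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x

# Tiny 2-voxel example: two rays, each seeing one voxel plus a bit of the other.
A = np.array([[1.0, 0.2],
              [0.2, 1.0]])
truth = np.array([4.0, 1.0])
y = A @ truth                                 # noiseless "sinogram"
x = mlem(A, y)
```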

  12. Synthesis and quality control of fluorodeoxyglucose and performance assessment of Siemens MicroFocus 220 small animal PET scanner

    NASA Astrophysics Data System (ADS)

    Phaterpekar, Siddhesh Nitin

The scope of this article is to cover the synthesis and quality control procedures involved in the production of fludeoxyglucose (18F-FDG). The article also describes the cyclotron production of the 18F radioisotope and gives a brief overview of the operation of a fixed-energy medical cyclotron. The quality control procedures for FDG involve radiochemical and radionuclidic purity tests, pH tests, chemical purity tests, sterility tests and endotoxin tests. Each of these procedures was carried out for multiple batches of FDG, with a passing rate of 95% among 20 batches. The article also covers the quality assurance steps for the Siemens MicroPET Focus 220 scanner using a Jaszczak phantom. We have carried out spatial resolution tests on the scanner, with an average transaxial resolution of 1.775 mm at 2-3 mm offset. Tests involved detector efficiency, blank scan sinograms and transmission sinograms. A series of radioactivity distribution tests was also carried out on a uniform phantom, characterizing the variations in radioactivity and uniformity by using cylindrical ROIs in the transverse region of the final image. The purpose of these quality control tests is to make sure the manufactured FDG is biocompatible with the human body. Quality assurance tests are carried out on PET scanners to ensure efficient performance and to make sure the quality of the acquired images reflects the radioactivity distribution in the subject of interest.
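    A simple figure of merit for the cylindrical-ROI uniformity test can be sketched as below. The thesis does not state which metric was used, so the (max - min)/(max + min) integral-uniformity formula here is an assumption, chosen because it is a common phantom-QA convention.

```python
import numpy as np

def integral_uniformity(roi_means):
    """Percent integral uniformity across ROI mean values: 100*(max-min)/(max+min).
    Assumed metric for illustration; smaller is more uniform."""
    hi, lo = np.max(roi_means), np.min(roi_means)
    return 100.0 * (hi - lo) / (hi + lo)

# Mean activity concentration measured in five cylindrical ROIs placed in the
# transverse slices of a uniform phantom (illustrative values).
rois = np.array([101.2, 99.8, 100.5, 98.9, 100.1])
u = integral_uniformity(rois)
```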

  13. Differences in children and adolescents' ability of reporting two CVS-related visual problems.

    PubMed

    Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao

    2013-01-01

The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants were 913 children and adolescents aged 6-17 years. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among the 873 participants who had never worn glasses as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings were discussed. Little is known about children's ability to report their visual problems, an issue critical to the diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports with clinical examination results and found that children can correctly report blurred distance vision but not dry eyes.

  14. Estimation of stereovision in conditions of blurring simulation

    NASA Astrophysics Data System (ADS)

    Krumina, Gunta; Ozolinsh, Maris; Lacis, Ivazs; Lyakhovetskii, Vsevolod

    2005-08-01

The aim of this study was to evaluate the simulation of eye pathologies, such as amblyopia and cataracts, to estimate stereovision in artificial conditions, and to compare the results on the stereothreshold obtained in artificial and real-pathologic conditions. A characteristic of these real-life forms of reduced vision is a blurred image in one of the eyes. The blurring was simulated by (i) defocusing, (ii) blurred stimuli on the screen, and (iii) occluding an eye with PLZT or PDLC plates. When comparing the methods, two parameters were used: the subject's visual acuity and the modulation depth of the image. The eye occluder method appeared to systematically provide higher stereothreshold values than the rest of the methods. The PLZT and PDLC plates scattered more in the blue and decreased the contrast of the stimuli when the blurring degree was increased. In the eye occluder method, the stereothreshold increased faster than in the defocusing and monitor-stimuli methods when the visual acuity difference was higher than 0.4. It has been shown that the PLZT and PDLC plates are good optical phantoms for the simulation of a cataract, while the defocusing and monitor-stimuli methods are more suitable for amblyopia.

  15. Seeing blur: 'motion sharpening' without motion.

    PubMed Central

    Georgeson, Mark A; Hammett, Stephen T

    2002-01-01

    It is widely supposed that things tend to look blurred when they are moving fast. Previous work has shown that this is true for sharp edges but, paradoxically, blurred edges look sharper when they are moving than when stationary. This is 'motion sharpening'. We show that blurred edges also look up to 50% sharper when they are presented briefly (8-24 ms) than at longer durations (100-500 ms) without motion. This argues strongly against high-level models of sharpening based specifically on compensation for motion blur. It also argues against a recent, low-level, linear filter model that requires motion to produce sharpening. No linear filter model can explain our finding that sharpening was similar for sinusoidal and non-sinusoidal gratings, since linear filters can never distort sine waves. We also conclude that the idea of a 'default' assumption of sharpness is not supported by experimental evidence. A possible source of sharpening is a nonlinearity in the contrast response of early visual mechanisms to fast or transient temporal changes, perhaps based on the magnocellular (M-cell) pathway. Our finding that sharpening is not diminished at low contrast sets strong constraints on the nature of the nonlinearity. PMID:12137571

  16. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single-slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
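    The dimension-reduction idea can be sketched as follows: sample exponential modes over an assumed physiological rate range, extract a small orthogonal time basis with the SVD, convolve it with an input function, and fit by pseudoinverse. All numerical choices here (rate range, basis size, input function, time grid) are illustrative, not the paper's values.

```python
import numpy as np

t = np.linspace(0.0, 30.0, 120)              # acquisition times, minutes

# Span an assumed physiological range of washout rates with many exponential
# modes, then extract a small orthogonal time basis via the SVD.
rates = np.logspace(-2, 0.5, 200)            # 0.01 to ~3.2 per minute (assumed)
modes = np.exp(-np.outer(rates, t))          # (200 rates, 120 time points)
U, S, Vt = np.linalg.svd(modes, full_matrices=False)
basis = Vt[:6]                               # dimension-reduced time basis

# Form the working basis as the convolution of the orthogonal set with a
# measured input function, then fit a tissue curve via the pseudoinverse.
input_fn = np.exp(-0.3 * t) - np.exp(-3.0 * t)
conv_basis = np.array([np.convolve(b, input_fn)[: t.size] for b in basis])

true_tissue = 0.7 * np.exp(-0.15 * t) + 0.3 * np.exp(-1.1 * t)
true_tac = np.convolve(true_tissue, input_fn)[: t.size]
coeffs = np.linalg.pinv(conv_basis.T) @ true_tac
fit = conv_basis.T @ coeffs
rel_err = np.linalg.norm(fit - true_tac) / np.linalg.norm(true_tac)
```

A handful of basis coefficients reproduces any first-order compartmental response in the sampled rate range, which is the redundancy the paper exploits.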

  17. Sinogram restoration for ultra-low-dose x-ray multi-slice helical CT by nonparametric regression

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Siddiqui, Khan; Zhu, Bin; Tao, Yang; Siegel, Eliot

    2007-03-01

During the last decade, x-ray computed tomography (CT) has been applied to screen large asymptomatic smoking and nonsmoking populations for early lung cancer detection. Because a larger population will be involved in such screening exams, more and more attention has been paid to studying low-dose, even ultra-low-dose, x-ray CT. However, reducing CT radiation exposure increases the noise level in the sinogram, thereby degrading the quality of reconstructed CT images as well as causing more streak artifacts near the apices of the lung. Thus, how to reduce the noise levels and streak artifacts in low-dose CT images has become a meaningful topic. Since multi-slice helical CT has replaced conventional stop-and-shoot CT in many clinical applications, this research mainly focused on the noise reduction issue in multi-slice helical CT. The experiment data were provided by a Siemens SOMATOM Sensation 16-slice helical CT scanner. They included both conventional CT data acquired under a 120 kVp, 119 mA protocol and ultra-low-dose CT data acquired under a 120 kVp, 10 mA protocol. All other settings were the same as those of conventional CT. In this paper, a nonparametric smoothing method with thin-plate smoothing splines and a roughness penalty was proposed to restore the ultra-low-dose CT raw data. Each projection frame was first divided into blocks, and then the 2D data in each block were fitted to a thin-plate smoothing-spline surface via minimizing a roughness-penalized least squares objective function. By doing so, the noise in each ultra-low-dose CT projection was reduced by leveraging the information contained not only within each individual projection profile, but also among nearby profiles. Finally, the restored ultra-low-dose projection data were fed into the standard filtered back projection (FBP) algorithm to reconstruct CT images.
The rebuilt results as well as the comparison between proposed approach and traditional method were given in the results and discussions section, and showed effectiveness of proposed thin-plate based nonparametric regression method.
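
    The block-wise roughness-penalized fit can be illustrated with a simplified numpy sketch. A quadratic Laplacian penalty stands in for the thin-plate spline machinery, and `smooth_block` and its test data are hypothetical, not the authors' code:

```python
import numpy as np

def smooth_block(block, lam=1.0):
    """Roughness-penalized least-squares fit of one projection block.

    Minimizes ||f - block||^2 + lam * ||L f||^2, where L is a discrete
    Laplacian (a quadratic stand-in for the thin-plate roughness penalty).
    """
    h, w = block.shape
    n = h * w
    L = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            L[k, k] = 4.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < h and 0 <= jj < w:
                    L[k, ii * w + jj] = -1.0
                else:
                    L[k, k] -= 1.0  # no penalty across the block border
    A = np.eye(n) + lam * (L.T @ L)  # normal equations of the penalized fit
    return np.linalg.solve(A, block.ravel()).reshape(h, w)

# A noisy low-dose projection block: the fit pulls values toward neighbors.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(1.0, 2.0, 8), np.linspace(1.0, 2.0, 8))
noisy = clean + 0.3 * rng.standard_normal((8, 8))
restored = smooth_block(noisy, lam=2.0)
```

    Setting `lam=0` returns the data unchanged; larger values trade fidelity for smoothness, which is the same trade-off the roughness penalty controls in the paper.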

  18. Blur-resistant perimetric stimuli.

    PubMed

    Horner, Douglas G; Dul, Mitchell W; Swanson, William H; Liu, Tiffany; Tran, Irene

    2013-05-01

    To develop perimetric stimuli that are resistant to the effects of peripheral defocus. One eye each was tested on subjects free of eye disease. Experiment 1 assessed spatial frequency, testing 12 subjects at eccentricities from 2 to 7 degrees using blur levels from 0 to 3 diopters (D) for two (Gabor) stimuli (spatial SD, 0.5 degrees; spatial frequencies, 0.5 and 1.0 cycles per degree [cpd]). Experiment 2 assessed stimulus size, testing 12 subjects at eccentricities from 4 to 7 degrees using blur levels 0 to 6 D for two Gaussians with SD of 0.5 and 0.25 degrees and a 0.5-cpd Gabor with SD of 0.5 degrees. Experiment 3 tested 13 subjects at eccentricities from fixation to 27 degrees using blur levels 0 to 6 D for Gabor stimuli at 56 locations; the spatial frequency ranged from 0.14 to 0.50 cpd with location, and SD was scaled accordingly. In experiment 1, blur by 3 D caused a small decline in log contrast sensitivity for the 0.5-cpd stimulus (mean ± SE, 0.09 ± 0.08 log units) and a larger (t = 7.7, p < 0.0001) decline for the 1.0-cpd stimulus (0.37 ± 0.13 log units). In experiment 2, blur by 6 D caused minimal decline for the larger Gaussian, by 0.17 ± 0.16 log units, and larger (t > 4.5, p < 0.001) declines for the smaller Gaussian (0.33 ± 0.16 log units) and the Gabor (0.36 ± 0.18 log units). In experiment 3, blur by 6 D caused declines by 0.27 ± 0.05 log units for eccentricities from 0 to 10 degrees, by 0.20 ± 0.04 log units for eccentricities from 10 to 20 degrees, and 0.13 ± 0.03 log units for eccentricities from 20 to 27 degrees. Experiments 1 and 2 allowed us to design stimuli for experiment 3 that were resistant to effects of peripheral defocus.

  19. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts, acting over a wider region of the frequency domain. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
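
    The restoration step can be sketched with a standard regularized (Wiener-style) inverse filter; this is a generic stand-in, not the authors' specific inverse filtering technique, and modeling the circle of confusion as a uniform disk PSF is an assumption:

```python
import numpy as np

def disk_psf(radius, size):
    """Circle-of-confusion model: a normalized disk of the given pixel radius."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return psf / psf.sum()

def wiener_restore(blurred, psf, k=1e-3):
    """Regularized inverse filter H* / (|H|^2 + k).

    The small constant k damps the ringing that a plain 1/H inverse would
    amplify near the zeros of the blur spectrum H.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

# Blur a test image with an assumed estimated radius, then invert.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
psf = disk_psf(2, 32)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred, psf, k=1e-3)
```

    The regularization constant `k` is the knob that trades residual blur against ringing, which is the artifact the paper's filter is designed to suppress.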

  20. Stroboscopic Image Modulation to Reduce the Visual Blur of an Object Being Viewed by an Observer Experiencing Vibration

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)

    2014-01-01

    A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).

  1. Hyoscine Skin Patches for Drooling Dilate Pupils and Impair Accommodation: Spectacle Correction for Photophobia and Blurred Vision May Be Warranted

    ERIC Educational Resources Information Center

    Saeed, Manzar; Henderson, Gladys; Dutton, Gordon N.

    2007-01-01

    Hyoscine skin patches diminish salivation by their anticholinergic action. The aim of reporting this case series is to present the ophthalmic side effects in children, and to highlight the precautions to take. Five children (two males, three females; age range 8-18y) with quadriplegic cerebral palsy (Gross Motor Function Classification System…

  2. Influence of image registration on ADC images computed from free-breathing diffusion MRIs of the abdomen

    NASA Astrophysics Data System (ADS)

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H. M.; Poot, Dirk H. J.; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the three following cases: no image processing, Gaussian blurring of the raw DW-MRIs and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In a ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
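
    The voxelwise exponential fit that produces an ADC map can be sketched in a few lines of numpy. The mono-exponential model log S = log S0 - b*ADC is standard; the function name and synthetic data below are illustrative, not from the paper:

```python
import numpy as np

def adc_map(dwis, bvals):
    """Voxelwise ADC from diffusion-weighted images via a log-linear fit.

    dwis:  array (n_b, H, W) of DW-MRI magnitudes
    bvals: n_b diffusion b-values in s/mm^2
    Fits log S = log S0 - b * ADC by least squares in every voxel.
    """
    n_b, h, w = dwis.shape
    logs = np.log(np.maximum(dwis, 1e-12)).reshape(n_b, -1)  # guard zeros
    A = np.vstack([np.asarray(bvals, float), np.ones(n_b)]).T  # design [b, 1]
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    return (-coef[0]).reshape(h, w)  # ADC is the negated slope

# Synthetic check: a uniform "tissue" with a known ADC of 1.5e-3 mm^2/s.
bvals = [0, 200, 500, 800]
s0, adc_true = 100.0, 1.5e-3
dwis = np.stack([s0 * np.exp(-b * adc_true) * np.ones((4, 4)) for b in bvals])
est = adc_map(dwis, bvals)
```

    Because the fit is done independently per voxel, any residual misalignment between the DW-MRIs corrupts the per-voxel signal series, which is exactly why the registration step matters.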

  3. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  4. Blurring personal health and public priorities: an analysis of celebrity health narratives in the public sphere.

    PubMed

    Beck, Christina S; Aubuchon, Stellina M; McKenna, Timothy P; Ruhl, Stephanie; Simmons, Nathaniel

    2014-01-01

    This article explores the functions of personal celebrity health narratives in the public sphere. This study examines data about 157 celebrities, including athletes, actors, musicians, and politicians, who have shared private information regarding a personal health situation (or that of a loved one) with others in the public domain. Part of a larger project on celebrity health narratives, this article highlights three key functions that celebrity health narratives perform--education, inspiration, and activism--and discusses the implications for celebrities and for public conversations about health-related issues.

  5. Artificial testing targets with controllable blur for adaptive optics microscopes

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Tamada, Yosuke; Murata, Takashi; Oya, Shin; Hasebe, Mitsuyasu; Hayano, Yutaka; Kamei, Yasuhiro

    2017-08-01

    This letter proposes a method of configuring a testing target to evaluate the performance of adaptive optics microscopes. In this method, a testing slide with fluorescent beads is used to simultaneously determine the point spread function and the field of view. The point spread function is reproduced to simulate actual biological samples by etching a microstructure on the cover glass. The fabrication process is simplified to facilitate an onsite preparation. The artificial tissue consists of solid materials and silicone oil and is stable for use in repetitive experiments.

  6. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be reliably estimated only on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements of solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may produce unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution fixes the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation yields a significant improvement.

  7. Comparison of Motion Blur Measurement Methods

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2008-01-01

    Motion blur is a significant display property for which accurate, valid measurement methods are needed. Recent measurements of a set of eight displays by a set of six measurement devices provide an opportunity to evaluate techniques of measurement and of the analysis of those measurements.

  8. Indoor Spatial Updating with Reduced Visual Information

    PubMed Central

    Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.

    2016-01-01

    Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674

  9. Indoor Spatial Updating with Reduced Visual Information.

    PubMed

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  10. Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods

    DTIC Science & Technology

    2007-04-01

    physical measurements of impulse response analysis, modulation transfer function (MTF), and noise power spectrum (NPS) (Months 5-12). 1.2.1. Simulate ... added: projection images with simulated impulse and the 1/r2 shading difference. Other system blur and noise issues were not addressed in this paper ... spectrum (NPS), noise-equivalent quanta (NEQ), impulse response, back projection (BP). 1. INTRODUCTION: Digital breast tomosynthesis is a new

  11. Influence of the corneal optical zone on the point-spread function of the human eye

    NASA Astrophysics Data System (ADS)

    Rol, Pascal O.; Parel, Jean-Marie A.

    1992-08-01

    In refractive surgery, a number of surgical techniques have been developed to correct ametropia (refractive errors) of the eye by changing the exterior shape of the cornea. Because the air-cornea interface accounts for about two thirds of the refractive power of the eye, a refractive correction can be obtained by suitably reshaping the cornea. Postoperatively, it is usually observed that the corneal region consists of two or more zones characterized by different optical parameters, exhibiting in particular different focal distances. Under normal circumstances, only the central area of the cornea is involved in the formation of the retinal image. However, if part of the light entering the eye through peripheral portions of the cornea, whose refractive properties differ from those of the central area, can pass the pupil, an out-of-focus `ghost' image may be overlaid on the retina, causing blur. In such a case the resolution and contrast performance of the eye that is expected from a successful operation may be reduced. This study is an attempt to quantify the vision blur as a function of the diameter of the central zone, i.e., the optical zone, which is of importance for vision.

  12. The Influence of Beam Broadening on the Spatial Resolution of Annular Dark Field Scanning Transmission Electron Microscopy.

    PubMed

    de Jonge, Niels; Verch, Andreas; Demers, Hendrix

    2018-02-01

    The spatial resolution of aberration-corrected annular dark field scanning transmission electron microscopy was studied as a function of the vertical position z within a sample. The samples consisted of gold nanoparticles (AuNPs) positioned in different horizontal layers within aluminum matrices of 0.6 and 1.0 µm thickness. The highest resolution was achieved in the top layer, whereas the resolution was reduced by beam broadening for AuNPs deeper in the sample. To examine the influence of the beam broadening, the intensity profiles of line scans over nanoparticles at a given vertical location were analyzed. The experimental data were compared with Monte Carlo simulations, which accurately matched the data. The spatial resolution was also calculated using three different theoretical models of the beam blurring as a function of the vertical position within the sample. One model treated beam blurring as a single scattering event but was found to be inaccurate for larger depths of the AuNPs in the sample. Two models that include estimates for multiple scattering were adapted and evaluated, and these described the data with sufficient accuracy to predict the resolution. The beam broadening depended on z^1.5 in all three models.

  13. Photographic simulation of off-axis blurring due to chromatic aberration in spectacle lenses.

    PubMed

    Doroslovački, Pavle; Guyton, David L

    2015-02-01

    Spectacle lens materials of high refractive index (nd) tend to have high chromatic dispersion (low Abbé number [V]), which may contribute to visual blurring with oblique viewing. A patient who noted off-axis blurring with new high-refractive-index spectacle lenses prompted us to do a photographic simulation of the off-axis aberrations in 3 readily available spectacle lens materials, CR-39 (nd = 1.50), polyurethane (nd = 1.60), and polycarbonate (nd = 1.59). Both chromatic and monochromatic aberrations were found to cause off-axis image degradation. Chromatic aberration was more prominent in the higher-index materials (especially polycarbonate), whereas the lower-index CR-39 had more astigmatism of oblique incidence. It is important to consider off-axis aberrations when a patient complains of otherwise unexplained blurred vision with a new pair of spectacle lenses, especially given the increasing promotion of high-refractive-index materials with high chromatic dispersion. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  14. Face imagery is based on featural representations.

    PubMed

    Lobmaier, Janek S; Mast, Fred W

    2008-01-01

    The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts, configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect in the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.

  15. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion, optical blurring, facial expressions, gender, etc. Motion blurring usually appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be transformed according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method enhances age estimation performance compared with systems that do not employ it. PMID:26334282

  16. Real-time deblurring of handshake blurred images on smartphones

    NASA Astrophysics Data System (ADS)

    Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser

    2015-02-01

    This paper discusses an Android app for the purpose of removing blur that is introduced as a result of handshakes when taking images via a smartphone. The algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image, and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low-rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The eigenvalues extracted from the low-rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm previously developed for the same purpose.
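
    The low-rank approximation at the heart of the method can be sketched with numpy's SVD. How exactly the two images' spectral components are combined is only loosely described in the abstract, so the `combine` step below is a hedged guess, not the published algorithm:

```python
import numpy as np

def low_rank(img, k):
    """Rank-k approximation of a grayscale image via truncated SVD."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def combine(auto_exp, short_exp, k=8):
    """Hypothetical fusion: keep the brightness/contrast carried by the
    low-rank part of the auto-exposure image, and take the sharp detail
    (the high-rank residual) from the short-exposure image."""
    return low_rank(auto_exp, k) + (short_exp - low_rank(short_exp, k))

rng = np.random.default_rng(1)
auto_exp = rng.random((32, 32))   # stand-in for the (possibly blurred) shot
short_exp = rng.random((32, 32))  # stand-in for the short-exposure shot
fused = combine(auto_exp, short_exp, k=8)
```

    Because only a truncated SVD is needed, this avoids any deconvolution, which is the source of the method's efficiency claim.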

  17. Blur-resistant Perimetric Stimuli

    PubMed Central

    Horner, Douglas G.; Dul, Mitchell W.; Swanson, William H.; Liu, Tiffany; Tran, Irene

    2013-01-01

    Purpose To develop perimetric stimuli which are resistant to the effects of peripheral defocus. Methods One eye each was tested on subjects free of eye disease. Experiment 1 assessed spatial frequency, testing 12 subjects at eccentricities from 2° to 7°, using blur levels from 0 D to 3 D for two (Gabor) stimuli (spatial standard deviation (SD) = 0.5°, spatial frequencies of 0.5 and 1.0 cpd). Experiment 2 assessed stimulus size, testing 12 subjects at eccentricities from 4° to 7°, using blur levels 0 D to 6 D, for two Gaussians with SDs of 0.5° and 0.25° and a 0.5 cpd Gabor with SD of 0.5°. Experiment 3 tested 13 subjects at eccentricities from fixation to 27°, using blur levels 0 D to 6 D, for Gabor stimuli at 56 locations; the spatial frequency ranged from 0.14 to 0.50 cpd with location, and SD was scaled accordingly. Results In experiment 1, blur by 3 D caused a small decline in log contrast sensitivity (CS) for the 0.5 cpd stimulus (mean ± SE = −0.09 ± 0.08 log unit) and a larger (t = 7.7, p <0.0001) decline for the 1.0 cpd stimulus (0.37 ± 0.13 log unit). In experiment 2, blur by 6 D caused minimal decline for the larger Gaussian, by −0.17 ± 0.16 log unit, and larger (t >4.5, p < 0.001) declines for the smaller Gaussian (−0.33 ± 0.16 log unit) and the Gabor (−0.36 ± 0.18 log unit). In experiment 3, blur by 6 D caused declines by 0.27 ± 0.05 log unit for eccentricities from 0° to 10°, by 0.20 ± 0.04 log unit for eccentricities from 10° to 20° and 0.13 ± 0.03 log unit for eccentricities from 20°–27°. Conclusions Experiments 1 & 2 allowed us to design stimuli for Experiment 3 that were resistant to effects of peripheral defocus. PMID:23584488

  18. Single image super-resolution reconstruction algorithm based on edge selection

    NASA Astrophysics Data System (ADS)

    Zhang, Yaolan; Liu, Yijun

    2017-05-01

    Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, a great deal of work has concentrated on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also affect the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. When compared with state-of-the-art methods, our method has comparable performance.

  19. Blurring of the public/private divide: the Canadian chapter.

    PubMed

    Flood, Colleen M; Thomas, Bryan

    2010-06-01

    Blurring of public/private divide is occurring in different ways around the world, with differential effects in terms of access and equity. In Canada, one pathway towards privatization has received particular attention: duplicative private insurance, allowing those with the financial means to bypass queues in the public system. We assess recent legal and policy developments on this front, but also describe other trends towards the blurring of public and private in Canada: the reliance on mandated private insurance for pharmaceutical coverage; provincial governments' reliance on public-private partnerships to finance hospitals; and the incorporation of for-profit clinics within the public health care system.

  20. Dynamic accommodation with simulated targets blurred with high order aberrations

    PubMed Central

    Gambra, Enrique; Wang, Yinan; Yuan, Jing; Kruger, Philip B.; Marcos, Susana

    2010-01-01

    High order aberrations have been suggested to play a role in determining the direction of accommodation. We have explored the effect of retinal blur induced by high order aberrations on dynamic accommodation by measuring the accommodative response to sinusoidal variations in accommodative demand (1–3 D). The targets were blurred with 0.3 and 1 μm (for a 3-mm pupil) of defocus, coma, trefoil and spherical aberration. Accommodative gain decreased significantly when 1-μm of aberration was induced. We found a strong correlation between the relative accommodative gain (and phase lag) and the contrast degradation imposed on the target at relevant spatial frequencies. PMID:20600230

  1. Characterization of adaptive statistical iterative reconstruction (ASIR) in low contrast helical abdominal imaging via a transfer function based method

    NASA Astrophysics Data System (ADS)

    Zhang, Da; Li, Xinhua; Liu, Bob

    2012-03-01

    Since the introduction of ASiR, its potential in noise reduction has been reported in various clinical applications. However, the influence of different scan and reconstruction parameters on the trade-off between ASiR's blurring effect and noise reduction in low contrast imaging has not been fully studied. Simple measurements on low contrast images, such as CNR or phantom scores, cannot capture the nuanced nature of this problem. We tackled this topic using a method that compares the performance of ASiR in low contrast helical imaging based on an assumed filter layer on top of the FBP reconstruction. Transfer functions of this filter layer were obtained from the noise power spectra (NPS) of corresponding FBP and ASiR images that share the same scan and reconstruction parameters. 2D transfer functions were calculated as sqrt[NPS_ASiR(u, v)/NPS_FBP(u, v)]. Synthesized ACR phantom images were generated by filtering the FBP images with the transfer functions of specific (FBP, ASiR) pairs and were compared with the ASiR images. It is shown that the transfer functions can predict the deterministic blurring effect of ASiR on low contrast objects, as well as the degree of noise reduction. Using this method, the influence of dose, scan field of view (SFOV), display field of view (DFOV), ASiR level, and Recon Mode on the behavior of ASiR in low contrast imaging was studied. It was found that ASiR level, dose level, and DFOV play more important roles in determining the behavior of ASiR than the other two parameters.
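
    The transfer-function estimate sqrt(NPS_ASiR/NPS_FBP) is stated explicitly in the abstract; here is a numpy sketch with a synthetic constant filter as a sanity check. The `nps` helper is a plain ensemble-averaged periodogram, simpler than a full NPS measurement:

```python
import numpy as np

def nps(noise_images):
    """Ensemble-averaged 2D periodogram of zero-mean noise realizations."""
    f = np.fft.fft2(noise_images, axes=(-2, -1))
    return np.mean(np.abs(f) ** 2, axis=0)

def transfer_function(nps_asir, nps_fbp):
    """H(u, v) = sqrt(NPS_ASiR / NPS_FBP): the assumed linear filter layer."""
    return np.sqrt(nps_asir / np.maximum(nps_fbp, 1e-12))

# Sanity check: noise filtered by a known H must give that H back.
rng = np.random.default_rng(2)
noise_fbp = rng.standard_normal((64, 32, 32))  # 64 "FBP" noise realizations
h_true = 0.5                                   # stand-in smoothing filter
noise_asir = np.real(np.fft.ifft2(np.fft.fft2(noise_fbp) * h_true))
h_est = transfer_function(nps(noise_asir), nps(noise_fbp))
```

    Applying the recovered H(u, v) to FBP images then predicts ASiR's deterministic blurring, which is how the synthesized ACR phantom images were generated.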

  2. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the restored result is often very different from the real image because of noise. To solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Since the information in the gradient domain is better suited to estimating the blur kernel, the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain via the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space can obtain a unique and stable solution during image restoration, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
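
    The L1/L2 gradient prior rewards sparse gradients, which sharp images have and blurred images lack. A minimal sketch of the measure itself (not the paper's full iterative deconvolution; the test images are illustrative):

```python
import numpy as np

def l1_over_l2(img):
    """Normalized sparsity of the image gradients: ||grad I||_1 / ||grad I||_2.

    Sharp images concentrate their gradients on few pixels, giving a low
    ratio; blur spreads the gradients out and raises it, which is what
    makes the ratio usable as a blind-deconvolution penalty.
    """
    g = np.concatenate([np.diff(img, axis=1).ravel(),
                        np.diff(img, axis=0).ravel()])
    return np.abs(g).sum() / np.sqrt((g ** 2).sum())

sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0                    # a single hard vertical edge
kernel = np.ones(5) / 5.0              # horizontal box blur
blurred = np.array([np.convolve(row, kernel, mode="same") for row in sharp])
```

    Minimizing this ratio over candidate sharp images therefore steers the optimization away from blurry solutions, which plain L1 regularization alone cannot do (L1 shrinks under blur as well).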

  3. Assessment of the impact of modeling axial compression on PET image reconstruction.

    PubMed

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. 
For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher frequencies. Modeling the axial compression also achieved a lower coefficient of variation but with an increase of intervoxel correlations. The unmatched projector/backprojector achieved similar contrast values to the matched version at considerably lower reconstruction times, but at the cost of noisier images. For a line source scan, the reconstructions with modeling of the axial compression achieved similar resolution to the span 1 reconstructions. Axial compression applied to PET sinograms was found to have a negligible impact for span values lower than 7. For span values up to 21, the spatial resolution degradation due to the axial compression can be almost completely compensated for by modeling this effect in the system matrix at the expense of considerably larger processing times and higher intervoxel correlations, while retaining the storage benefit of compressed data. For even higher span values, the resolution loss cannot be completely compensated possibly due to an effective null space in the system. The use of an unmatched projector/backprojector proved to be a practical solution to compensate for the spatial resolution degradation at a reasonable computational cost but can lead to noisier images. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
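    The "effective null space" argument for high span values can be illustrated with a toy averaging operator: treating axial compression as averaging groups of adjacent sinogram planes, the SVD shows exactly how many axial components become unrecoverable. This is a simplified stand-in; real michelogram span grouping is more involved.

```python
import numpy as np

def compression_matrix(n_planes, group):
    """Toy axial compression: average consecutive groups of `group` planes."""
    m = n_planes // group
    A = np.zeros((m, n_planes))
    for i in range(m):
        A[i, i * group:(i + 1) * group] = 1.0 / group
    return A

n = 24
null_dims = {}
for group in (1, 2, 4):
    s = np.linalg.svd(compression_matrix(n, group), compute_uv=False)
    null_dims[group] = n - int(np.sum(s > 1e-12))
# grouping `group` planes leaves n - n/group axial components in the null space,
# which no amount of modeling in the system matrix can recover exactly
```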

  4. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality, which would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system: multi-channel sensation, just-noticeable blur, and the contrast sensitivity function, used to detect illumination and color distortion, blur, and low-contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and the generic overall quality were each classified into two categories. Binary classification was implemented by a support vector machine and a decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human visual system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
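    For context, an ROC AUC such as the one reported above can be computed directly from classifier scores via the Mann-Whitney U statistic, without plotting the full curve. A minimal sketch with synthetic quality scores (not the paper's SVM features):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic (ties get half credit)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(1)
# Toy scores: "fair quality" images score higher on average than "poor" ones.
good = rng.normal(1.5, 1.0, 300)
poor = rng.normal(0.0, 1.0, 300)
auc = roc_auc(good, poor)
```

    The AUC equals the probability that a randomly chosen positive outscores a randomly chosen negative, which is why the pairwise-comparison formulation above is exact.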

  5. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). 
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
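    The effect of placing an image-space blurring kernel inside the reconstruction loop can be demonstrated with a 1D MLEM toy. This is a generic sketch of resolution modeling, not the paper's OP-OSEM implementation or its measured spatially-variant kernels.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_blur_matrix(n, sigma):
    x = np.arange(n)
    B = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return B / B.sum(axis=0, keepdims=True)  # columns sum to 1: counts preserved

def mlem(y, A, n_iter=200):
    """Basic MLEM update: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / proj)) / sens
    return x

n = 64
truth = np.zeros(n)
truth[20] = 100.0
truth[40] = 100.0

B = gaussian_blur_matrix(n, sigma=2.0)       # stand-in resolution kernel
data = rng.poisson(B @ truth).astype(float)  # noisy blurred "measurement"

no_model = mlem(data, np.eye(n))   # no resolution model: image stays blurred
modeled = mlem(data, B)            # blur included in the system matrix
```

    Both reconstructions preserve total counts, but the resolution-modeled one concentrates them back into sharper peaks, which is the mechanism behind the improved contrast recovery reported above.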

  6. Reading Motivation and Reading Engagement: Clarifying Commingled Conceptions

    ERIC Educational Resources Information Center

    Unrau, Norman J.; Quirk, Matthew

    2014-01-01

    The constructs of motivation for reading and reading engagement have frequently become blurred and ambiguous in both research and discussions of practice. To address this commingling of constructs, the authors provide a concise review of the literature on motivation for reading and reading engagement and illustrate the blurring of those concepts…

  7. The "Blur" of Federal Information and Services: Implications for University Libraries.

    ERIC Educational Resources Information Center

    Lippincott, Joan K.; Cheverie, Joan F.

    1999-01-01

    Discusses the interrelation of product content with associated services, or "blurring" (Davis and Meyer) and its relation to federal information and services. Highlights include the federal role in facilitating use of government-collected information; infrastructure and policy issues; and implications for university library reference services,…

  8. Adaptive Deblurring of Noisy Images

    DTIC Science & Technology

    2007-10-01

    deblurring filter adaptively by estimating energy of the signal and noise of the image to determine the passband and transition-band of the filter... The deblurring filter design criteria are: a) filter magnitude is less than one at the frequencies where the noise is stronger than the desired signal... filter is able to deblur the image by a desired amount based on the estimated or known blurring function while suppressing the noise in the output

  9. Designing Biomimetic, Dissipative Material Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balazs, Anna C.; Whitesides, George M.; Brinker, C. Jeffrey

    Throughout human history, new materials have been the foundation of transformative technologies: from bronze, paper, and ceramics to steel, silicon, and polymers, each material has enabled far-reaching advances. Today, another new class of materials is emerging—one with both the potential to provide radically new functions and to challenge our notion of what constitutes a “material”. These materials would harvest, transduce, or dissipate energy to perform autonomous, dynamic functions that mimic the behaviors of living organisms. Herein, we discuss the challenges and benefits of creating “dissipative” materials that can potentially blur the boundaries between living and non-living matter.

  10. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use and for possible further optimization.

  11. Multi-frequency interpolation in spiral magnetic resonance fingerprinting for correction of off-resonance blurring.

    PubMed

    Ostenson, Jason; Robison, Ryan K; Zwart, Nicholas R; Welch, E Brian

    2017-09-01

    Magnetic resonance fingerprinting (MRF) pulse sequences often employ spiral trajectories for data readout. Spiral k-space acquisitions are vulnerable to blurring in the spatial domain in the presence of static field off-resonance. This work describes a blurring correction algorithm for use in spiral MRF and demonstrates its effectiveness in phantom and in vivo experiments. Results show that image quality of T1 and T2 parametric maps is improved by application of this correction. This MRF correction has negligible effect on the concordance correlation coefficient and improves coefficient of variation in regions of off-resonance relative to uncorrected measurements. Copyright © 2017 Elsevier Inc. All rights reserved.
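    The essence of multi-frequency off-resonance correction can be sketched in 1D: reconstruct the data demodulated at several candidate frequencies, then combine the results per pixel by interpolating at the local value of a field map. This toy uses a Cartesian readout rather than a spiral, and an assumed-known field map, so it only illustrates the interpolation idea, not the paper's algorithm.

```python
import numpy as np

n = 128
t = np.arange(n) * 4e-5                       # readout sample times (s)
k = np.arange(n)
x = np.arange(n)
E = np.exp(-2j * np.pi * np.outer(k, x) / n)  # toy encoding matrix

truth = np.zeros(n)
truth[30:50] = 1.0
truth[80:100] = 1.0
field = np.zeros(n)
field[64:] = 2000.0                           # off-resonance map (rad/s), assumed known

# each pixel accrues phase exp(i * w(x) * t_j) during the readout
sig = (np.exp(1j * np.outer(t, field)) * E) @ truth

def recon_demod(freq):
    """Reconstruct after demodulating the whole readout at one frequency."""
    return (E.conj().T @ (sig * np.exp(-1j * freq * t))) / n

freqs = np.array([0.0, 1000.0, 2000.0])
stack = np.stack([recon_demod(f) for f in freqs])    # (n_freqs, n)

# per-pixel interpolation of the frequency-binned images at the local field value
mfi = np.array([
    np.interp(field[i], freqs, stack[:, i].real)
    + 1j * np.interp(field[i], freqs, stack[:, i].imag)
    for i in range(n)
])

err_single = np.linalg.norm(recon_demod(0.0).real - truth)
err_mfi = np.linalg.norm(mfi.real - truth)
```

    A single-frequency reconstruction leaves the off-resonant region shifted and blurred, while the per-pixel interpolation recovers both regions at the cost of small residual cross-talk.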

  12. Motion-blur-compensated structural health monitoring system for tunnels at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Ishikawa, Masatoshi

    2017-04-01

    High-quality images of tunnel surfaces are necessary for visual judgment of abnormal parts. Hence, we propose a vehicle-mounted monitoring system in which motion blur is compensated by the back-and-forth motion of a galvanometer mirror that offsets the vehicle speed, prolonging the exposure time and capturing sharp images including detailed textures. In experiments with the vehicle-mounted system, we confirmed significant improvements in image quality for ordered black-and-white stripes and cracks a few millimeters in size, by means of motion-blur compensation and prolonged exposure time, at the maximum speed allowed in Japan in a standard highway tunnel.

  13. Blurring the Boundaries between School and Community: Implementing Connected Learning Principles in English Classrooms

    ERIC Educational Resources Information Center

    Cartun, Ashley; Penuel, William R.; West-Puckett, Stephanie

    2017-01-01

    In participatory cultures, the lines between producers and consumers of text are blurred, and communities emerge that are based on shared interest and peer support. Although literacy scholarship has mostly focused on youth engagement and literacy practices within online participatory cultures, scholars in the learning sciences investigate these…

  14. Blurring the Boundaries of Public and Private Education in Brazil

    ERIC Educational Resources Information Center

    Akkari, Abdeljalil

    2013-01-01

    A typical analysis of the privatization of education in Latin America focuses on private sector development at the expense of public education. In this paper, I propose a different view that will highlight the blurring of boundaries between public and private education in Brazil. This confusion perpetuates the historical duality of the education…

  15. Video surveillance with speckle imaging

    DOEpatents

    Carrano, Carmen J [Livermore, CA; Brase, James M [Pleasanton, CA

    2007-07-17

    A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.

  16. Quantitative assessment of image motion blur in diffraction images of moving biological cells

    NASA Astrophysics Data System (ADS)

    Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua

    2016-02-01

    Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with the polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insights into the dependence of diffraction images on MB and allow significant improvement in rapid biological cell assays with the p-DIFC method.
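    The bandwidth-based quantification described above can be sketched generically: blur along the motion direction concentrates spectral energy at low frequencies, so a fixed-fraction bandwidth measured along that axis shrinks. A toy version with synthetic images (not p-DIFC data):

```python
import numpy as np

rng = np.random.default_rng(3)

def bandwidth_along_axis(img, axis, frac=0.95):
    """Frequency (cycles/pixel) below which `frac` of the mean power lies along `axis`."""
    spec = np.abs(np.fft.rfft(img, axis=axis)) ** 2
    p = spec.mean(axis=1 - axis)   # average 1D power spectrum along the chosen axis
    p[0] = 0.0                     # ignore DC
    c = np.cumsum(p) / p.sum()
    freqs = np.fft.rfftfreq(img.shape[axis])
    return freqs[np.searchsorted(c, frac)]

img = rng.standard_normal((128, 128))
kernel = np.ones(9) / 9.0          # box blur mimicking motion along axis 1
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

bw_sharp = bandwidth_along_axis(img, axis=1)
bw_blurred = bandwidth_along_axis(blurred, axis=1)
```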

  17. Imaging model for the scintillator and its application to digital radiography image enhancement.

    PubMed

    Wang, Qian; Zhu, Yining; Li, Hongwei

    2015-12-28

    Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution. By analyzing the radiative transfer process of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. Moreover, the associated blurring effect is also considered and described as a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically as well as effectively eliminate blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
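    When the PSF is known rather than estimated blindly, the deblurring step reduces to a standard deconvolution. A minimal non-blind Wiener sketch (a simplified stand-in for the blind approach used in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def wiener_deconv(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with a known PSF and noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered at origin after ifftshift
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.fft.ifft2(W * G).real

n = 64
truth = np.zeros((n, n))
truth[24:40, 24:40] = 1.0

# Gaussian scintillator-style PSF, centered and normalized
y, x = np.mgrid[:n, :n]
psf = np.exp(-(((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2.0 * 2.0 ** 2)))
psf /= psf.sum()

blurred = np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))).real
blurred += 0.01 * rng.standard_normal((n, n))

restored = wiener_deconv(blurred, psf)
err_blurred = np.linalg.norm(blurred - truth)
err_restored = np.linalg.norm(restored - truth)
```

    The `nsr` constant caps noise amplification at frequencies where the PSF response is weak; it plays the role that the regularization terms play in the paper's blind formulation.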

  18. A no-reference video quality assessment metric based on ROI

    NASA Astrophysics Data System (ADS)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. The Gaussian kernel function was used to extract human density maps of the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric achieves a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics that measure all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between the subjective mean opinion score (MOS) and the objective scores.

  19. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study.

    PubMed

    Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning

    2015-01-01

    The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells, and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the thickness of the scintillator should be a few micrometers or even less, because it strongly affects the resolution. However, it is difficult to make the scintillator so thin, and in addition a thin scintillator may greatly reduce the efficiency of photon collection. In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first develop equation sets by deriving the relationship between the high-resolution image generated by the scintillator and the image degraded by defocus blur, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. By using a 20 μm thick mismatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental results showed that the proposed algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. The proposed method is shown to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded image, so there is room for improvement, and a corresponding denoising algorithm is worth further study and discussion.
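    A minimal POCS-flavored recovery loop can be sketched in 1D with a known blur: alternate a Landweber data-consistency step with projection onto the nonnegativity constraint set. This is a simplified illustration of the alternating-projection idea; the paper's method additionally uses a total variation term.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 64
truth = np.zeros(n)
truth[20:30] = 1.0
truth[44:50] = 0.8

# circulant Gaussian "defocus" blur matrix
idx = np.arange(n)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  n - np.abs(idx[:, None] - idx[None, :]))
H = np.exp(-0.5 * (dist / 1.5) ** 2)
H /= H.sum(axis=1, keepdims=True)

g = H @ truth + 0.005 * rng.standard_normal(n)  # blurred, noisy measurement

# Alternate a data-consistency gradient step with projection onto {x >= 0}
x = np.zeros(n)
step = 1.0 / np.linalg.norm(H, 2) ** 2
for _ in range(500):
    x = x + step * (H.T @ (g - H @ x))  # move toward the data-consistency set
    x = np.clip(x, 0.0, None)           # project onto the nonnegativity set

err_blurred = float(np.linalg.norm(g - truth))
err_recovered = float(np.linalg.norm(x - truth))
```

    The nonnegativity projection suppresses the ringing that plain gradient iterations introduce, which is the practical benefit of the convex-set formulation.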

  20. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

    PubMed Central

    2015-01-01

    Background The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells, and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the thickness of the scintillator should be a few micrometers or even less, because it strongly affects the resolution. However, it is difficult to make the scintillator so thin, and in addition a thin scintillator may greatly reduce the efficiency of photon collection. Methods In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first develop equation sets by deriving the relationship between the high-resolution image generated by the scintillator and the image degraded by defocus blur, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results By using a 20 μm thick mismatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental results showed that the proposed algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. Conclusions The proposed method is shown to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded image, so there is room for improvement, and a corresponding denoising algorithm is worth further study and discussion. PMID:25602532

  1. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

    Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired using a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian-matrix-filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In this pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth.
Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method using kernels estimated by the sparse component.
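    The low-rank plus sparse decomposition step can be sketched with a toy alternating scheme: hard-rank truncation for the background and soft-thresholding for the sparse component. The paper's LDDL additionally learns a blur kernel from the sparse component via dictionary learning, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

def lowrank_sparse_split(D, rank, lam, n_iter=50):
    """Alternate: L = best rank-`rank` fit of D - S; S = soft-threshold of D - L."""
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Synthetic "frame": smooth rank-1 background plus sparse bright "vessels"
m, n = 60, 40
L_true = np.outer(np.linspace(1.0, 2.0, m), np.linspace(1.0, 2.0, n))
mask = rng.random((m, n)) < 0.03
S_true = np.where(mask, 10.0, 0.0)

L_hat, S_hat = lowrank_sparse_split(L_true + S_true, rank=1, lam=0.5)
recall = float((S_hat[mask] > 5.0).mean())
rel_err_L = float(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```

    The sparse component isolates the small bright structures (here, the synthetic "vessels"), which is why estimating the blur kernel from it worked best in the study above.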

  2. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    NASA Astrophysics Data System (ADS)

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. 
Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2 backgrounds = 120 total conditions). Based on the observer model results, the dose reduction potential of SAFIRE was computed and compared between the uniform and textured phantom. The dose reduction potential of SAFIRE was found to be 23% based on the uniform phantom and 17% based on the textured phantom. This discrepancy demonstrates the need to consider background texture when assessing non-linear reconstruction algorithms.
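    A channelized Hotelling observer can be sketched as follows, with generic Gaussian channels as stand-ins for a practical channel set (e.g. Gabor or Laguerre-Gauss) and a synthetic detection task rather than the phantom data:

```python
import numpy as np

rng = np.random.default_rng(7)

def cho_detectability(imgs_absent, imgs_present, channels):
    """Channelized Hotelling observer detectability d' from sample images."""
    v0 = imgs_absent.reshape(len(imgs_absent), -1) @ channels    # channel outputs
    v1 = imgs_present.reshape(len(imgs_present), -1) @ channels
    dbar = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(S, dbar)                                 # Hotelling template
    return float(np.sqrt(dbar @ w))

n = 32
y, x = np.mgrid[:n, :n] - n // 2
signal = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))  # low-contrast Gaussian signal

# simple radial Gaussian channels of increasing width
widths = [2.0, 4.0, 8.0]
channels = np.stack(
    [np.exp(-(x ** 2 + y ** 2) / (2 * w ** 2)).ravel() for w in widths], axis=1
)

absent = rng.standard_normal((200, n, n))   # white-noise backgrounds
present = absent + signal                   # paired noise for a compact demo
dprime = cho_detectability(absent, present, channels)
```

    In a study like the one above, d' is estimated per condition (contrast, dose, reconstruction, background), and dose-reduction potential is read off by matching d' across dose levels.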

  3. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm with the learning strategy outperforms dual-domain algorithms without a learned regularization model.

  4. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  5. MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.

    PubMed

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. 
Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
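    The image-space variant of rigid-body motion correction that the phantom comparison refers to can be sketched as follows: undo each frame's estimated rigid motion, then average. This is a hypothetical image-space illustration, not the scanner's LOR-space algorithm; the (angle, shift) motion parameterization and all values are assumptions for the example.

    ```python
    import numpy as np
    from scipy.ndimage import rotate, shift

    def motion_correct(frames, motions):
        """Image-space rigid-body MC sketch: for each frame, invert its
        estimated in-plane rotation (degrees) and (dy, dx) translation,
        then average the realigned frames."""
        out = np.zeros_like(frames[0], dtype=float)
        for frame, (angle_deg, dy, dx) in zip(frames, motions):
            undone = rotate(frame, -angle_deg, reshape=False, order=1)
            undone = shift(undone, (-dy, -dx), order=1)
            out += undone
        return out / len(frames)

    # toy example: a square that translated down by 2 pixels in frame 2
    img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0
    frames = [img, shift(img, (2, 0), order=1)]
    motions = [(0.0, 0, 0), (0.0, 2, 0)]   # MRI-derived estimates (assumed known)
    corrected = motion_correct(frames, motions)
    ```

    The real method applies the inverse transforms to prompts, randoms, and sensitivity data in LOR space before rebinning, which preserves quantitative accuracy better than post-reconstruction realignment.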

  6. The Role of Clarity and Blur in Guiding Visual Attention in Photographs

    ERIC Educational Resources Information Center

    Enns, James T.; MacDonald, Sarah C.

    2013-01-01

    Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…

  7. Patent Donations: Making Use of the Gift of Technology

    ERIC Educational Resources Information Center

    Talnack, G. Marie

    2010-01-01

    The lines between basic and applied research and the sectors of the U.S. economy responsible for each type have begun to blur. No better case for the blurring of these lines and the benefits of technology transfer among research institutions can be provided than the recent phenomenon of corporate patent donations to non-profit research…

  8. The effect of monocular target blur on simulated telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Liu, Andrew; Stark, Lawrence

    1991-01-01

    A simulation involving three types of telerobotic tasks that require information about the spatial position of objects is reported. It is demonstrated that refractive errors in the helmet-mounted stereo display system can affect performance in all three types of telerobotic tasks. The results of two sets of experiments indicate that monocular target blur of two diopters or more degrades stereo display performance to the level of monocular displays. This parallels the results of psychophysical experiments examining the effect of blur on stereoacuity, and it is suggested that other psychophysical experimental results could be used to predict operator performance for other telerobotic tasks. These findings indicate that moderate levels of visual degradation that affect the operator's stereoacuity may eliminate the performance advantage of stereo displays.

  9. Comparison of the ocular wavefront aberration between pharmacologically-induced and stimulus-driven accommodation.

    PubMed

    Plainis, S; Plevridi, E; Pallikaris, I G

    2009-05-01

    To compare the ocular wavefront aberration between pharmacologically- and stimulus-driven accommodation in phakic eyes of young subjects. The aberration structure of the tested eye when accommodating was measured using the Complete Ophthalmic Analysis System (COAS; AMO WaveFront Sciences, Albuquerque, NM, USA). It was used in conjunction with a purposely-modified Badal optometer to allow blur-driven accommodation to be stimulated by a high contrast letter E with a vergence range between +0.84 D and -8.00 D. Pharmacological accommodation was induced with one drop of pilocarpine 4%. Data from six subjects (age range: 23-36 years) with dark irides were collected. No correlation was found between the maximal levels of accommodative response achieved with an 8 D blur-driven stimulus and pharmacological stimulation. Pharmacological accommodation varied considerably among subjects: maximum accommodation, achieved within 38-85 min following application of pilocarpine, ranged from 2.7 D to 10.0 D. Furthermore, although the changes of spherical aberration and coma as a function of accommodation were indistinguishable between the two methods for low levels of response, a characteristic break in the pattern of aberration occurred at higher levels of pilocarpine-induced accommodation. This probably resulted from differences in the time course of biometric changes occurring with the two methods. Measuring the pilocarpine-induced accommodative response at only one time point after its application may lead to misleading results. The considerable inter-individual differences in the time course of drug-induced accommodative response and its magnitude may lead to overestimation or underestimation of the corresponding amplitude of normal, blur-driven accommodation. Stimulating accommodation by topical application of pilocarpine is inappropriate for evaluating the efficacy of 'accommodating' IOLs.

  10. Real-time image restoration for iris recognition systems.

    PubMed

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques achieve high levels of accuracy because they use the unique pattern of the human iris, which has a very large number of degrees of freedom. However, because conventional iris cameras have a small depth of field (DOF), input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those achieved without restoration or with previous iris-restoration methods.
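    A constrained least-squares (CLS) restoration filter of the kind referenced above can be sketched in the frequency domain. This is the generic textbook CLS filter with a Laplacian smoothness constraint, not the authors' implementation; the test image, binomial PSF, and regularization weight `gamma` are illustrative assumptions.

    ```python
    import numpy as np

    def cls_restore(blurred, psf, gamma=1e-3):
        """Constrained least-squares restoration in the frequency domain:
        F = conj(H) * G / (|H|^2 + gamma * |P|^2),
        where P is a discrete Laplacian that penalizes noise amplification;
        gamma plays the role of the noise-regularization weight."""
        rows, cols = blurred.shape
        H = np.fft.fft2(psf, s=(rows, cols))        # blur transfer function
        G = np.fft.fft2(blurred)
        lap = np.zeros((rows, cols))
        lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]  # Laplacian kernel
        P = np.fft.fft2(lap)
        F = np.conj(H) * G / (np.abs(H)**2 + gamma * np.abs(P)**2)
        return np.real(np.fft.ifft2(F))

    # demo: blur a square synthetically (circular convolution, matching the model)
    img = np.zeros((32, 32)); img[10:20, 10:20] = 1.0
    psf = np.outer([1., 4., 6., 4., 1.], [1., 4., 6., 4., 1.]); psf /= psf.sum()
    H = np.fft.fft2(psf, s=img.shape)
    blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
    restored = cls_restore(blurred, psf)
    ```

    Larger `gamma` suppresses noise amplification at the cost of residual blur, which is why the paper adapts this weight to the estimated amount of defocus.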

  11. Characterization of the Structure and Function of the Normal Human Fovea Using Adaptive Optics Scanning Laser Ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Putnam, Nicole Marie

    In order to study the limits of spatial vision in normal human subjects, it is important to look at and near the fovea. The fovea is the specialized part of the retina, the light-sensitive multi-layered neural tissue that lines the inner surface of the human eye, where the cone photoreceptors are smallest (approximately 2.5 microns or 0.5 arcmin) and cone density reaches a peak. In addition, there is a 1:1 mapping from the photoreceptors to the brain in this central region of the retina. As a result, the best spatial sampling is achieved in the fovea and it is the retinal location used for acuity and spatial vision tasks. However, vision is typically limited by the blur induced by the normal optics of the eye and clinical tests of foveal vision and foveal imaging are both limited due to the blur. As a result, it is unclear what the perceptual benefit of extremely high cone density is. Cutting-edge imaging technology, specifically Adaptive Optics Scanning Laser Ophthalmoscopy (AOSLO), can be utilized to remove this blur, zoom in, and as a result visualize individual cone photoreceptors throughout the central fovea. This imaging combined with simultaneous image stabilization and targeted stimulus delivery expands our understanding of both the anatomical structure of the fovea on a microscopic scale and the placement of stimuli within this retinal area during visual tasks. The final step is to investigate the role of temporal variables in spatial vision tasks since the eye is in constant motion even during steady fixation. In order to learn more about the fovea, it becomes important to study the effect of this motion on spatial vision tasks. This dissertation steps through many of these considerations, starting with a model of the foveal cone mosaic imaged with AOSLO. We then use this high resolution imaging to compare anatomical and functional markers of the center of the normal human fovea. 
Finally, we investigate the role of natural and manipulated fixational eye movements in foveal vision, specifically looking at a motion detection task, contrast sensitivity, and image fading.

  12. Enacting Work Space in the Flow: Sensemaking about Mobile Practices and Blurring Boundaries

    ERIC Educational Resources Information Center

    Davis, Loni

    2013-01-01

    An increasing portion of the contemporary workforce is using mobile devices to create new kinds of work-space flows characterized by emergence, liquidity, and the blurring of all kinds of boundaries. This changes the traditional notion of the term "workplace." The present study focuses on how people enact and make sense of new work space…

  13. 1. "X15 RUN UP AREA 230." A somewhat blurred, very ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. "X-15 RUN UP AREA 230." A somewhat blurred, very low altitude low oblique view to the northwest. This view predates construction of observation bunkers. Photo no. "14,696 58 A-AFFTC 17 NOV 58." - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA

  14. Assessment "as" Learning: Blurring the Boundaries of Assessment and Learning for Theory, Policy and Practice

    ERIC Educational Resources Information Center

    Dann, Ruth

    2014-01-01

    This paper explores assessment and learning in a way that blurs their boundaries. The notion of assessment "as" learning (AaL) is offered as an aspect of formative assessment (assessment for learning). It considers how pupils self-regulate their own learning, and in so doing make complex decisions about how they use feedback and engage…

  15. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting requires preserving important information with minimal edge distortion while increasing or decreasing image size. Most existing content-aware methods perform well; however, two problems remain: slight distortion at object edges and structure distortion in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, considering both image content and image structure. The paper proposes a new criterion: structure preservation in non-salient areas. Observation and image analysis show that slight blur generally exists at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in saliency computation for a balanced image retargeting result. To preserve structure information in non-salient areas, a salient edge map is introduced into the Seam Carving process in place of field-based saliency computation. The derivative saliency from the x- and y-directions avoids redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.

  16. Photoresist and stochastic modeling

    NASA Astrophysics Data System (ADS)

    Hansen, Steven G.

    2018-01-01

    Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on blurred image log-slope (ILS) for lines/spaces and normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected blurred ILS:dose-sensitivity relationship failure in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.
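    The blurred image log-slope (ILS) that drives such an empirical model can be illustrated with a simple 1D sketch: blur a sinusoidal line/space aerial image with a Gaussian (modeling resist blur) and evaluate d(ln I)/dx at a nominal edge. The pitch, blur widths, and edge position below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def blurred_ils(intensity, x, edge_x, blur_sigma):
        """Blurred image log-slope: convolve the aerial image with a Gaussian
        of width blur_sigma, then evaluate d(ln I)/dx at the nominal edge."""
        dx = x[1] - x[0]
        half = int(4 * blur_sigma / dx) + 1
        t = np.arange(-half, half + 1) * dx
        kern = np.exp(-0.5 * (t / blur_sigma) ** 2)
        kern /= kern.sum()                            # normalized Gaussian kernel
        blurred = np.convolve(intensity, kern, mode="same")
        log_slope = np.gradient(np.log(blurred), dx)  # d(ln I)/dx
        return np.interp(edge_x, x, log_slope)

    # illustrative line/space aerial image: pitch 100 nm, 80% modulation
    x = np.arange(0.0, 400.0, 1.0)                    # position in nm
    aerial = 0.5 + 0.4 * np.cos(2 * np.pi * x / 100.0)
    ils_sharp   = blurred_ils(aerial, x, edge_x=225.0, blur_sigma=2.0)
    ils_blurred = blurred_ils(aerial, x, edge_x=225.0, blur_sigma=10.0)
    ```

    Increasing the blur reduces the magnitude of the ILS, which in such models directly translates into larger local CD variation at fixed dose sensitivity.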

  17. Deblurring for spatial and temporal varying motion with optical computing

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Xue, Dongfeng; Hui, Zhao

    2016-05-01

    A way to estimate and remove spatially and temporally varying motion blur is proposed, which is based on an optical computing system. The translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The work scheme of the JTC system is designed to keep switching between the Cartesian coordinate system and polar coordinate system in different time intervals with a ping-pong handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain a translation motion vector with optical computing speed. In the pong interval, the JTC system works in the polar coordinate system. The rotation motion is transformed to translation motion through coordinate transformation. Then the rotation motion vector can also be obtained from the JTC instantaneously. To deal with continuous spatially variant motion blur, submotion vectors based on the projective motion path blur model are proposed. The submotion vector model is more effective and accurate at modeling spatially variant motion blur than conventional methods. Simulation and real experiment results demonstrate its overall effectiveness.

  18. Focus information is used to interpret binocular images

    PubMed Central

    Hoffman, David M.; Banks, Martin S.

    2011-01-01

    Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139

  19. The Effects of Explosive Blast as Compared to Post-Traumatic Stress Disorder on Brain Function and Structure

    DTIC Science & Technology

    2011-04-01

    fractional anisotropy measures of axonal tracts derived from diffusion tensor imaging (DTI). Nine soldiers who incurred a blast-related mTBI during... nauseous for 24 to 36 h, blurred vision, tingling in legs, poor coordination for 3 h. Yes, for unknown period. None. 5. Subject was a gunner in a Humvee... pairs of distant electrodes in all frequency bands. DTI acquisition and processing: diffusion-weighted images were acquired on a 1.5T Philips Achieva

  20. A variable-temperature nanostencil compatible with a low-temperature scanning tunneling microscope/atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steurer, Wolfram, E-mail: wst@zurich.ibm.com; Gross, Leo; Schlittler, Reto R.

    2014-02-15

    We describe a nanostencil lithography tool capable of operating at variable temperatures down to 30 K. The setup is compatible with a combined low-temperature scanning tunneling microscope/atomic force microscope located within the same ultra-high-vacuum apparatus. The lateral movement capability of the mask allows the patterning of complex structures. To demonstrate operational functionality of the tool and estimate temperature drift and blurring, we fabricated LiF and NaCl nanostructures on Cu(111) at 77 K.

  1. A variable-temperature nanostencil compatible with a low-temperature scanning tunneling microscope/atomic force microscope.

    PubMed

    Steurer, Wolfram; Gross, Leo; Schlittler, Reto R; Meyer, Gerhard

    2014-02-01

    We describe a nanostencil lithography tool capable of operating at variable temperatures down to 30 K. The setup is compatible with a combined low-temperature scanning tunneling microscope/atomic force microscope located within the same ultra-high-vacuum apparatus. The lateral movement capability of the mask allows the patterning of complex structures. To demonstrate operational functionality of the tool and estimate temperature drift and blurring, we fabricated LiF and NaCl nanostructures on Cu(111) at 77 K.

  2. Integrative Research on Organic Matter Cycling Across Aquatic Gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Nicholas D.; Bianchi, Thomas S.; Medeiros, Patricia M.

    The goal of this research topic was to motivate innovative research that blurs traditional disciplinary and geographical boundaries. As the scientific community continues to gain momentum and knowledge about how the natural world functions, it is increasingly important that we recognize the interconnected nature of earth systems and embrace the complexities of ecosystem transitions. We are pleased to present this body of work, which embodies the spirit of research spanning across the terrestrial-aquatic continuum, from mountains to the sea.

  3. Digital Image Deblurring by Nonlinear Homomorphic Filtering

    DTIC Science & Technology

    1974-08-01

    Noise; Film Grain Noise; Impulse Noise; Noise and the Reflection Scanner. ... b(x,y), n(x,y) (Diagram 1); a(x,y) is the impulse response, or point-spread function, of the system, and is assumed to be unknown. All noise ... deblurring problem. This inadequacy results from the fact that the high-frequency noise floor in the power spectrum of a blurred image is about 60 db

  4. Forward model with space-variant of source size for reconstruction on X-ray radiographic image

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan

    2018-03-01

    The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for the radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.
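    A heavily simplified 1D version of such a forward model, assuming space-invariant source and detector blur (the paper's model is space-variant), treats the measured profile as the ideal transmission convolved successively with normalized source and detector kernels. All kernels and the test edge below are illustrative assumptions.

    ```python
    import numpy as np

    def blurred_radiograph(transmission, source_kernel, detector_kernel):
        """Minimal 1D forward-model sketch: smear an ideal transmission
        profile first by the (magnified) areal source spot and then by
        the detector response -- two successive normalized convolutions."""
        out = np.convolve(transmission, source_kernel / source_kernel.sum(),
                          mode="same")
        return np.convolve(out, detector_kernel / detector_kernel.sum(),
                           mode="same")

    edge = np.concatenate([np.ones(50), np.full(50, 0.2)])  # ideal step edge
    source = np.array([1., 2., 3., 2., 1.])                 # assumed source spot
    detector = np.array([1., 2., 1.])                       # assumed detector PSF
    measured = blurred_radiograph(edge, source, detector)
    ```

    Density reconstruction then amounts to inverting this forward map under a regularizing constraint, which is where the Constrained Conjugate Gradient method enters.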

  5. Comparisons of NIF convergent ablation simulations with radiograph data.

    PubMed

    Olson, R E; Hicks, D G; Meezan, N B; Koch, J A; Landen, O L

    2012-10-01

    A technique for comparing simulation results directly with radiograph data from backlit capsule implosion experiments will be discussed. Forward Abel transforms are applied to the kappa*rho profiles of the simulation. These provide the transmission ratio (optical depth) profiles of the simulation. Gaussian and top hat blurs are applied to the simulated transmission ratio profiles in order to account for the motion blurring and imaging slit resolution of the experimental measurement. Comparisons between the simulated transmission ratios and the radiograph data lineouts are iterated until a reasonable backlighter profile is obtained. This backlighter profile is combined with the blurred, simulated transmission ratios to obtain simulated intensity profiles that can be directly compared with the radiograph data. Examples will be shown from recent convergent ablation (backlit implosion) experiments at the NIF.
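    The forward Abel transform plus blur pipeline described here can be sketched numerically. The radial profile (a uniform sphere), grid, rectangle-rule integration, and blur width are illustrative assumptions, not the analysis code used for the NIF data.

    ```python
    import numpy as np

    def forward_abel(f, r):
        """Forward Abel transform F(y) = 2 * integral_y^inf f(r) r / sqrt(r^2 - y^2) dr,
        mapping a radial kappa*rho profile to an optical-depth profile along
        chords. Simple rectangle rule on a uniform grid."""
        dr = r[1] - r[0]
        F = np.zeros_like(f)
        for i in range(len(r) - 1):
            rr = r[i + 1:]                       # integrate strictly above y = r[i]
            F[i] = 2.0 * np.sum(f[i + 1:] * rr / np.sqrt(rr**2 - r[i]**2)) * dr
        return F

    def gaussian_blur(profile, dx, sigma):
        """Gaussian blur modeling motion blurring / slit resolution."""
        half = int(4 * sigma / dx) + 1
        t = np.arange(-half, half + 1) * dx
        k = np.exp(-0.5 * (t / sigma) ** 2)
        return np.convolve(profile, k / k.sum(), mode="same")

    r = np.linspace(0.0, 2.0, 2001)
    kappa_rho = (r < 1.0).astype(float)          # uniform sphere: kappa*rho = 1 inside r = 1
    optical_depth = forward_abel(kappa_rho, r)   # analytic result: 2*sqrt(1 - y^2)
    blurred_depth = gaussian_blur(optical_depth, r[1] - r[0], sigma=0.02)
    ```

    A top-hat blur for the imaging slit would simply replace the Gaussian kernel with a normalized boxcar before comparing against radiograph lineouts.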

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans.
A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.

  7. Registration of Large Motion Blurred CMOS Images

    DTIC Science & Technology

    2017-08-28

    raju@ee.iitm.ac.in - Institution: Indian Institute of Technology (IIT) Madras, India - Mailing Address: Room ESB 307c, Dept. of Electrical ... AFRL-AFOSR-JP-TR-2017-0066, Registration of Large Motion Blurred CMOS Images, Ambasamudram Rajagopalan, Indian Institute of Technology Madras, Final... Sardar Patel Road, Chennai, 600036

  8. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error with R -0.58 and -0.19 for both studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  9. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise are correlated with δ using Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R −0.58 and −0.19 for both studies, respectively.
Conclusions: Cine EPID image acquisition at the frame rate of at least 4.29 Hz is recommended. Motion blurring in the images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.« less
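
The frame-averaging and correlation analysis described in this record can be sketched in a few lines of Python. This is a toy illustration with hypothetical helper names, not the authors' code:

```python
import statistics

def average_frames(frames, n):
    """Average every n consecutive frames, emulating acquisition at 1/n of
    the native frame rate (e.g. 12.87 Hz averaged by 3 gives 4.29 Hz)."""
    return [sum(frames[i:i + n]) / n for i in range(0, len(frames) - n + 1, n)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient R between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

In the study, `pearson_r` would be applied to per-frame blur measures (or noise measures) against the tracking error δ.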

  10. Postural stability changes in the elderly with cataract simulation and refractive blur.

    PubMed

    Anand, Vijay; Buckley, John G; Scally, Andy; Elliott, David B

    2003-11-01

    To determine the influence of cataractous and refractive blur on postural stability and limb-load asymmetry (LLA) and to establish how postural stability changes with the spatial frequency and contrast of the visual stimulus. Thirteen elderly subjects (mean age, 70.76 +/- 4.14 [SD] years) with no history of falls and normal vision were recruited. Postural stability was determined as the root mean square (RMS) of the center of pressure (COP) signal in the anterior-posterior (A-P) and medial-lateral directions and LLA was determined as the ratio of the average body weight placed on the more-loaded limb to the less-loaded limb, recorded during a 30-second period. Data were collected under normal standing conditions and with somatosensory system input disrupted. Measurements were repeated with four visual targets with high (8 cyc/deg) or low (2 cyc/deg) spatial frequency and high (Weber contrast, approximately 95%) or low (Weber contrast, approximately 25%) contrast. Postural stability was measured under conditions of binocular refractive blur of 0, 1, 2, 4, and 8 D and with cataract simulation. The data were analyzed in a population-averaged linear model. The cataract simulation caused significant increases in postural instability equivalent to that caused by 8-D blur conditions, and its effect was greater when the input from the somatosensory system was disrupted. High spatial frequency targets increased postural instability. Refractive blur, cataract simulation, or eye closure had no effect on LLA. Findings indicate that cataractous and refractive blur increase postural instability, and show why the elderly, many of whom have poor vision along with musculoskeletal and central nervous system degeneration, are at greater risk of falling. Findings also highlight that changes in contrast sensitivity rather than resolution changes are responsible for increasing postural instability. 
Providing low spatial frequency information in certain environments may be useful in maintaining postural stability. Correcting visual impairment caused by uncorrected refractive error and cataracts could be a useful intervention strategy to help prevent falls and fall-related injuries in the elderly.
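
The two summary measures used in this record, RMS sway of the COP signal and limb-load asymmetry, are simple statistics; a minimal sketch (hypothetical helper names, not the authors' analysis code):

```python
import math

def rms_sway(cop_signal):
    """Root mean square of a centre-of-pressure signal about its mean."""
    mean = sum(cop_signal) / len(cop_signal)
    return math.sqrt(sum((s - mean) ** 2 for s in cop_signal) / len(cop_signal))

def limb_load_asymmetry(load_a, load_b):
    """LLA: ratio of mean load on the more-loaded limb to the less-loaded limb."""
    return max(load_a, load_b) / min(load_a, load_b)
```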

  11. Accommodation and vergence response gains to different near cues characterize specific esotropias.

    PubMed

    Horwood, Anna M; Riddell, Patricia M

    2013-09-01

    To describe preliminary findings of how the profile of the use of blur, disparity, and proximal cues varies between non-strabismic groups and those with different types of esotropia. This was a case control study. A remote haploscopic photorefractor measured simultaneous convergence and accommodation to a range of targets containing all combinations of binocular disparity, blur, and proximal (looming) cues. Thirteen constant esotropes, 16 fully accommodative esotropes, and 8 convergence excess esotropes were compared with age- and refractive error-matched controls and 27 young adult emmetropic controls. All wore full refractive correction if not emmetropic. Response AC/A and CA/C ratios were also assessed. Cue use differed between the groups. Even esotropes with constant suppression and no binocular vision (BV) responded to disparity in cues. The constant esotropes with weak BV showed trends for more stable responses and better vergence and accommodation than those without any BV. The accommodative esotropes made less use of disparity cues to drive accommodation (p = 0.04) and more use of blur to drive vergence (p = 0.008) than controls. All esotropic groups failed to show the strong bias for better responses to disparity cues found in the controls, with convergence excess esotropes favoring blur cues. AC/A and CA/C ratios existed in an inverse relationship in the different groups. Accommodative lag of > 1.0 D at 33 cm was common (46%) in the pooled esotropia groups compared with 11% in typical children (p = 0.05). Esotropic children use near cues differently from matched non-esotropic children in ways characteristic to their deviations. Relatively higher weighting for blur cues was found in accommodative esotropia compared to matched controls.

  12. Dynamic accommodation responses following adaptation to defocus.

    PubMed

    Cufflin, Matthew P; Mallen, Edward A H

    2008-10-01

    Adaptation to defocus is known to influence the subjective sensitivity to blur in both emmetropes and myopes. Blur is a major contributing factor in the closed-loop dynamic accommodation response. Previous investigations have examined the magnitude of the accommodation response following blur adaptation. We have investigated whether a period of blur adaptation influences the dynamic accommodation response to step and sinusoidal changes in target vergence. Eighteen subjects (six emmetropes, six early onset myopes, and six late onset myopes) underwent 30 min of adaptation to 0.00 D (control), +1.00 D or +3.00 D myopic defocus. Following this adaptation period, accommodation responses to a 2.00 D step change and 2.00 D sinusoidal change (0.2 Hz) in target vergence were recorded continuously using an autorefractor. Adaptation to defocus failed to influence accommodation latency times, but did influence response times to a step change in target vergence. Adaptation to both +1.00 and +3.00 D induced significant increases in response times (p = 0.002 and p = 0.012, respectively) and adaptation to +3.00 D increased the change in accommodation response magnitude (p = 0.014) for a 2.00 D step change in demand. Blur adaptation also significantly increased the peak-to-peak phase lag for accommodation responses to a sinusoidally oscillating target, although failed to influence the accommodation gain. These changes in accommodative response were equivalent across all refractive groups. Adaptation to a degraded stimulus causes an increased level of accommodation for dynamic targets moving towards an observer and increases response times and phase lags. It is suggested that the contrast constancy theory may explain these changes in dynamic behavior.

  13. Accommodation and vergence response gains to different near cues characterize specific esotropias

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    Aim To describe preliminary findings of how the profile of the use of blur, disparity and proximal cues varies between non-strabismic groups and those with different types of esotropia. Design Case control study Methodology A remote haploscopic photorefractor measured simultaneous convergence and accommodation to a range of targets containing all combinations of binocular disparity, blur and proximal (looming) cues. 13 constant esotropes, 16 fully accommodative esotropes, and 8 convergence excess esotropes were compared with age and refractive error matched controls, and 27 young adult emmetropic controls. All wore full refractive correction if not emmetropic. Response AC/A and CA/C ratios were also assessed. Results Cue use differed between the groups. Even esotropes with constant suppression and no binocular vision (BV) responded to disparity in cues. The constant esotropes with weak BV showed trends for more stable responses and better vergence and accommodation than those without any BV. The accommodative esotropes made less use of disparity cues to drive accommodation (p=0.04) and more use of blur to drive vergence (p=0.008) than controls. All esotropic groups failed to show the strong bias for better responses to disparity cues found in the controls, with convergence excess esotropes favoring blur cues. AC/A and CA/C ratios existed in an inverse relationship in the different groups. Accommodative lag of >1.0D at 33cm was common (46%) in the pooled esotropia groups compared with 11% in typical children (p=0.05). Conclusion Esotropic children use near cues differently from matched non-esotropic children in ways characteristic to their deviations. Relatively higher weighting for blur cues was found in accommodative esotropia compared to matched controls. PMID:23978142

  14. Transfer between local and global processing levels by pigeons (Columba livia) and humans (Homo sapiens) in exemplar- and rule-based categorization tasks.

    PubMed

    Aust, Ulrike; Braunöder, Elisabeth

    2015-02-01

    The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  15. Quality of image of grating target placed in model of human eye with corneal aberrations as observed through multifocal intraocular lenses.

    PubMed

    Inoue, Makoto; Noda, Toru; Mihashi, Toshifumi; Ohnuma, Kazuhiko; Bissen-Miyajima, Hiroko; Hirakata, Akito

    2011-04-01

    To evaluate the quality of the image of a grating target placed in a model eye viewed through multifocal intraocular lenses (IOLs). Laboratory investigation. Refractive (NXG1 or PY60MV) or diffractive (ZM900 or SA60D3) multifocal IOLs were placed in a fluid-filled model eye with human corneal aberrations. A United States Air Force resolution target was placed on the posterior surface of the model eye. A flat contact lens or a wide-field contact lens was placed on the cornea. The contrasts of the gratings were evaluated under endoillumination and compared to those obtained through a monofocal IOL. The grating images were clear when viewed through the flat contact lens and through the central far-vision zone of the NXG1 and PY60MV, although those through the near-vision zone were blurred and doubled. The images observed through the central area of the ZM900 with the flat contact lens were slightly defocused, but the images in the periphery were very blurred. The contrast decreased significantly at low frequencies (P<.001). The images observed through the central diffractive zone of the SA60D3 were slightly blurred, although the images in the periphery were clearer than those of the ZM900. The images were less blurred with all of the refractive and diffractive IOLs when the wide-field contact lens was used. Refractive and diffractive multifocal IOLs blur the grating target, although less so with the wide-angle viewing system. The peripheral multifocal optical zone may have a greater influence on image quality with the contact lens system. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: Application to fast environmental transmission electron tomography.

    PubMed

    Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita

    2018-06-01

    Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly for applications on nanocatalysts. In order to accomplish such rapid acquisitions of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm on blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e. gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
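
The SIRT component of the algorithm is a simultaneous algebraic update of all voxels from all projections at once. A bare pure-Python sketch of one common SIRT formulation (row- and column-sum weighting), without the FISTA acceleration or TV regularisation described in the paper:

```python
def sirt(A, b, iters=200):
    """Minimal SIRT for A x = b, with A a dense list-of-rows system matrix.
    Uses row sums and column sums of |A| as relaxation weights."""
    m, n = len(A), len(A[0])
    row_sum = [max(sum(abs(v) for v in row), 1e-12) for row in A]
    col_sum = [max(sum(abs(A[i][j]) for i in range(m)), 1e-12) for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # backproject the row-normalised residual, scaled by column sums
        for j in range(n):
            x[j] += sum(A[i][j] * r[i] / row_sum[i] for i in range(m)) / col_sum[j]
    return x
```

In practice A is a sparse projection matrix and the update is implemented with projector/backprojector operators rather than an explicit matrix.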

  17. Postural stability in the elderly during sensory perturbations and dual tasking: the influence of refractive blur.

    PubMed

    Anand, Vijay; Buckley, John G; Scally, Andy; Elliott, David B

    2003-07-01

    To determine the influence of refractive blur on postural stability during somatosensory and vestibular system perturbation and dual tasking. Fifteen healthy, elderly subjects (mean age, 71 +/- 5 years), who had no history of falls and had normal vision, were recruited. Postural stability during standing was assessed using a force platform, and was determined as the root mean square (RMS) of the center of pressure (COP) signal in the anterior-posterior (A-P) and medial-lateral directions collected over a 30-second period. Data were collected under normal standing conditions and with somatosensory and vestibular system perturbations. Measurements were repeated with an additional physical and/or cognitive task. Postural stability was measured under conditions of binocular refractive blur of 0, 1, 2, 4, and 8 D and with eyes closed. The data were analyzed with a population-averaged linear model. The greatest increases in postural instability were due to disruptions of the somatosensory and vestibular systems. Increasing refractive blur caused increasing postural instability, and its effect was greater when the input from the other sensory systems was disrupted. Performing an additional cognitive and physical task increased A-P RMS COP further. All these detrimental effects on postural stability were cumulative. The findings highlight the multifactorial nature of postural stability and indicate why the elderly, many of whom have poor vision and musculoskeletal and central nervous system degeneration, are at greater risk of falling. The findings also highlight that standing instability in both normal and perturbed conditions was significantly increased with refractive blur. Correcting visual impairment caused by uncorrected refractive error could be a useful intervention strategy to help prevent falls and fall-related injuries in the elderly.

  18. The interactive processes of accommodation and vergence.

    PubMed

    Semmlow, J L; Bérard, P V; Vercher, J L; Putteman, A; Gauthier, G M

    1994-01-01

    A near target generates two different, though related stimuli: image disparity and image blur. Fixation of that near target evokes three motor responses: the so-called oculomotor "near triad". It has long been known that both disparity and blur stimuli are each capable of independently generating all three responses, and a recent theory of near triad control (the Dual Interactive Theory) describes how these stimulus components normally work together in the aid of near vision. However, this theory also indicates that when the system becomes unbalanced, as in high AC/A ratios of some accommodative esotropes, the two components will become antagonistic. In this situation, the interaction between the blur and disparity driven components exaggerates the imbalance created in the vergence motor output. Conversely, there is enhanced restoration when the AC/A ratio is effectively reduced surgically.

  19. Retinal image restoration by means of blind deconvolution

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  20. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method that combines motion estimation and image deconvolution for both area-array and TDI remote sensing is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct, while for the TDI CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered not only for visual observation, but also with significant objective evaluation increment.
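
The Richardson-Lucy step can be illustrated with a 1-D circular toy problem (an assumed discretisation for illustration; the paper applies RL to 2-D images with a motion-estimated PSF):

```python
def convolve(x, psf):
    """Circular 1-D convolution; psf is indexed on the same grid as x."""
    n = len(x)
    return [sum(psf[k] * x[(i - k) % n] for k in range(len(psf))) for i in range(n)]

def correlate(x, psf):
    """Adjoint of the circular convolution above."""
    n = len(x)
    return [sum(psf[k] * x[(j + k) % n] for k in range(len(psf))) for j in range(n)]

def richardson_lucy(blurred, psf, iters=200):
    """Richardson-Lucy deconvolution; psf must be normalised to sum to 1."""
    n = len(blurred)
    x = [sum(blurred) / n] * n  # flat, positive initial estimate
    for _ in range(iters):
        est = convolve(x, psf)
        ratio = [b / max(e, 1e-12) for b, e in zip(blurred, est)]
        x = [xi * c for xi, c in zip(x, correlate(ratio, psf))]
    return x
```

The multiplicative update preserves total flux and non-negativity, which is why RL is a standard choice for photon-limited deblurring.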

  1. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list-mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11; and (3) variance estimates based on multiple bootstrap realisations of (1) and (2), assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre-of-mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
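
The bootstrap idea, resampling the recorded events with replacement and histogramming each realisation, can be sketched as follows (a toy 1-D "sinogram" with hypothetical names; the software itself runs this on the GPU):

```python
import random

def bootstrap_sinograms(events, n_bins, n_realisations, seed=0):
    """Resample a PET event list with replacement and histogram each
    realisation into a 1-D sinogram of bin counts (toy illustration)."""
    rng = random.Random(seed)
    sinos = []
    for _ in range(n_realisations):
        counts = [0] * n_bins
        for _ in range(len(events)):
            counts[events[rng.randrange(len(events))]] += 1
        sinos.append(counts)
    return sinos

def binwise_variance(sinos):
    """Per-bin sample variance across bootstrap realisations."""
    n = len(sinos)
    out = []
    for j in range(len(sinos[0])):
        vals = [s[j] for s in sinos]
        m = sum(vals) / n
        out.append(sum((v - m) ** 2 for v in vals) / (n - 1))
    return out
```

Each realisation preserves the total event count, and the bin-wise spread across realisations estimates the uncertainty of any downstream statistic.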

  2. Virtual Colonoscopy Screening With Ultra Low-Dose CT and Less-Stressful Bowel Preparation: A Computer Simulation Study

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong

    2008-10-01

    Computed tomography colonography (CTC) or CT-based virtual colonoscopy (VC) is an emerging tool for detection of colonic polyps. Compared to conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition. The radiation is a major concern for screening application of CTC. In this work, we performed a simulation study to demonstrate a possible ultra-low-dose CT technique for VC. The ultra-low-dose abdominal CT images were simulated by adding noise to the sinograms of patient CTC images acquired with normal-dose scans at 100 mA s levels. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm for the ultra-low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally-tagged residual stool and fluid. With the KL-PWLS noise reduction, the colon lumen could successfully be constructed and colonic polyps could be detected at an ultra-low-dose level below 50 mA s. Polyps were detected more easily with the KL-PWLS noise reduction than with conventional noise filters such as the Hanning filter. These promising results indicate the feasibility of an ultra-low-dose CTC pipeline for colon screening with less-stressful bowel preparation by fecal tagging with oral contrast.
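
The noise-injection step, turning a normal-dose sinogram into a simulated low-dose one, is commonly modelled by drawing Poisson counts around the expected transmitted intensity. A stdlib-only sketch under that assumed model (not necessarily the authors' exact noise model):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest means used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def add_low_dose_noise(sinogram, incident_counts, seed=0):
    """For each line integral p, sample transmitted counts ~ Poisson(I0*exp(-p))
    and log-transform back to a noisy line integral."""
    rng = random.Random(seed)
    noisy = []
    for p in sinogram:
        counts = max(poisson_sample(incident_counts * math.exp(-p), rng), 1)
        noisy.append(math.log(incident_counts / counts))
    return noisy
```

Lowering `incident_counts` (proportional to the mA s setting) yields noisier line integrals, which is exactly the low-dose regime the KL-PWLS restoration is designed for.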

  3. Image quality of CT angiography in young children with congenital heart disease: a comparison between the sinogram-affirmed iterative reconstruction (SAFIRE) and advanced modelled iterative reconstruction (ADMIRE) algorithms.

    PubMed

    Nam, S B; Jeong, D W; Choo, K S; Nam, K J; Hwang, J-Y; Lee, J W; Kim, J Y; Lim, S J

    2017-12-01

    To compare the image quality of computed tomography angiography (CTA) reconstructed by sinogram-affirmed iterative reconstruction (SAFIRE) with that of advanced modelled iterative reconstruction (ADMIRE) in children with congenital heart disease (CHD). Thirty-one children (8.23±13.92 months) with CHD who underwent CTA were enrolled. Images were reconstructed using SAFIRE (strength 5) and ADMIRE (strength 5). Objective image qualities (attenuation, noise) were measured in the great vessels and heart chambers. Two radiologists independently calculated the contrast-to-noise ratio (CNR) by measuring the intensity and noise of the myocardial walls. Subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery were also graded by the two radiologists independently. The objective image noise of ADMIRE was significantly lower than that of SAFIRE in the right atrium, right ventricle, and myocardial wall (p<0.05); however, there were no significant differences observed in the attenuations among the four chambers and great vessels, except in the pulmonary arteries (p>0.05). The mean CNR values were 21.56±10.80 for ADMIRE and 18.21±6.98 for SAFIRE, which were significantly different (p<0.05). In addition, the diagnostic confidence of ADMIRE was significantly lower than that of SAFIRE (p<0.05), while the subjective image noise and sharpness of ADMIRE were not significantly different (p>0.05). CTA using ADMIRE was superior to SAFIRE when comparing the objective and subjective image quality in children with CHD. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
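
The contrast-to-noise ratio computed by the readers is a simple ROI statistic; a minimal sketch of the standard formula (the exact ROI placement is the radiologists' choice, not encoded here):

```python
import statistics

def cnr(roi_values, background_values):
    """CNR = |mean(ROI) - mean(background)| / SD(background)."""
    return (abs(statistics.fmean(roi_values) - statistics.fmean(background_values))
            / statistics.pstdev(background_values))
```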

  4. Value of 100 kVp scan with sinogram-affirmed iterative reconstruction algorithm on a single-source CT system during whole-body CT for radiation and contrast medium dose reduction: an intra-individual feasibility study.

    PubMed

    Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y

    2018-02-01

    To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source computed tomography (CT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with FBP (100 kVp-F) and SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load, and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21% lower, respectively, than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p<0.01). Contrast enhancement, objective image noise, contrast-to-noise ratio, and visual score of 100 kVp-S were similar to or better than those of the 120 kVp protocol. Compared with the 120 kVp protocol, the combined use of 100 kVp and SAFIRE in WBCT for oncology assessment with a single-source CT system facilitated substantial reduction in the CM and radiation dose while maintaining image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  5. Line-edge roughness performance targets for EUV lithography

    NASA Astrophysics Data System (ADS)

    Brunner, Timothy A.; Chen, Xuemei; Gabor, Allen; Higgins, Craig; Sun, Lei; Mack, Chris A.

    2017-03-01

    Our paper uses stochastic simulations to explore how EUV pattern roughness can cause device failure through rare events, so-called "black swans". We examine the impact of stochastic noise on the yield of simple wiring patterns with 36 nm pitch, corresponding to 7 nm node logic, using a local Critical Dimension (CD)-based fail criterion. Contact hole failures are examined in a similar way. For our nominal EUV process, local CD uniformity variation and local Pattern Placement Error variation were observed, but no pattern failures were seen in the modest (few thousand) number of features simulated. We degraded the image quality by incorporating Moving Standard Deviation (MSD) blurring to degrade the Image Log-Slope (ILS), and were able to find conditions where pattern failures were observed. We determined the Line Width Roughness (LWR) value as a function of the ILS. By use of an artificial "step function" image degraded by various MSD blur, we were able to extend the LWR vs ILS curve into regimes that might be available for future EUV imagery. As we decreased the image quality, we observed LWR grow and also began to see pattern failures. For high image quality, we saw CD distributions that were symmetrical and close to Gaussian in shape. Lower image quality caused CD distributions that were asymmetric, with "fat tails" on the low-CD side (under-exposed) which were associated with pattern failures. Similar non-Gaussian CD distributions were associated with image conditions that caused missing contact holes, i.e. CD = 0.
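
MSD blurring and its effect on the image log-slope can be illustrated in 1-D: convolving an aerial-image profile with a Gaussian whose sigma is the MSD lowers the ILS at the feature edge. A toy sketch under that assumed blur model (not the authors' simulator):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalised discrete Gaussian kernel of half-width `radius`."""
    ks = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def msd_blur(profile, sigma):
    """Blur a 1-D aerial-image profile with a Gaussian of width sigma (the MSD).
    Edges are handled by replicating the boundary samples."""
    radius = max(1, int(3 * sigma))
    kern = gaussian_kernel(sigma, radius)
    n = len(profile)
    return [sum(kern[k + radius] * profile[min(max(i + k, 0), n - 1)]
                for k in range(-radius, radius + 1)) for i in range(n)]

def image_log_slope(profile, i):
    """Discrete ILS at sample i: d(ln I)/dx via central difference."""
    return (math.log(profile[i + 1]) - math.log(profile[i - 1])) / 2.0
```

Increasing the MSD flattens the edge of a step-like profile, so the ILS magnitude drops, which is the degradation mechanism the paper links to rising LWR and eventual pattern failure.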

  6. Object recognition with severe spatial deficits in Williams syndrome: sparing and breakdown.

    PubMed

    Landau, Barbara; Hoffman, James E; Kurz, Nicole

    2006-07-01

    Williams syndrome (WS) is a rare genetic disorder that results in severe visual-spatial cognitive deficits coupled with relative sparing in language, face recognition, and certain aspects of motion processing. Here, we look for evidence of sparing or impairment in another cognitive system: object recognition. Children with WS, normal mental-age-matched (MA) and chronological-age-matched (CA) children, and normal adults viewed pictures of a large range of objects briefly presented under various conditions of degradation, including canonical and unusual orientations and clear or blurred contours. Objects were shown as either full-color views (Experiment 1) or line drawings (Experiment 2). Across both experiments, WS and MA children performed similarly in all conditions, while CA children performed better than both the WS and MA groups with unusual views. This advantage, however, was eliminated when images were also blurred. The error types and relative difficulty of different objects were similar across all participant groups. The results indicate selective sparing of basic mechanisms of object recognition in WS, together with developmental delay or arrest in recognition of objects from unusual viewpoints. These findings are consistent with the growing literature on brain abnormalities in WS, which points to selective impairment in the parietal areas of the brain. As a whole, the results lend further support to the growing literature on the functional separability of object recognition mechanisms from other spatial functions, and raise intriguing questions about the link between genetic deficits and cognition.

  7. "You Never Know, Things Might Have Once Existed": Young Readers Engaging with Postmodern Texts That Blur the Boundaries between Fact and Fiction

    ERIC Educational Resources Information Center

    Williams, Sandra; Willis, Rachel

    2017-01-01

    This article considers children's engagement with the "Ologies", a series of postmodern texts that blur the boundaries between fact and fiction. It follows on from a text-based analysis of the series published in this journal (22(3) 2015). Data collected from 9-12 year olds demonstrate how actual readers took up the invitation offered by…

  8. Camera Geolocation From Mountain Images

    DTIC Science & Technology

    2015-09-17

    be reliably extracted from query images. However, in real-life scenarios the skyline in a query image may be blurred or invisible, due to occlusions...extracted from multiple mountain ridges is critical to reliably geolocating challenging real-world query images with blurred or invisible mountain skylines...Buddemeier, A. Bissacco, F. Brucher, T. Chua, H. Neven, and J. Yagnik, “Tour the world: building a web-scale landmark recognition engine,” in Proc. of

  9. Work Requirements in Transformation, Competence for the Future: A Critical Look at the Consequences of Current Positions. IAB Labour Market Research Topics No. 45.

    ERIC Educational Resources Information Center

    Plath, Hans-Eberhard

    In Germany and elsewhere, the literature on current and future work requirements rarely discusses the effects of globalization, internationalization, computerization, and other factors from the point of view of workers. Some have suggested that a blurring of limits will be one of the main changes in work in the future. This blurring will involve…

  10. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types...blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS

  11. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  12. MER Surface Phase; Blurring the Line Between Fault Protection and What is Supposed to Happen

    NASA Technical Reports Server (NTRS)

    Reeves, Glenn E.

    2008-01-01

    An assessment of the limitations of communication with MER rovers and how such constraints drove the system design, flight software and fault protection architecture, blurring the line between traditional fault protection and expected nominal behavior, and requiring the most novel autonomous and semi-autonomous elements of the vehicle software including communication, surface mobility, attitude knowledge acquisition, fault protection, and the activity arbitration service.

  13. Developmental changes in the balance of disparity, blur and looming/proximity cues to drive ocular alignment and focus

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    Accurate co-ordination of accommodation and convergence is necessary to view near objects and develop fine motor co-ordination. We used a remote haploscopic videorefraction paradigm to measure longitudinal changes in simultaneous ocular accommodation and vergence to targets at different depths, and to all combinations of blur, binocular disparity, and change-in-size (“proximity”) cues. Infants were followed longitudinally and compared to older children and young adults, with the prediction that sensitivity to different cues would change during development. Mean infant responses to the most naturalistic condition were similar to those of adults from 6-7 weeks (accommodation) and 8-9 weeks (vergence). Proximity cues influenced responses most in infants less than 14 weeks of age, but sensitivity declined thereafter. Between 12-28 weeks of age infants were equally responsive to all three cues, while in older children and adults manipulation of disparity resulted in the greatest changes in response. Despite rapid development of visual acuity (thus increasing availability of blur cues), responses to blur were stable throughout development. Our results suggest that during much of infancy, vergence and accommodation responses are not dependent on the development of specific depth cues, but make use of any cues available to drive appropriate changes in response. PMID:24344547

  14. A novel experimental method for measuring vergence and accommodation responses to the main near visual cues in typical and atypical groups.

    PubMed

    Horwood, Anna M; Riddell, Patricia M

    2009-01-01

    Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.

  15. A Novel Experimental Method for Measuring Vergence and Accommodation Responses to the Main Near Visual Cues in Typical and Atypical Groups

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    Binocular disparity, blur and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3m and 2m. By separating the three main near cues we can explore their relative weighting in three, two, one and zero cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable inter-participant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development and emmetropisation. PMID:19301186

  16. Digital Image Quality And Interpretability: Database And Hardcopy Studies

    NASA Astrophysics Data System (ADS)

    Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.

    1982-02-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  17. Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance

    NASA Astrophysics Data System (ADS)

    Burke, James J.; Snyder, Harry L.

    1981-12-01

    Two hundred-fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEI's) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  18. Restoration of Motion-Blurred Image Based on Border Deformation Detection: A Traffic Sign Restoration Model

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350
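    The border-width idea in this abstract can be sketched numerically: under linear motion blur a sign border is smeared most along the motion direction, so measuring the border's width through a point in several directions suggests both the blur direction and its scale. This is only an illustrative sketch; the paper's exact measurement procedure may differ.

```python
import numpy as np

def border_widths(border, y, x, angles_deg=range(0, 180, 15)):
    """Measure the width of a binary border region through point (y, x)
    along several directions by walking outward in half-pixel steps.
    The direction of maximum width suggests the motion-blur direction."""
    h, w = border.shape
    widths = {}
    for a in angles_deg:
        dy, dx = np.sin(np.radians(a)), np.cos(np.radians(a))
        n = 0.0
        for s in (+1, -1):                       # walk both ways from (y, x)
            t = 0.0
            while True:
                t += 0.5
                yy = int(round(y + s * t * dy))
                xx = int(round(x + s * t * dx))
                if not (0 <= yy < h and 0 <= xx < w) or not border[yy, xx]:
                    break
            n += t
        widths[a] = n
    return widths

# Toy example: a bar smeared along the x-axis (blur direction = 0 degrees)
mask = np.zeros((21, 21), dtype=bool)
mask[9:12, 3:18] = True
w = border_widths(mask, 10, 10)
print(max(w, key=w.get))   # direction of maximum width, in degrees
```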

  19. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.

  20. Blurring of emotional and non-emotional memories by taxing working memory during recall.

    PubMed

    van den Hout, Marcel A; Eidhof, Marloes B; Verboom, Jesse; Littel, Marianne; Engelhard, Iris M

    2014-01-01

    Memories that are recalled while working memory (WM) is taxed, e.g., by making eye movements (EM), become blurred during the recall + EM and later recall, without EM. This may help to explain the effects of Eye Movement Desensitisation and Reprocessing (EMDR) in the treatment of post-traumatic stress disorder (PTSD) in which patients make EM during trauma recall. Earlier experimental studies on recall + EM have focused on emotional memories. WM theory suggests that recall + EM is superior to recall only but is silent about effects of memory emotionality. Based on the emotion and memory literature, we examined whether recall + EM has superior effects in blurring emotional memories relative to neutral memories. Healthy volunteers recalled negative or neutral memories, matched for vividness, while visually tracking a dot that moved horizontally ("recall + EM") or remained stationary ("recall only"). Compared to a pre-test, a post-test (without concentrating on the dot) replicated earlier findings: negative memories are rated as less vivid after "recall + EM" but not after "recall only". This was not found for neutral memories. Emotional memories are more taxing than neutral memories, which may explain the findings. Alternatively, transient arousal induced by recall of aversive memories may promote reconsolidation of the blurred memory image that is provoked by EM.

  1. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
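    The masked circulant model described above (A = MC: periodic convolution C followed by a masking operator M that discards the boundary samples where wraparound occurs) has a compact numerical form. A minimal sketch, assuming an FFT-based periodic convolution and a boolean interior mask; the helper names are ours, not the paper's.

```python
import numpy as np

def masked_circulant_blur(x, psf_fft, mask):
    """Forward model A = M C: circulant (periodic) convolution applied
    via FFT, then a mask keeping only interior samples unaffected by
    the implied periodicity."""
    y_full = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
    return y_full[mask]          # boolean indexing flattens to a vector

rng = np.random.default_rng(0)
x = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:5, :5] = 1 / 25.0                                    # 5x5 box blur
psf_fft = np.fft.fft2(np.roll(psf, (-2, -2), axis=(0, 1)))  # center kernel
mask = np.zeros((64, 64), dtype=bool)
mask[8:-8, 8:-8] = True                                   # 48x48 interior
y = masked_circulant_blur(x, psf_fft, mask)
print(y.shape)   # (2304,)
```

Because C is diagonalized by the FFT, the linear systems arising in the paper's AL/variable-splitting scheme can be solved non-iteratively, which is the efficiency the abstract refers to.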

  2. Framework for Processing Videos in the Presence of Spatially Varying Motion Blur

    DTIC Science & Technology

    2014-04-18

    international journals. Expected impact The related problems of image restoration, registration, dehazing, and superresolution, all in the presence of blurring...real-time, it can be very valuable for applications involving aerial surveillance. Our work on superresolution will be especially valuable while...unified approach to superresolution and multichannel blind deconvolution,” Trans. Img. Proc., vol. 16, no. 9, pp. 2322–2332, Sept. 2007.

  3. Laser Illuminated Imaging: Multiframe Beam Tilt Tracking and Deconvolution Algorithm

    DTIC Science & Technology

    2013-03-01

    same way with atmospheric turbulence resulting in tilt, blur and other higher order distortions on the returned image. Using the Fourier shift...of the target image with distortions such as speckle, blurring and defocus mitigated via a multiframe processing strategy. Atmospheric turbulence...propagating a beam in a turbulent atmosphere with a beam width at the target smaller than the field of view (FOV) of the receiver optics.

  4. Post-Processing of Low Dose Mammography Images

    DTIC Science & Technology

    2002-05-01

    method of restoring images in the presence of blur as well as noise” (12:276). The deblurring and denoising characteristics make Wiener filtering...independent noise. The signal-dependent scatter noise can be modeled as blur in the mammography image. A Wiener filter with deblurring characteristics can...centered on. This method is used to eradicate noise impulses with high pixel values (2:7). For the research at hand, the median filter would

  5. Are nurses blurring their identity by extending or delegating roles?

    PubMed

    Harmer, Victoria

    Nursing may be going through an identity crisis. The Department of Health commissioned research identifying where nurses stand within society (Maben and Griffiths, 2008), 'with the stimulus for the report being the sense that nursing had lost its way' (Maben and Griffiths, 2008). The professional identity of nursing appears to be unclear and an area where confusion and conflicting opinions are invisible. This, combined with the extension of roles that many nurses have accepted in recent years, may have allowed a blurring of boundaries between healthcare professions, which has resulted in a blurring of the professional identity of the nurse. Perhaps, while nursing was busily extending, expanding or delegating more traditional nursing duties, it lost its way. To this end, this article concentrates on identifying what professional identity means, then investigates changing roles and role extension nurses are undertaking, referring to relevant literature.

  6. Deblurring in digital tomosynthesis by iterative self-layer subtraction

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung

    2010-04-01

    Recent developments in large-area flat-panel detectors have renewed interest in tomosynthesis for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers from a lack of sharpness in the reconstructed images because of the blur artifact, i.e. the superposition of out-of-plane objects. In this study, we have devised an intuitively simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward and backward projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of the self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. Comparative analysis with the conventional methods, such as the SAA and filtered backprojection methods, is addressed.
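    The iterative forward/backward projection idea can be sketched in a toy two-layer, one-dimensional geometry, where each shift-and-add slice equals the in-focus layer plus an averaged (blurred) copy of the other layer, and the loop repeatedly estimates and subtracts that cross-layer blur. Periodic shifts via np.roll stand in for projection; the geometry and names are illustrative, not the authors' implementation.

```python
import numpy as np

def shift(a, s):
    return np.roll(a, s)           # periodic shift (toy projection model)

def saa(projs, shifts, d):
    """Shift-and-add reconstruction of one plane (per-view shift d)."""
    return np.mean([shift(p, -s * d) for p, s in zip(projs, shifts)], axis=0)

def cross_blur(layer, shifts, d_src, d_dst):
    """Out-of-plane blur that plane d_src leaves on plane d_dst's SAA slice."""
    return np.mean([shift(layer, s * (d_src - d_dst)) for s in shifts], axis=0)

# Two-layer 1-D phantom and parallel projections at integer view shifts
N, shifts, d0, d1 = 64, range(-3, 4), 0, 2
L0 = np.zeros(N); L0[20] = 1.0
L1 = np.zeros(N); L1[40] = 1.0
projs = [shift(L0, s * d0) + shift(L1, s * d1) for s in shifts]

r0, r1 = saa(projs, shifts, d0), saa(projs, shifts, d1)  # blurred SAA slices
e0, e1 = r0.copy(), r1.copy()
for _ in range(30):                 # iterative self-layer subtraction
    e0_new = r0 - cross_blur(e1, shifts, d1, d0)
    e1_new = r1 - cross_blur(e0, shifts, d0, d1)
    e0, e1 = e0_new, e1_new

# The subtracted slices recover the layers far better than plain SAA
print(np.abs(e0 - L0).max(), np.abs(r0 - L0).max())
```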

  7. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  8. Large Area Microcorrals and Cavity Formation on Cantilevers using a Focused Ion Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saraf, Laxmikant V.; Britt, David W.

    2011-09-14

    We utilize focused ion beam (FIB) to explore various sputtering parameters to form large area microcorrals and cavities on cantilevers. Microcorrals were rapidly created by modifying ion beam blur and overlaps. Modification in FIB sputtering parameters affects the periodicity and shape of corral microstructure. Cantilever deflections show ion beam amorphization effects as a function of sputtered area and cantilever base cavities with or without side walls. The FIB sputtering parameters address a method for rapid creation of a cantilever tensiometer with integrated fluid storage and delivery.

  9. Optical restoration of images blurred by atmospheric turbulence using optimum filter theory.

    PubMed

    Horner, J L

    1970-01-01

    The results of optimum filtering from communications theory have been applied to an image restoration problem. Photographic film imagery, degraded by long-term artificial atmospheric turbulence, has been restored by spatial filters placed in the Fourier transform plane. The time-averaged point spread function was measured and used in designing the filters. Both the simple inverse filter and the optimum least-mean-square filters were used in the restoration experiments. The superiority of the latter is conclusively demonstrated. An optical analog processor was used for the restoration.
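    The optimum least-mean-square (Wiener) filter this abstract applies optically has a compact digital analogue. A sketch, assuming a known (measured) point spread function and a constant noise-to-signal power ratio; the function name and parameters are ours.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Restore an image blurred by a known PSF with the Wiener filter
    W = H* / (|H|^2 + NSR), applied in the Fourier transform plane.
    nsr is an assumed constant noise-to-signal power ratio; nsr -> 0
    recovers the simple inverse filter."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy demonstration: blur a point source, then restore it
img = np.zeros((32, 32)); img[16, 16] = 1.0
yy, xx = np.mgrid[-16:16, -16:16]
psf = np.exp(-(yy**2 + xx**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred, psf, nsr=1e-4)
print(restored[16, 16], blurred[16, 16])  # restoration sharpens the peak
```

Unlike the simple inverse filter 1/H, the Wiener form stays bounded where H is near zero, which is why the abstract finds it superior in the presence of noise.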

  10. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    NASA Astrophysics Data System (ADS)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.

  11. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines

    PubMed Central

    Press, William H.

    2006-01-01

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155
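    What the transform computes can be made concrete with a slow reference implementation that combines pixel values along digital lines. This is only a naive O(N³) stand-in for the fast O(N² log N) recursive algorithm, restricted to one octant of slopes; passing op=np.median gives the generalized median-along-lines variant the abstract mentions.

```python
import numpy as np

def naive_line_transform(img, op=np.sum):
    """Naive reference for a DRT-style transform: for each integer slope
    s (0..N-1, lines up to 45 degrees; the full transform covers other
    octants via flips/transposes) and intercept h, combine pixel values
    along the digital line col = h + floor(s * row / (N - 1))."""
    n = img.shape[0]
    out = np.zeros((n, 2 * n - 1))
    for s in range(n):
        for h in range(-(n - 1), n):
            vals = [img[r, h + (s * r) // (n - 1)]
                    for r in range(n)
                    if 0 <= h + (s * r) // (n - 1) < n]
            if vals:
                out[s, h + n - 1] = op(vals)
    return out

# A bright image diagonal concentrates into one strong transform entry
img = np.zeros((16, 16))
np.fill_diagonal(img, 1.0)
t = naive_line_transform(img)
print(np.unravel_index(t.argmax(), t.shape))   # slope 15, intercept 0
```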

  12. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines.

    PubMed

    Press, William H

    2006-12-19

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.

  13. Fresnel Lenses for Wide-Aperture Optical Receivers

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid

    2004-01-01

    Wide-aperture receivers for free-space optical communication systems would utilize Fresnel lenses instead of conventional telescope lenses, according to a proposal. Fresnel lenses weigh and cost much less than conventional lenses having equal aperture widths. Plastic Fresnel lenses are commercially available in diameters up to 5 m, large enough to satisfy requirements for aperture widths of the order of meters for collecting sufficient light in typical long-distance free-space optical communication systems. Fresnel lenses are not yet suitable for high-quality diffraction-limited imaging, especially in polychromatic light. However, optical communication systems utilize monochromatic light, and there is no requirement for high-quality imaging; instead, the basic requirement for an optical receiver is to collect the incoming monochromatic light over a wide aperture and concentrate the light onto a photodetector. Because of lens aberrations and diffraction, the light passing through any lens is focused to a blur circle rather than to a point. Calculations for some representative cases of wide-aperture non-diffraction-limited Fresnel lenses have shown that it should be possible to attain blur-circle diameters of less than 2 mm. Preferably, the blur-circle diameter should match the width of the photodetector. For most high-bandwidth communication applications, the required photodetector diameters would be about 1 mm. In a less-preferable case in which the blur circle was wider than a single photodetector, it would be possible to occupy the blur circle with an array of photodetectors. As an alternative to using a single large Fresnel lens, one could use an array of somewhat smaller lenses to synthesize the equivalent aperture area. Such a configuration might be preferable in a case in which a single Fresnel lens of the requisite large size would be impractical to manufacture, and the blur circle could not be made small enough.
For example, one could construct a square array of four 5-m-diameter Fresnel lenses to obtain the same light-collecting area as that of a single 10-m-diameter lens. In that case (see figure), the light collected by each Fresnel lens could be collimated, the collimated beams from the four Fresnel lenses could be reflected onto a common off-axis paraboloidal reflector, and the paraboloidal reflector would focus the four beams onto a single photodetector. Alternatively, the detected signal from each detector behind each lens would be digitized before summing the signals.
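    The equal-area claim is quick to verify: four 5-m-diameter lenses collect 4 x pi x (2.5 m)^2 = 25*pi square meters, exactly the area of one 10-m-diameter lens. A one-line check:

```python
import math

# Light-collecting area: four 5-m lenses vs. one 10-m lens
four_small = 4 * math.pi * (5 / 2) ** 2
one_large = math.pi * (10 / 2) ** 2
print(round(four_small, 2), round(one_large, 2))  # 78.54 78.54
```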

  14. Development of a 3-D Defocusing Liquid Crystal Particle Thermometry and Velocimetry (3DDLCPTV) System

    DTIC Science & Technology

    2007-05-01

    general, off axis imaging can cause distortion and astigmatism in the image if proper precautions are not taken. In this case, the lens selection... astigmatism into the optical system. This astigmatism takes the form of a blurring in each image directed away from the optical axis. This blurring...is non-trivial and makes particle identification nearly impossible. Images of particles from two of the off axis cameras with the astigmatism present

  15. United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2

    DTIC Science & Technology

    1983-12-01

    filters are given below: (1) Inverse filter - Based on the model given in Eq. (2) and the criterion of minimizing the norm (i.e., power) of the...and compared based on their performances in machine classification under a variety of blur and noise conditions. These filters are analyzed to...criteria based on various assumptions of the image models. In practice, filter performance varies with the type of image, the blur and the noise conditions

  16. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Longitudinal Cohort Study of Apache AH Mk 1 Pilots -(Vision and Handedness)

    DTIC Science & Technology

    2015-05-19

    reported by U.S. Army aviators using NVG for night flights (Glick and Moser, 1974). It was initially, and incorrectly, called “brown eye syndrome”... [questionnaire excerpt: frequency scale (Never / Rarely / Occasionally / Often) for eye irritation, eye pain, blurred vision, dry eye, and light sensitivity] ...Since your last contact lens review, did you experience any of the following

  17. A-law/Mu-law Dynamic Range Compression Deconvolution (Preprint)

    DTIC Science & Technology

    2008-02-04

    noise filtering via the spectrum proportionality filter, and second the signal deblurring via the inverse filter. In this process for regions when...is the joint image of motion impulse response and the noisy blurred image with signal to noise ratio 5, 6(A’) is the gray level recovered image...joint image of motion impulse response and the noisy blurred image with signal to noise ratio 5, (A’) the gray level recovered image using the A-law

  18. Malaria and Other Vector-Borne Infection Surveillance in the U.S. Department of Defense Armed Forces Health Surveillance Center-Global Program: Review of 2009 Accomplishments

    DTIC Science & Technology

    2011-03-04

    global travel, tourism and trade, and blurred lines of demarcation between zoonotic VBI reservoirs and human populations increase vector exposure. Urban...Unprecedented levels of global travel, tourism and trade, and blurred lines of demarcation between zoonotic VBI reservoirs and human populations...made in 2009 to enhance or establish hospital-based febrile illness surveillance platforms in Azerbaijan, Bolivia, Cambodia, Ecuador, Georgia

  19. A simple acquisition strategy to avoid off-resonance blurring in spiral imaging with redundant spiral-in/out k-space trajectories

    PubMed Central

    Fielden, Samuel W.; Meyer, Craig H.

    2014-01-01

    Purpose: The major hurdle to widespread adoption of spiral trajectories has been their poor off-resonance performance. Here we present a self-correcting spiral k-space trajectory that avoids much of the well-known spiral blurring during data acquisition. Theory and Methods: In comparison with a traditional spiral-out trajectory, the spiral-in/out trajectory has improved off-resonance performance. By combining two spiral-in/out acquisitions, one rotated 180° in k-space compared to the other, multi-shot spiral-in/out artifacts are eliminated. A phantom was scanned with the center frequency manually tuned 20, 40, 80, and 160 Hz off-resonance with both a spiral-out gradient echo sequence and the redundant spiral-in/out sequence. The phantom was also imaged in an oblique orientation in order to demonstrate improved concomitant gradient field performance of the sequence. Additionally, the trajectory was incorporated into a spiral turbo spin echo sequence for brain imaging. Results: Phantom studies with manually tuned off-resonance agree well with theoretical calculations, showing that moderate off-resonance is well-corrected by this acquisition scheme. Blur due to concomitant fields is reduced, and good results are obtained in vivo. Conclusion: The redundant spiral-in/out trajectory results in less image blur for a given readout length than a traditional spiral-out scan, reducing the need for complex off-resonance correction algorithms. PMID:24604539
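    The geometry of the combined acquisition can be sketched as follows. This is an illustrative construction (a unit-radius Archimedean spiral with hypothetical turn and sample counts), not the authors' actual gradient waveform design; the point is that the readout passes through the k-space center mid-acquisition and that the second shot, rotated 180° in k-space, is simply a sign flip of every sample:

```python
import cmath

def spiral_out(n_turns=8, n_samples=256):
    """Archimedean spiral from the k-space center out to the edge (unit radius)."""
    return [(t / (n_samples - 1)) * cmath.exp(2j * cmath.pi * n_turns * t / (n_samples - 1))
            for t in range(n_samples)]

def spiral_in_out(n_turns=8, n_samples=256):
    """Spiral-in half (point-reflected, time-reversed spiral-out) followed by a
    spiral-out half, so the trajectory crosses k = 0 mid-readout."""
    out = spiral_out(n_turns, n_samples)
    inward = [-k for k in reversed(out)]
    return inward + out

def rotate_180(traj):
    """A 180-degree rotation in k-space is a sign flip of every sample."""
    return [-k for k in traj]

traj = spiral_in_out()
shot2 = rotate_180(traj)  # the second, complementary acquisition
```

Interleaving `traj` and `shot2` covers k-space with opposite in/out directions, which is the mechanism the abstract credits with cancelling multi-shot spiral-in/out artifacts.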

  20. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution is a challenging task in the field of image processing. Using an image pair can yield a better restored image than deblurring from a single blurred image alone. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
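    The core RL iteration underlying such methods can be sketched in one dimension. This is a minimal illustration with a symmetric hypothetical PSF; the paper's edge mask, image pair, and gain-controlled residual steps are omitted:

```python
def conv(signal, kernel):
    """'Same'-size 1-D correlation with zero padding; for the symmetric
    PSF used below this equals convolution."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, n_iter=100):
    """Basic RL update: est <- est * adjoint(observed / forward(est))."""
    psf_t = psf[::-1]  # the adjoint uses the flipped kernel (identical here)
    est = [1.0] * len(observed)  # flat, positive initialization
    for _ in range(n_iter):
        blurred = conv(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        est = [e * c for e, c in zip(est, conv(ratio, psf_t))]
    return est

# Blur an impulse, then deconvolve: the peak sharpens back toward the original.
psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
observed = conv(truth, psf)
restored = richardson_lucy(observed, psf)
```

The multiplicative update keeps the estimate non-negative, which is one reason RL variants remain popular for photographic deblurring.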

  1. A pilot trial of tele-ophthalmology for diagnosis of chronic blurred vision.

    PubMed

    Tan, Johnson Choon Hwai; Poh, Eugenie Wei Ting; Srinivasan, Sanjay; Lim, Tock Han

    2013-02-01

    We evaluated the accuracy of tele-ophthalmology in diagnosing the major causes of chronic blurring of vision. Thirty consecutive patients attending a primary eye-care facility in Singapore (the Ang Mo Kio Polyclinic, AMKP) with the symptom of chronic blurred vision were recruited. An ophthalmic technician was trained to perform Snellen acuity; auto-refraction; intraocular pressure measurement; red-colour perimetry; video recordings of extraocular movement, cover tests and pupillary reactions; and anterior segment and fundus photography. Digital information was transmitted to a tertiary hospital in Singapore (the Tan Tock Seng Hospital) via a tele-ophthalmology system for teleconsultation with an ophthalmologist. The diagnoses were compared with face-to-face consultation by another ophthalmologist at the AMKP. A user experience questionnaire was administered at the end of the consultation. Using face-to-face consultation as the gold standard, tele-ophthalmology achieved 100% sensitivity and specificity in diagnosing media opacity (n = 29), maculopathy (n = 23) and keratopathy (n = 30) of any type; and 100% sensitivity and 92% specificity in diagnosing optic neuropathy of any type (n = 24). The majority of the patients (97%) were satisfied with the tele-ophthalmology workflow and consultation. The tele-ophthalmology system was able to detect causes of chronic blurred vision accurately. It has the potential to deliver high-accuracy diagnostic eye support to remote areas if suitably trained ophthalmic technicians are available.

  2. A simple acquisition strategy to avoid off-resonance blurring in spiral imaging with redundant spiral-in/out k-space trajectories.

    PubMed

    Fielden, Samuel W; Meyer, Craig H

    2015-02-01

    The major hurdle to widespread adoption of spiral trajectories has been their poor off-resonance performance. Here we present a self-correcting spiral k-space trajectory that avoids much of the well-known spiral blurring during data acquisition. In comparison with a traditional spiral-out trajectory, the spiral-in/out trajectory has improved off-resonance performance. By combining two spiral-in/out acquisitions, one rotated 180° in k-space compared with the other, multishot spiral-in/out artifacts are eliminated. A phantom was scanned with the center frequency manually tuned 20, 40, 80, and 160 Hz off-resonance with both a spiral-out gradient echo sequence and the redundant spiral-in/out sequence. The phantom was also imaged in an oblique orientation in order to demonstrate improved concomitant gradient field performance of the sequence. Additionally, the trajectory was incorporated into a spiral turbo spin echo sequence for brain imaging. Phantom studies with manually tuned off-resonance agree well with theoretical calculations, showing that moderate off-resonance is well-corrected by this acquisition scheme. Blur due to concomitant fields is reduced, and good results are obtained in vivo. The redundant spiral-in/out trajectory results in less image blur for a given readout length than a traditional spiral-out scan, reducing the need for complex off-resonance correction algorithms. © 2014 Wiley Periodicals, Inc.

  3. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality.

    PubMed

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K

    2017-01-01

    The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
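    The sensitivity reported above can be reproduced in miniature with a toy intensity score. The transform and the threshold below are hypothetical (not Visiopharm's algorithm); they merely illustrate why a brightness shift alone moves a threshold-based algorithm's output:

```python
def adjust(pixels, brightness=0.0, contrast=1.0):
    """Clamped linear intensity transform on 8-bit pixel values:
    p' = contrast * (p - 128) + 128 + brightness."""
    return [min(255.0, max(0.0, contrast * (p - 128.0) + 128.0 + brightness))
            for p in pixels]

def stain_score(pixels, threshold=100.0):
    """Toy membrane-staining score: fraction of pixels darker than the threshold."""
    return sum(1 for p in pixels if p < threshold) / len(pixels)

pixels = [50, 80, 120, 200]  # hypothetical membrane-region intensities
base = stain_score(pixels)   # two of four pixels count as stained
brightened = stain_score(adjust(pixels, brightness=50))  # increased illumination
```

Raising the brightness pushes formerly "stained" pixels above the threshold and lowers the score, mirroring the reported drop in HER2 scores with increased illumination.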

  4. Using compressive sensing to recover images from PET scanners with partial detector rings.

    PubMed

    Valiollahzadeh, SeyyedMajid; Clark, John W; Mawlawi, Osama

    2015-01-01

    Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors' aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram and then Poisson noise was added. The resultant sinogram was masked to create the effect of partial detector removal and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres each filled with F-18 at an activity-to-background ratio of 10:1 was used. The phantom was imaged twice on a RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0 ˚, 90 ˚, 180 ˚, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm. 
For the third test, PET images from six patient studies were investigated using the same strategy as in the phantom study. The recovered images using WTV and TV as well as the partially sampled images from all three experiments were then compared with the fully sampled images (the baseline). Comparisons were made by calculating the mean error (%bias), root mean square error (RMSE), contrast recovery (CR), and SNR of activity concentration in regions of interest drawn in the background as well as the disks, spheres, and lesions. For the simulation study, the mean error, RMSE, and CR for the WTV (TV) recovered images were 0.26% (0.48%), 2.6% (2.9%), and 97% (96%), respectively, when compared to baseline. For the partially sampled images, these results were 22.5%, 45.9%, and 64%, respectively. For the simulation study, the average SNR for the baseline was 41.7, while for the WTV (TV) recovered images it was 44.2 (44.0). The phantom study showed similar trends, with 5.4% (18.2%), 15.6% (18.8%), and 78% (60%), respectively, for the WTV (TV) images and 33%, 34.3%, and 69% for the partially sampled images. For the phantom study, the average SNR for the baseline was 14.7, while for the WTV (TV) recovered images it was 13.7 (11.9). Finally, the averages of these values for the six patient studies for the WTV-recovered, TV, and partially sampled images were 1%, 7.2%, 92%; 1.3%, 15.1%, 87%; and 27%, 25.8%, 45%, respectively. CS with WTV is capable of recovering PET images with good quantitative accuracy from partially sampled data. Such an approach could potentially reduce the cost of scanners while maintaining good image quality.
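    The data-consistency idea behind this kind of recovery can be illustrated with a deliberately simplified 1-D analogue: hold the observed samples fixed and fill the gaps by minimizing a smoothness surrogate. The neighbour-averaging sweep below stands in for (and is much cruder than) the paper's wavelet-TV objective:

```python
def tv_inpaint(samples, mask, n_iter=200):
    """Fill unobserved entries (mask[i] is False) by repeated neighbour
    averaging while holding observed entries fixed (data consistency).
    This Gauss-Seidel sweep minimizes a quadratic smoothness surrogate,
    standing in for a full wavelet-TV solver."""
    x = [s if m else 0.0 for s, m in zip(samples, mask)]
    n = len(x)
    for _ in range(n_iter):
        for i in range(n):
            if not mask[i]:
                left = x[i - 1] if i > 0 else x[i + 1]
                right = x[i + 1] if i < n - 1 else x[i - 1]
                x[i] = 0.5 * (left + right)
    return x

# Every other sample of a ramp is "missing"; the constraint fills them back in.
recovered = tv_inpaint([1.0, 0.0, 3.0, 0.0, 5.0],
                       [True, False, True, False, True])
```

The observed entries act exactly like the "partially observed data" constraint in the CS model: they are never modified, and the regularizer only determines the values in the detector gaps.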

  5. Using compressive sensing to recover images from PET scanners with partial detector rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, SeyyedMajid, E-mail: sv4@rice.edu; Clark, John W.; Mawlawi, Osama

    2015-01-15

    Purpose: Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors’ aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. Methods: A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram and then Poisson noise was added. The resultant sinogram was masked to create the effect of partial detector removal and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres each filled with F-18 at an activity-to-background ratio of 10:1 was used. The phantom was imaged twice on a RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0°, 90°, 180°, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm.
For the third test, PET images from six patient studies were investigated using the same strategy as in the phantom study. The recovered images using WTV and TV as well as the partially sampled images from all three experiments were then compared with the fully sampled images (the baseline). Comparisons were made by calculating the mean error (%bias), root mean square error (RMSE), contrast recovery (CR), and SNR of activity concentration in regions of interest drawn in the background as well as the disks, spheres, and lesions. Results: For the simulation study, the mean error, RMSE, and CR for the WTV (TV) recovered images were 0.26% (0.48%), 2.6% (2.9%), and 97% (96%), respectively, when compared to baseline. For the partially sampled images, these results were 22.5%, 45.9%, and 64%, respectively. For the simulation study, the average SNR for the baseline was 41.7, while for the WTV (TV) recovered images it was 44.2 (44.0). The phantom study showed similar trends, with 5.4% (18.2%), 15.6% (18.8%), and 78% (60%), respectively, for the WTV (TV) images and 33%, 34.3%, and 69% for the partially sampled images. For the phantom study, the average SNR for the baseline was 14.7, while for the WTV (TV) recovered images it was 13.7 (11.9). Finally, the averages of these values for the six patient studies for the WTV-recovered, TV, and partially sampled images were 1%, 7.2%, 92%; 1.3%, 15.1%, 87%; and 27%, 25.8%, 45%, respectively. Conclusions: CS with WTV is capable of recovering PET images with good quantitative accuracy from partially sampled data. Such an approach could potentially reduce the cost of scanners while maintaining good image quality.

  6. Sterile Fluid Collections in Acute Pancreatitis: Catheter Drainage Versus Simple Aspiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walser, Eric M.; Nealon, William H.; Marroquin, Santiago

    2006-02-15

    Purpose. To compare the clinical outcome of needle aspiration versus percutaneous catheter drainage of sterile fluid collections in patients with acute pancreatitis. Methods. We reviewed the clinical and imaging data of patients with acute pancreatic fluid collections from 1998 to 2003. Referral for fluid sampling was based on elevated white blood cell count and fevers. Those patients with culture-negative drainages or needle aspirations were included in the study. Fifteen patients had aspiration of 10-20 ml fluid only (group A) and 22 patients had catheter placement for chronic evacuation of fluid (group C). We excluded patients with grossly purulent collections and chronic pseudocysts. We also recorded the number of sinograms and catheter changes and duration of catheter drainage. The CT severity index, Ranson scores, and maximum diameter of abdominal fluid collections were calculated for all patients at presentation. The total length of hospital stay (LOS), length of hospital stay after the drainage or aspiration procedure (LOS-P), and conversions to percutaneous and/or surgical drainage were recorded as well as survival. Results. The CT severity index and acute Ranson scores were not different between the two groups (p = 0.15 and p = 0.6, respectively). When 3 crossover patients from group A to group C were accounted for, the duration of hospitalization did not differ significantly, with a mean LOS and LOS-P of 33.8 days and 27.9 days in group A and 41.5 days and 27.6 days in group C, respectively (p = 0.57 and 0.98, respectively). The 60-day mortality was 2 of 15 (13%) in group A and 2 of 22 (9.1%) in group C. Kaplan-Meier survival curves for the two groups were not significantly different (p = 0.3). Surgical or percutaneous conversions occurred significantly more often in group A (7/15, 47%) than surgical conversions in group C (4/22, 18%) (p = 0.03).
Patients undergoing catheter drainage required an average of 2.2 sinograms/tube changes and kept catheters in for an average of 52 days. Aspirates turned culture-positive in 13 of 22 patients (59%) who had chronic catheterization. In group A, 3 of the 7 patients converted to percutaneous or surgical drainage had infected fluid at the time of conversion (total positive culture rate in group A: 3/15, or 20%). Conclusions. There is no apparent clinical benefit for catheter drainage of sterile fluid collections arising in acute pancreatitis, as the length of hospital stay and mortality were similar between patients undergoing aspiration versus catheter drainage. However, almost half of patients treated with simple aspiration will require surgical or percutaneous drainage at some point. Disadvantages of chronic catheter drainage include a greater than 50% rate of bacterial colonization and the need for multiple sinograms and tube changes over an average duration of about 2 months.

  7. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception.
The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real time computer games.
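    Approach (2) amounts to a range remap from scene space to a display's comfort zone. The sketch below is a generic linear version with hypothetical depth ranges (the paper's mapping need not be linear); the "dynamic" variant simply recomputes the scene depth range per shot:

```python
def map_depth(z, scene_near, scene_far, disp_near, disp_far):
    """Linearly remap a scene depth z onto the display's comfortable
    perceived-depth range [disp_near, disp_far]."""
    t = (z - scene_near) / (scene_far - scene_near)
    return disp_near + t * (disp_far - disp_near)

def dynamic_map(shot_depths, disp_near, disp_far):
    """Dynamic variant: recompute the scene depth range for each shot so the
    full display depth budget is used even as the scene depth changes."""
    near, far = min(shot_depths), max(shot_depths)
    return [map_depth(z, near, far, disp_near, disp_far) for z in shot_depths]

# Hypothetical shot with objects at 2-10 m, mapped into a +/-5 cm display budget.
mapped = dynamic_map([2.0, 5.0, 10.0], disp_near=-0.05, disp_far=0.05)
```

A fixed mapping would instead use one global scene range for the whole sequence, wasting the display budget in shallow shots, which is the shortcoming the dynamic method targets.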

  8. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high-contrast details in bony areas and the brain soft tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS were very close, equal to 0.7 mm-1 at half maximum, which was increased to 1.2 mm-1 by SDIR.
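    The TV regularization at the heart of both SDIR and ASD-POCS can be illustrated in one dimension. This is a plain gradient descent on a smoothed TV-plus-fidelity objective with illustrative parameter values, not the split Bregman solver the paper uses:

```python
import math

def tv_denoise(f, mu=1.0, step=0.01, eps=0.05, n_iter=2000):
    """Gradient descent on  mu/2 * sum (u_i - f_i)^2
       + sum_i sqrt((u_{i+1} - u_i)^2 + eps^2),
    a smoothed 1-D total-variation objective (mu weights data fidelity)."""
    u = list(f)
    n = len(u)
    for _ in range(n_iter):
        g = [mu * (u[i] - f[i]) for i in range(n)]
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps * eps)
            g[i] -= w       # d/du_i of the edge term
            g[i + 1] += w   # d/du_{i+1} of the edge term
        u = [u[i] - step * g[i] for i in range(n)]
    return u

def total_variation(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

noisy = [0.1, -0.1, 0.05, 1.1, 0.9, 1.05]  # a noisy step edge
denoised = tv_denoise(noisy)
```

Small oscillations are flattened while the large step edge survives, which is the edge-preserving behaviour that motivates TV terms in CBCT reconstruction (the eps smoothing is also what produces the staircase-softening the abstract mentions).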

  9. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm-1 which was increased to 1.2 mm-1 by SDIR, at half maximum.

  10. Delineation of the riparian zone in data-scarce regions using fuzzy membership functions: An evaluation based on the case of the Naryn River in Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Betz, Florian; Lauermann, Magdalena; Cyffka, Bernd

    2018-04-01

    Riparian zones contain important ecosystems with high biodiversity and relevant ecosystem services. From a process point of view, riparian zones are characterized by the interaction of hydrological, geomorphological and ecological processes. Consequently, their boundary is dynamic and blurred, as it depends not only on the local valley morphology but also on the hydrological regime. This makes the delineation of riparian zones from digital elevation data a challenging task, as the result should represent this blurred nature of riparian zone boundaries. While the application of high-resolution topography from LIDAR and hydraulic models has become standard in many developed countries, studies and applications in remote areas still commonly rely on freely available coarse-resolution digital elevation models. In this article, we present the delineation of riparian zones from the SRTM-1 elevation model and fuzzy membership functions for the Naryn River in Kyrgyzstan, which is approximately 700 km long. We evaluate the extraction of the underlying channel network as well as the different indicator variables. The maximum user's accuracy for the delineation of riparian zones along the entire Naryn River is 82.14%, reflecting the uncertainty arising from the heterogeneity of the riverscape as well as from the quality of the underlying elevation data. Despite the uncertainty, the fuzzy membership approach is considered an appropriate method for riparian zone delineation, as it reflects the dynamic, transitional character of riparian zones and can be used as an indicator of connectivity within a riverscape.
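    Fuzzy membership delineation of this kind can be sketched as follows. The trapezoidal shape is a standard choice, but the indicator variables and threshold values here are hypothetical, not the paper's calibrated ones:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, rising on [a, b],
    1 on [b, c], falling on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def riparian_membership(height_above_river_m, dist_to_channel_m):
    """Fuzzy AND (minimum) of two illustrative DEM-derived indicators:
    low height above the river and proximity to the channel."""
    low_enough = trapezoid(height_above_river_m, -1.0, 0.0, 4.0, 10.0)
    close_enough = trapezoid(dist_to_channel_m, -1.0, 0.0, 300.0, 800.0)
    return min(low_enough, close_enough)
```

The continuous membership value (rather than a hard 0/1 classification) is exactly what lets the map express the blurred, transitional boundary the abstract describes.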

  11. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance

    PubMed Central

    McGugin, Rankin Williams; Gatenby, J. Christopher; Gore, John C.; Gauthier, Isabel

    2012-01-01

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177–1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670–674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm2 on the right and 50 mm2 on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region. PMID:23027970

  12. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance.

    PubMed

    McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel

    2012-10-16

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm(2) on the right and 50 mm(2) on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.

  13. Fast-response LCDs for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Chen, Haiwei; Peng, Fenglin; Gou, Fangwang; Wand, Michael; Wu, Shin-Tson

    2017-02-01

We demonstrate a fast-response liquid crystal display (LCD) with an ultra-low-viscosity nematic LC mixture. The measured average motion picture response time is only 6.88 ms, which is comparable to 6.66 ms for an OLED at a 120 Hz frame rate. If we slightly increase the TFT frame rate and/or reduce the backlight duty ratio, image blur can be further suppressed to an unnoticeable level. Potential applications of such an image-blur-free LCD for virtual reality, gaming monitors, and TVs are foreseeable.

  14. Advanced carbon nanotubes functionalization

    NASA Astrophysics Data System (ADS)

    Setaro, A.

    2017-10-01

Similar to graphene, carbon nanotubes are materials made of pure carbon in its sp2 form. Their extended conjugated π-network provides them with remarkable quantum optoelectronic properties. Frustratingly, it also brings drawbacks. The π-π stacking interaction makes as-produced tubes bundle together, blurring all their quantum properties. Functionalization aims at modifying and protecting the tubes while hindering π-π stacking. Several functionalization strategies have been developed to circumvent this limitation in order for nanotube applications to thrive. In this review, we summarize the different approaches established so far, emphasizing the balance between functionalization efficacy and the preservation of the tubes' properties. Much attention will be given to a functionalization strategy overcoming the covalent-noncovalent dichotomy and to the implementation of two advanced functionalization schemes: (a) conjugation with molecular switches, to yield hybrid nanosystems with chemo-physical properties that can be tuned in a controlled and reversible way; and (b) plasmonic nanosystems, whose ability to concentrate and enhance electromagnetic fields can be exploited to enhance the optical response of the tubes.

  15. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

Optical transfer functions (OTFs) along various directional spatial-frequency axes are investigated for a cubic phase mask (CPM) with circular and square apertures. Although the OTF has no zero points, for a circular aperture it takes values very close to zero at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for these close-to-zero OTF values is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.
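The directional OTF analysis above can be reproduced numerically: build the pupil function of a cubic phase mask, take its Fourier transform to get the PSF, and Fourier-transform the PSF to get the OTF. The grid size, aperture half-width, and cubic coefficient below are illustrative choices, not the paper's optimized values:

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)

alpha = 20.0                                         # cubic coefficient (illustrative)
aperture = (np.abs(X) <= 0.5) & (np.abs(Y) <= 0.5)   # square aperture within the grid
pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))

# PSF = squared magnitude of the pupil's Fourier transform;
# OTF = Fourier transform of the PSF, normalized so the DC value is 1.
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
otf = np.fft.fft2(psf)
mtf = np.abs(otf) / np.abs(otf[0, 0])
```

Sampling `mtf` along a diagonal (e.g. `mtf[k, k]`) versus along an axis (`mtf[0, k]`) mirrors the directional comparison discussed in the abstract; swapping the square mask for a circular one shows the low diagonal-frequency dip.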

  16. Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing

    NASA Technical Reports Server (NTRS)

    Schowengerdt, R.; Gray, S.; Park, S. K.

    1984-01-01

Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
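The parametric bicubic interpolator is separable, so it reduces to a 1-D cubic-convolution kernel applied along each axis. A sketch of that kernel, with alpha = -0.5 as the default per the accuracy result above (the Keys-family piecewise-cubic form is standard; the helper name is ours):

```python
import numpy as np

def cubic_kernel(s, alpha=-0.5):
    """Parametric cubic-convolution (bicubic) interpolation kernel.
    Nonzero on |s| < 2; alpha controls the shape of the outer lobe."""
    s = np.abs(np.atleast_1d(np.asarray(s, dtype=float)))
    w = np.zeros_like(s)
    near = s <= 1
    far = (s > 1) & (s < 2)
    w[near] = (alpha + 2) * s[near]**3 - (alpha + 3) * s[near]**2 + 1
    w[far] = alpha * (s[far]**3 - 5 * s[far]**2 + 8 * s[far] - 4)
    return w

# Weights of the four neighbors when resampling halfway between samples
w = cubic_kernel([1.5, 0.5, 0.5, 1.5])
```

The four weights sum to 1 (the kernel partitions unity at integer shifts), so a constant image is reproduced exactly, which is the radiometric-accuracy property at stake in the abstract.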

  17. Spatial and spectral simulation of LANDSAT images of agricultural areas

    NASA Technical Reports Server (NTRS)

    Pont, W. F., Jr. (Principal Investigator)

    1982-01-01

A LANDSAT scene simulation capability was developed to study the effects of small fields and misregistration on LANDSAT-based crop proportion estimation procedures. The simulation employs a pattern of ground polygons, each with a crop ID, planting date, and scale factor. Historical greenness/brightness crop development profiles generate the mean signal values for each polygon. Historical within-field covariances add texture to the pixels in each polygon. The planting dates and scale factors create between-field/within-crop variation. Between-field and between-crop variation is achieved by the above together with crop profile differences. The LANDSAT point spread function is used to add correlation between nearby pixels; the net effect of the point spread function is to blur the image. Mixed pixels and misregistration are also simulated.
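The PSF stage of such a simulation is a plain 2-D convolution of the scene with the sensor's point spread function. A minimal sketch, using a generic 3x3 smoothing kernel as a stand-in for the actual LANDSAT PSF:

```python
import numpy as np

def apply_psf(image, psf):
    """Blur an image by correlating it with a normalized point spread function."""
    psf = psf / psf.sum()
    H, W = image.shape
    h, w = psf.shape
    pad = np.pad(image, ((h // 2,), (w // 2,)), mode="edge")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + h, j:j + w] * psf)
    return out

# Two adjacent "fields" with a sharp boundary that the PSF mixes across pixels
psf = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
field = np.zeros((8, 8)); field[:, 4:] = 100.0
blurred = apply_psf(field, psf)
```

Pixels straddling the field boundary take intermediate values, which is exactly the mixed-pixel effect the simulation is designed to study.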

  18. Blind identification of image manipulation type using mixed statistical moments

    NASA Astrophysics Data System (ADS)

    Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu

    2015-01-01

    We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.

  19. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, built on local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and obtains desirable results in segmenting infrared images.

  20. Comparison of virtual reality based therapy with customized vestibular physical therapy for the treatment of vestibular disorders.

    PubMed

    Alahmari, Khalid A; Sparto, Patrick J; Marchetti, Gregory F; Redfern, Mark S; Furman, Joseph M; Whitney, Susan L

    2014-03-01

We examined outcomes in persons with vestibular disorders after receiving virtual reality based therapy (VRBT) or customized vestibular physical therapy (PT) as an intervention for habituation of dizziness symptoms. Twenty subjects with vestibular disorders received VRBT and 18 received PT. During the VRBT intervention, subjects walked on a treadmill within an immersive virtual grocery store environment, for six sessions approximately one week apart. The PT intervention consisted of gaze stabilization, standing balance and walking exercises individually tailored to each subject. Before, one week after, and six months after the intervention, subjects completed self-report and balance performance measures. Before and after each VRBT session, subjects also reported symptoms of nausea, headache, dizziness, and visual blurring. In both groups, significant improvements were noted on the majority of self-report and performance measures one week after the intervention. Subjects maintained improvements on self-report and performance measures at the six-month follow-up. There were no between-group differences. Nausea, headache, dizziness and visual blurring increased significantly during the VRBT sessions, but overall symptoms were reduced at the end of the six-week intervention. While this study did not find a difference in outcomes between PT and VRBT, the mechanism by which subjects with chronic dizziness demonstrated improvement in dizziness and balance function may be different.

  1. Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering

    PubMed Central

    Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung

    2014-01-01

    Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251
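The first stage described above, a bank of Gabor filters in four directions, can be sketched as follows. The kernel size, sigma, and wavelength are illustrative choices, not the parameters used in the paper:

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, wavelength=8.0):
    """Real (even-symmetric) Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = (np.exp(-(x**2 + y**2) / (2 * sigma**2))
         * np.cos(2 * np.pi * xr / wavelength))
    return g - g.mean()  # zero-mean so flat image regions give no response

# Four orientations: 0, 45, 90, 135 degrees
bank = [gabor_kernel(t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Convolving the finger-vein image with each kernel and collecting local means and standard deviations of the responses mirrors the inputs to the fuzzy rule and membership function described in the abstract.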

  2. Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups

    NASA Astrophysics Data System (ADS)

    Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.

    2018-03-01

Sulcal depth is an important marker of brain anatomy in neuroscience and neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inference of ROI properties from a sulcal-region-focused perspective, consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs the overall sulcal depth measurements, which may reduce sensitivity to sulcal depth changes in neurological and psychiatric disorders. To overcome this blurring effect, we focus on sulcal fundic regions in each ROI by filtering out gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we performed a cortical morphological analysis of sulcal depth reduction in schizophrenia with a comparison to a normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).

  3. External radioactive markers for PET data-driven respiratory gating in positron emission tomography.

    PubMed

    Büther, Florian; Ernst, Iris; Hamill, James; Eich, Hans T; Schober, Otmar; Schäfers, Michael; Schäfers, Klaus P

    2013-04-01

Respiratory gating is an established approach to overcoming respiration-induced image artefacts in PET. Of special interest in this respect are raw-PET-data-driven gating methods, which do not require additional hardware to acquire respiratory signals during the scan. However, these methods rely heavily on the quality of the acquired PET data (statistical properties, data contrast, etc.). We therefore combined external radioactive markers with data-driven respiratory gating in PET/CT. The feasibility and accuracy of this approach were studied for [(18)F]FDG PET/CT imaging in patients with malignant liver and lung lesions. PET data from 30 patients with abdominal or thoracic [(18)F]FDG-positive lesions (primary tumours or metastases) were included in this prospective study. The patients underwent a 10-min list-mode PET scan with a single bed position following a standard clinical whole-body [(18)F]FDG PET/CT scan. During this scan, one to three radioactive point sources (either (22)Na or (18)F, 50-100 kBq) in a dedicated holder were attached to the patient's abdomen. The acquired list-mode data were retrospectively analysed for respiratory signals using established data-driven gating approaches and additionally by tracking the motion of the point sources in sinogram space. Gated reconstructions were examined qualitatively, in terms of the amount of respiratory displacement, and in respect of changes in local image intensity in the gated images. The presence of the external markers did not affect whole-body PET/CT image quality. Tracking of the markers led to characteristic respiratory curves in all patients. Applying these curves for gated reconstructions resulted in images in which motion was well resolved. Quantitatively, the performance of the external marker-based approach was similar to that of the best intrinsic data-driven methods. 
Overall, the gain in measured tumour uptake from the nongated to the gated images indicating successful removal of respiratory motion was correlated with the magnitude of the respiratory displacement of the respective tumour lesion, but not with lesion size. Respiratory information can be assessed from list-mode PET/CT through PET data-derived tracking of external radioactive markers. This information can be successfully applied to respiratory gating to reduce motion-related image blurring. In contrast to other previously described PET data-driven approaches, the external marker approach is independent of tumour uptake and thereby applicable even in patients with poor uptake and small tumours.
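Once a respiratory curve has been extracted from the marker trace, a common gating step is to bin samples by amplitude. A minimal sketch with a synthetic marker trace (the breathing frequency, amplitude, sampling rate, and equal-count binning scheme are illustrative assumptions, not the paper's actual tracking algorithm):

```python
import numpy as np

# Hypothetical marker trace: axial marker position (mm) sampled at 10 Hz,
# a ~0.25 Hz breathing oscillation plus measurement noise
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
trace = 8.0 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.5, t.size)

def amplitude_gates(signal, n_gates=5):
    """Assign each sample to one of n_gates equal-count amplitude bins."""
    edges = np.quantile(signal, np.linspace(0, 1, n_gates + 1))
    idx = np.searchsorted(edges, signal, side="right") - 1
    return np.clip(idx, 0, n_gates - 1)

gates = amplitude_gates(trace)
```

Each gate then selects a subset of list-mode events for a separate, nearly motion-free reconstruction; quantile-based edges keep the gates statistically balanced.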

  4. Eye-lens accommodation load and static trapezius muscle activity.

    PubMed

    Richter, H O; Bänziger, T; Forsman, M

    2011-01-01

The purpose of this experimental study was to investigate whether sustained periods of oculomotor load impact muscle activity in the neck/scapular area. Static trapezius muscle activity was assessed from bipolar surface electromyography, normalized to a submaximal contraction. Twenty-eight subjects with a mean age of 29 (range 19-42, SD 8) viewed a high-contrast fixation target for two 5-min periods through: (1) -3.5 dioptre (D) lenses; and (2) 0 D lenses. The target was placed 5 D away from the individual's near point of accommodation. Each subject's ability to compensate for the added blur was extracted via infrared photorefraction measurements. Subjects whose accommodative response was higher in the -D blur condition (1) showed a relatively higher level of static bilateral trapezius muscle activity. During no blur (2) there were no signs of such a relationship. The results indicate that sustained eye-lens accommodation at near, under ergonomically unfavourable viewing conditions, could possibly represent a risk factor for trapezius muscle myalgia.

  5. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

A novel motion-blurred image restoration algorithm based on half-blind PSF estimation with the Hough transform is introduced, building on a full analysis of the TDICCD camera principle and addressing the problem that using vertical uniform linear motion as the initial PSF value, as in the IBD algorithm, leads to distortion in the restored image. Firstly, a mathematical model of image degradation is established from the a priori information of multi-frame images, and two parameters with a crucial influence on PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations on the initial PSF estimate in the Fourier domain, with the initial value provided by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.
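The two PSF parameters named above (blur length and angle) fully determine a linear-motion PSF. A minimal sketch that builds such a PSF and restores a blurred image with a single-pass Wiener filter; the Wiener step is a common stand-in for the paper's iterative Fourier-domain restoration, not its actual algorithm:

```python
import numpy as np

def motion_psf(length, angle_deg, size=15):
    """PSF of uniform linear motion: `length` pixels at `angle_deg` degrees."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2.0, (length - 1) / 2.0, int(length)):
        i = int(round(c + t * np.sin(theta)))
        j = int(round(c + t * np.cos(theta)))
        psf[i, j] += 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter with constant noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(F))

# Blur a point source, then restore it with the same (known) PSF
img = np.zeros((32, 32)); img[16, 16] = 1.0
psf = motion_psf(length=7, angle_deg=0.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)
```

With an accurate PSF the point source is recovered in place; an inaccurate initial PSF (the problem the paper targets) leaves residual smear, which is what the iterative refinement is meant to remove.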

  6. Comparison of morphological and conventional edge detectors in medical imaging applications

    NASA Astrophysics Data System (ADS)

    Kaabi, Lotfi; Loloyan, Mansur; Huang, H. K.

    1991-06-01

Recently, mathematical morphology has been used to develop efficient image analysis tools. This paper compares the performance of morphological and conventional edge detectors applied to radiological images. Two morphological edge detectors, the dilation residue, found by subtracting the original signal from its dilation by a small structuring element, and the blur-minimization edge detector, defined as the minimum of the erosion and dilation residues of the blurred image, are compared with the linear Laplacian and Sobel and the non-linear Roberts edge detectors. Various structuring elements were used in this study: regular 2-dimensional and 3-dimensional. We utilized two criteria for classifying edge detector performance: edge-point connectivity and sensitivity to noise. CT/MR and chest radiograph images were used as test data. Comparison results show that the blur-minimization edge detector with a rolling-ball-like structuring element outperforms the other standard linear and nonlinear edge detectors. It is less noise sensitive and produces the most closed contours.
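The two morphological detectors compared above can be sketched directly from their definitions, using a flat square structuring element for simplicity (the paper also uses rolling-ball-like elements):

```python
import numpy as np

def dilate(img, size=3):
    """Grayscale dilation by a flat square structuring element."""
    h = size // 2
    pad = np.pad(img, h, mode="edge")
    H, W = img.shape
    return np.array([[pad[i:i + size, j:j + size].max() for j in range(W)]
                     for i in range(H)])

def erode(img, size=3):
    """Grayscale erosion by a flat square structuring element."""
    h = size // 2
    pad = np.pad(img, h, mode="edge")
    H, W = img.shape
    return np.array([[pad[i:i + size, j:j + size].min() for j in range(W)]
                     for i in range(H)])

def dilation_residue(img):
    """Edge strength: dilation minus the original signal."""
    return dilate(img) - img

def blur_min_edge(img):
    """Blur-minimization detector: pointwise minimum of the dilation
    and erosion residues (applied to an already-smoothed image)."""
    return np.minimum(dilate(img) - img, img - erode(img))

# A smoothed step edge: the blur-min response peaks on the transition column
ramp = np.zeros((5, 6)); ramp[:, 2] = 5.0; ramp[:, 3:] = 10.0
edges = blur_min_edge(ramp)
```

Because the blur-min detector takes the minimum of two residues, it responds only where both are nonzero, i.e. on the ramp itself, which is what makes it robust to impulsive noise on flat regions.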

  7. Region-confined restoration method for motion-blurred star image of the star sensor under dynamic conditions.

    PubMed

    Ma, Liheng; Bernelli-Zazzera, Franco; Jiang, Guangwen; Wang, Xingshu; Huang, Zongsheng; Qin, Shiqiao

    2016-06-10

    Under dynamic conditions, the centroiding accuracy of the motion-blurred star image decreases and the number of identified stars reduces, which leads to the degradation of the attitude accuracy of the star sensor. To improve the attitude accuracy, a region-confined restoration method, which concentrates on the noise removal and signal to noise ratio (SNR) improvement of the motion-blurred star images, is proposed for the star sensor under dynamic conditions. A multi-seed-region growing technique with the kinematic recursive model for star image motion is given to find the star image regions and to remove the noise. Subsequently, a restoration strategy is employed in the extracted regions, taking the time consumption and SNR improvement into consideration simultaneously. Simulation results indicate that the region-confined restoration method is effective in removing noise and improving the centroiding accuracy. The identification rate and the average number of identified stars in the experiments verify the advantages of the region-confined restoration method.

  8. Adaptive restoration of a partially coherent blurred image using an all-optical feedback interferometer with a liquid-crystal device.

    PubMed

    Shirai, Tomohiro; Barnes, Thomas H

    2002-02-01

    A liquid-crystal adaptive optics system using all-optical feedback interferometry is applied to partially coherent imaging through a phase disturbance. A theoretical analysis based on the propagation of the cross-spectral density shows that the blurred image due to the phase disturbance can be restored, in principle, irrespective of the state of coherence of the light illuminating the object. Experimental verification of the theory has been performed for two cases when the object to be imaged is illuminated by spatially coherent light originating from a He-Ne laser and by spatially incoherent white light from a halogen lamp. We observed in both cases that images blurred by the phase disturbance were successfully restored, in agreement with the theory, immediately after the adaptive optics system was activated. The origin of the deviation of the experimental results from the theory, together with the effect of the feedback misalignment inherent in our optical arrangement, is also discussed.

  9. Numerical correction of distorted images in full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

We propose a method that numerically corrects the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than its surrounding medium. Our analysis shows that the focal plane of the imaging system separates from the imaging plane of the coherence-gated system due to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft.

  10. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    PubMed

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality

    PubMed Central

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.

    2017-01-01

    Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838

  12. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

The similarity of features between individuals of the same ethnicity motivated the idea of this project: to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin as an approach to enhancing the blurred image. A database of clear images was assembled, containing 30 individuals divided equally among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features were extracted from a clear facial image, or from a template built from several clear facial images, using the wavelet transform, and imposed on the blurred image using the inverse wavelet transform. The results of this approach were unsatisfactory because the features did not all align: in most cases the eyes were aligned but the nose or mouth were not. A second approach treated each feature separately, but in some cases a blocky effect appeared on features because no closely matching features were available. In general, the small number of available individuals limited the results. Color information and feature similarity could be investigated further, and a larger database would improve the enhancement process by providing closer matches within each ethnicity.

  13. Analytical properties of time-of-flight PET data.

    PubMed

    Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M

    2008-06-07

    We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
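The data model in this record, line integrals weighted by a spatially invariant TOF kernel, can be sketched in one dimension along a single line of response (LOR). The 90 mm kernel FWHM follows from an assumed 600 ps coincidence timing resolution via FWHM_x = c·Δt/2; the activity profile and bin spacing are illustrative:

```python
import numpy as np

# Activity along one LOR, sampled at 1 mm spacing
x = np.arange(-200.0, 200.0)                      # mm
f = np.exp(-0.5 * ((x - 30.0) / 10.0)**2)         # uptake region centered at +30 mm

# Spatially invariant Gaussian TOF kernel: 600 ps FWHM timing resolution
# corresponds to FWHM_x = c * dt / 2 ~ 90 mm along the LOR
fwhm = 90.0
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))

def tof_projection(f, x, centers, sigma):
    """TOF-weighted line integrals: one Gaussian-weighted sum per TOF bin."""
    k = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / sigma)**2)
    k /= sigma * np.sqrt(2 * np.pi)               # unit-area kernel
    return k @ f

centers = np.arange(-150.0, 151.0, 30.0)          # TOF bin centers (mm)
p = tof_projection(f, x, centers, sigma)
```

The TOF datum peaks in the bin nearest the source position, which is the extra localization along the LOR that non-TOF sinograms lack and that the rebinning analysis in the abstract exploits.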

  14. Analytical properties of time-of-flight PET data

    NASA Astrophysics Data System (ADS)

    Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.

    2008-06-01

    We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.

  15. Blind image deconvolution using the Fields of Experts prior

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-11-01

    In this paper, we present a method for single-image blind deconvolution. To mitigate the ill-posedness of the problem, we formulate it in a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), learnt from natural images, to regularize the latent image. Furthermore, owing to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method can achieve results of high quality.
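
    The alternating minimization loop has a simple skeleton: with the PSF fixed, solve for the latent image; with the image fixed, solve for the PSF. The sketch below replaces the paper's FoE and Student-t priors with plain quadratic (Tikhonov) penalties so that each subproblem has a closed-form solution in the Fourier domain. This simplification is ours: without sparse priors the iteration can drift toward the trivial delta-PSF solution, which is precisely why the paper's priors matter.

```python
import numpy as np

def am_blind_deconv(y, n_iter=30, lam_x=1e-2, lam_k=1e-2):
    """Alternating minimization for 1D blind deconvolution y = k * x
    (circular convolution). Quadratic regularizers stand in for the
    FoE image prior and Student-t PSF prior of the paper."""
    Y = np.fft.fft(y)
    K = np.ones(len(y), dtype=complex)  # initialize PSF as a delta
    for _ in range(n_iter):
        # x-step: argmin_x ||k*x - y||^2 + lam_x ||x||^2 (closed form per frequency)
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam_x)
        # k-step: argmin_k ||k*x - y||^2 + lam_k ||k||^2 (closed form per frequency)
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam_k)
    return np.real(np.fft.ifft(X)), np.real(np.fft.ifft(K))
```

    Each step exactly minimizes the penalized objective in one variable, so the data-fit term is non-increasing across iterations.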

  16. [Spasm of accommodation].

    PubMed

    Lindberg, Laura

    2014-01-01

    Spasm of accommodation refers to prolonged contraction of the ciliary muscle, most commonly causing pseudomyopia of varying degree in both eyes by holding the lens in a short-sighted state. It may also manifest as an inability to relax the spastic tone prevailing in the ciliary muscle, without measurable myopia. As a rule, this is a functional ailment triggered by prolonged near work and stress. The most common symptoms include blurring of distance vision, varying visual acuity, and pains in the orbital region and the head, which may progress into a chronic state. Cycloplegic eye drops are used as the treatment.

  17. Methods and apparatus for analysis of chromatographic migration patterns

    DOEpatents

    Stockham, Thomas G.; Ives, Jeffrey T.

    1993-01-01

    A method and apparatus for sharpening signal peaks in a signal representing the distribution of biological or chemical components of a mixture separated by a chromatographic technique such as, but not limited to, electrophoresis. A key step in the method is the use of a blind deconvolution technique, presently embodied as homomorphic filtering, to reduce the contribution of a blurring function to the signal encoding the peaks of the distribution. The invention further includes steps and apparatus directed to determination of a nucleotide sequence from a set of four such signals representing DNA sequence data derived by electrophoretic means.
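
    The core idea, reducing the contribution of a blurring function to the measured signal, can be illustrated with a simpler non-blind stand-in: if the blurring function were known (here an assumed Gaussian spreading function), frequency-domain Wiener deconvolution sharpens the peaks. The patent's homomorphic filtering estimates the blur blindly; this sketch skips that estimation step, and all names and parameters are illustrative.

```python
import numpy as np

def wiener_deconvolve(y, h, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution: X = Y conj(H) / (|H|^2 + k)."""
    H = np.fft.fft(h, n=len(y))
    Y = np.fft.fft(y)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(X))

# Synthetic electropherogram: two bands blurred by a Gaussian spreading function
n = 256
x = np.zeros(n)
x[80], x[150] = 1.0, 0.7
t = np.arange(n)
g = np.exp(-0.5 * ((t - n // 2) / 6.0) ** 2)
g /= g.sum()
h = np.fft.ifftshift(g)  # center the kernel at sample 0 so peaks do not shift
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
restored = wiener_deconvolve(y, h)
```

    The restored signal has markedly narrower peaks than the blurred input, at the positions of the original bands; the `noise_power` constant trades sharpness against noise amplification.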

  18. SU-D-17A-04: The Impact of Audiovisual Biofeedback On Image Quality During 4D Functional and Anatomic Imaging: Results of a Prospective Clinical Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keall, P; Pollock, S; Yang, J

    2014-06-01

    Purpose: The ability of audiovisual (AV) biofeedback to improve breathing regularity has not previously been investigated for functional imaging studies. The purpose of this study was to investigate the impact of AV biofeedback on 4D-PET and 4D-CT image quality in a prospective clinical trial. We hypothesized that motion blurring in 4D-PET images and the number of artifacts in 4D-CT images are reduced using AV biofeedback. Methods: AV biofeedback is a real-time, interactive and personalized system designed to help a patient self-regulate his/her breathing using a patient-specific representative waveform and musical guides. In an IRB-approved prospective clinical trial, 4D-PET and 4D-CT images of 10 lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images in 6 respiratory bins were analyzed for motion blurring by: (1) decrease of GTVPET and (2) increase of SUVmax in 4D-PET compared to 3D-PET. The 4D-CT images were analyzed for artifacts by: (1) comparing normalized cross-correlation-based scores (NCCS); and (2) quantifying a visual assessment score (VAS). A two-tailed paired t-test was used to test the hypotheses. Results: The impact of AV biofeedback on 4D-PET and 4D-CT images varied widely between patients, suggesting inconsistent patient comprehension and capability. Overall, the 4D-PET decrease of GTVPET was 2.0±3.0 cm³ with AV and 2.3±3.9 cm³ with FB (p=0.61). The 4D-PET increase of SUVmax was 1.6±1.0 with AV and 1.1±0.8 with FB (p=0.002). The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 with FB (p=0.32). The 4D-CT VAS was 0.0±2.7 (p=ns). Conclusion: A 10-patient study demonstrated a statistically significant reduction of motion blurring of AV over FB for one of the two functional 4D-PET imaging metrics. No difference between AV and FB was found for the two anatomic 4D-CT imaging metrics. Future studies will focus on optimizing the human-computer interface and including patient training sessions for improved comprehension and capability. Supported by NIH/NCI R01 CA 093626, Stanford BioX Interdisciplinary Initiatives Program, NHMRC Australia Fellowship, and Kwanjeong Educational Foundation. GE Healthcare provided the Respiratory Gating Toolbox for 4D-PET image reconstruction. Stanford University owns US patent #E7955270 which is unlicensed to any commercial entity.

  19. Magnifying Lenses with Weak Achromatic Bends for High-Energy Electron Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walstrom, Peter Lowell

    2015-02-27

    This memo briefly describes bremsstrahlung background effects in GeV-range electron radiography systems and the use of weak bending magnets to deflect the image to the side of the forward bremsstrahlung spot to reduce background. The image deflection introduces first-order chromatic image blur due to dispersion. Two approaches to eliminating the dispersion effect to first order by using magnifying lenses with achromatic bends are described. Higher-order image-blur terms caused by the weak bends are also discussed and shown to be negligibly small in most cases of interest.

  20. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.

  1. Computational Imaging in Demanding Conditions

    DTIC Science & Technology

    2015-11-18

    spatiotemporal domain where such blur is not present. Detailed Accomplishments: ● Removing Atmospheric Turbulence via Space-Invariant Deconvolution: ○ To...given image sequence distorted by atmospheric turbulence. This approach reduces the space- and time-varying deblurring problem to a shift invariant...SUBJECT TERMS: Image processing, Computational imaging, turbulence, blur, enhancement

  2. Tolerance and Exhaustion: Defining Mechanisms of T cell Dysfunction

    PubMed Central

    Schietinger, Andrea; Greenberg, Philip D.

    2013-01-01

    CD8 T cell activation and differentiation are tightly controlled and, depending on the context in which naïve T cells encounter antigen, can result in either functional memory or T cell dysfunction, including exhaustion, tolerance, anergy, or senescence. With the identification of phenotypic and functional traits shared in different settings of T cell dysfunction, distinctions between such dysfunctional 'states' have become blurred. Here, we discuss distinct states of CD8 T cell dysfunction, with emphasis on (i) T cell tolerance to self-antigens (self-tolerance), (ii) T cell exhaustion during chronic infections, and (iii) tumor-induced T cell dysfunction. We highlight recent findings on cellular and molecular characteristics defining these states, cell-intrinsic regulatory mechanisms that induce and maintain them, and strategies that can lead to their reversal. PMID:24210163

  3. Separation of presampling and postsampling modulation transfer functions in infrared sensor systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Olson, Jeffrey T.; O'Shea, Patrick D.; Hodgkin, Van A.; Jacobs, Eddie L.

    2006-05-01

    New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated. These methods are designed to allow the separation and extraction of presampling and postsampling components from the total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques. Knowledge of these components and inclusion into sensor models, such as the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization of sensor performance.

  4. Development and validation of segmentation and interpolation techniques in sinograms for metal artifact suppression in CT.

    PubMed

    Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob

    2010-02-01

    Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. 
It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
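
    The two-step scheme described above, segmentation of the metal trace directly in the sinogram followed by interpolation across it, can be sketched as follows. For brevity the sketch segments by simple thresholding, where the paper uses a Markov random field model, and applies only linear interpolation along each projection row; the paper's step of adding back the scaled metal signal is omitted.

```python
import numpy as np

def suppress_metal(sinogram, threshold):
    """Segment the metal trace by thresholding (a stand-in for the MRF
    segmentation) and replace it by linear interpolation along each row."""
    out = sinogram.astype(float).copy()
    metal = out > threshold
    cols = np.arange(out.shape[1])
    for i in range(out.shape[0]):
        m = metal[i]
        if m.any() and not m.all():
            # interpolate the gap from the surrounding non-metal samples
            out[i, m] = np.interp(cols[m], cols[~m], out[i, ~m])
    return out, metal
```

    On a sinogram whose background varies linearly across the detector, linear interpolation recovers the background under the metal trace exactly; real anatomy requires the more sophisticated interpolation schemes the paper compares.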

  5. Investigation of the feasibility of a simple method for verifying the motion of a binary multileaf collimator synchronized with the rotation of the gantry for helical tomotherapy

    PubMed Central

    Uematsu, Masahiro; Ito, Makiko; Hama, Yukihiro; Inomata, Takayuki; Fujii, Masahiro; Nishio, Teiji; Nakamura, Naoki; Nakagawa, Keiichi

    2012-01-01

    In this paper, we suggest a new method for verifying the motion of a binary multileaf collimator (MLC) in helical tomotherapy. For this we used a combination of a cylindrical scintillator and a general-purpose camcorder. The camcorder records the light from the scintillator following photon irradiation, which we use to track the motion of the binary MLC. The purpose of this study is to demonstrate the feasibility of this method as a binary MLC quality assurance (QA) tool. Verification was first performed using a simple binary MLC pattern with a constant leaf open time, and then using a binary MLC pattern from a clinical setting. For the simple patterns, the sensitivity, defined as the fraction of open leaves detected as "open" from the measured light, was 1.000, while the specificity, the fraction of closed leaves detected as "closed", was 0.919. The leaf open error identified by our method, expressed as the relative error calculated on the sinogram, was −1.3±7.5%, and 68.6% of the observed leaves were within ±3% relative error. For the clinical binary MLC pattern, the sensitivity and specificity were 0.994 and 0.997, respectively; the measurement could be performed with a leaf open error of −3.4±8.0%, and 77.5% of the observed leaves were within ±3% relative error. With this method, we can easily verify the motion of the binary MLC, and the measurement unit developed was found to be an effective QA tool. PACS numbers: 87.56.Fc, 87.56.nk PMID:22231222

  6. SU-F-T-489: 4-Years Experience of QA in TomoTherapy MVCT: What Do We Look Out For?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, F; Chan, K

    2016-06-15

    Purpose: To evaluate the QA results of TomoTherapy MVCT from March 2012 to February 2016, and to identify issues that may affect consistency in HU numbers and reconstructed treatment dose in MVCT. Methods: Monthly QA was performed on our TomoHD system. A phantom with rod inserts of various mass densities was imaged in MVCT and compared to baseline to evaluate HU number consistency. To evaluate treatment dose reconstructed from the delivered sinogram and MVCT, a treatment plan was designed on a humanoid skull phantom. The phantom was imaged with MVCT and the treatment plan was delivered to obtain the sinogram. The dose reconstructed with the Planned Adaptive software was compared to the dose in the original plan. The QA tolerance for HU numbers was ±30 HU, and ±2% for discrepancy between original plan dose and reconstructed dose. Tolerances were referenced to AAPM TG-148. Results: Several technical modifications or maintenance activities to the system were identified that affected QA results: 1) an upgrade of the console system software, which added a weekly HU calibration procedure; 2) linac or MLC replacement, leading to changes in Accelerator Output Machine (AOM) parameters; 3) an upgrade of the planning system algorithm, affecting MVCT dose reconstruction. These events caused abrupt changes in QA results, especially for the reconstructed dose. In the past 9 months, when no such modifications were made to the system, the reconstructed dose was consistent, with a maximum deviation from baseline of less than 0.6%, and the HU numbers deviated by less than 5 HU. Conclusion: Routine QA is essential for MVCT, especially if the MVCT is used for daily dose reconstruction to monitor delivered dose to patients. Technical events that may affect consistency include software changes and linac or MLC replacement. QA results reflected changes that justify re-calibration or system adjustment. In normal circumstances, the system should be relatively stable and quarterly QA may be sufficient.

  7. Radiation Dose Reduction via Sinogram Affirmed Iterative Reconstruction and Automatic Tube Voltage Modulation (CARE kV) in Abdominal CT

    PubMed Central

    Shin, Hyun Joo; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2013-01-01

    Objective To evaluate the feasibility of sinogram-affirmed iterative reconstruction (SAFIRE) and automated kV modulation (CARE kV) in reducing radiation dose without increasing image noise for abdominal CT examination. Materials and Methods This retrospective study included 77 patients who received CT imaging with an application of CARE kV with or without SAFIRE and who had comparable previous CT images obtained without CARE kV or SAFIRE, using the standard dose (i.e., reference mAs of 240) on an identical CT scanner and reconstructed with filtered back projection (FBP) within 1 year. Patients were divided into two groups: group A (33 patients, CT scanned with CARE kV); and group B (44 patients, scanned after reducing the reference mAs from 240 to 170 and applying both CARE kV and SAFIRE). CT number, image noise for four organs and radiation dose were compared among the two groups. Results Image noise increased after CARE kV application (p < 0.001) and significantly decreased as SAFIRE strength increased (p < 0.001). Image noise with reduced-mAs scan (170 mAs) in group B became similar to that of standard-dose FBP images after applying CARE kV and SAFIRE strengths of 3 or 4 when measured in the aorta, liver or muscle (p ≥ 0.108). Effective doses decreased by 19.4% and 41.3% for groups A and B, respectively (all, p < 0.001) after application of CARE kV with or without SAFIRE. Conclusion Combining CARE kV, reduction of mAs from 240 to 170 mAs and noise reduction by applying SAFIRE strength 3 or 4 reduced the radiation dose by 41.3% without increasing image noise compared with the standard-dose FBP images. PMID:24265563

  8. Deep machine learning based Image classification in hard disk drive manufacturing (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Chien, Chester

    2018-03-01

    A key sensor element in a hard disk drive (HDD) is the read-write head device. The device has a complex 3D shape, and its fabrication requires over a thousand process steps, many of which are various types of image inspection and critical dimension (CD) metrology steps. In order to achieve a high yield of devices across a wafer, very tight inspection and metrology specifications are implemented. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image impacts the CD measurements. Metrology noise needs to be minimized in CD metrology to obtain a better estimate of the process-related variations for implementing robust process controls. Specialized tools are available for defect inspection and review that allow classification and statistics; however, when such advanced tools are unavailable, or for other reasons, images often need to be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools, differing in software and purpose. There have been cases where a significant number of CD-SEM images were blurred or had some artefact, creating a need for image inspection alongside the CD measurement. The tool may not report a practical metric highlighting the quality of the image, and not filtering CDs from these blurred images adds metrology noise to the CD measurement. An image classifier can be helpful here for filtering such data. This paper presents the use of artificial intelligence in classifying SEM images. Deep machine learning is used to train a neural network, which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and a contingency table of classification results from the trained deep neural network. A prediction accuracy of 94.9% was achieved in the first model. The paper covers other such applications of deep neural networks in image classification for inspection, review and metrology.

  9. Erasing and blurring memories: The differential impact of interference on separate aspects of forgetting.

    PubMed

    Sun, Sol Z; Fidalgo, Celia; Barense, Morgan D; Lee, Andy C H; Cant, Jonathan S; Ferber, Susanne

    2017-11-01

    Interference disrupts information processing across many timescales, from immediate perception to memory over short and long durations. The widely held similarity assumption states that as similarity between interfering information and memory contents increases, so too does the degree of impairment. However, information is lost from memory in different ways. For instance, studied content might be erased in an all-or-nothing manner. Alternatively, information may be retained but the precision might be degraded or blurred. Here, we asked whether the similarity of interfering information to memory contents might differentially impact these 2 aspects of forgetting. Observers studied colored images of real-world objects, each followed by a stream of interfering objects. Across 4 experiments, we manipulated the similarity between the studied object and the interfering objects in circular color space. After interference, memory for object color was tested continuously on a color wheel, which in combination with mixture modeling, allowed for estimation of how erasing and blurring differentially contribute to forgetting. In contrast to the similarity assumption, we show that highly dissimilar interfering items caused the greatest increase in random guess responses, suggesting a greater frequency of memory erasure (Experiments 1-3). Moreover, we found that observers were generally able to resist interference from highly similar items, perhaps through surround suppression (Experiments 1 and 4). Finally, we report that interference from items of intermediate similarity tended to blur or decrease memory precision (Experiments 3 and 4). These results reveal that the nature of visual similarity can differentially alter how information is lost from memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Influence of different types of astigmatism on visual acuity.

    PubMed

    Remón, Laura; Monsoriu, Juan A; Furlan, Walter D

    To investigate the change in visual acuity (VA) produced by different types of astigmatism (classified by the refractive power and position of the principal meridians) in normally accommodating eyes. The lens-induced method was employed to simulate a set of 28 astigmatic blur conditions on different healthy emmetropic eyes. Additionally, 24 values of spherical defocus were simulated on the same eyes for comparison. VA was measured in each case and the results, expressed in logMAR units, were plotted against the modulus of the dioptric power vector (blur strength). LogMAR VA varies linearly with increasing astigmatic blur, with the slope of the line dependent on the accommodative demand for each type of astigmatism. However, in each case we found no statistically significant differences between the three axes investigated (0°, 45°, 90°). Nor were statistically significant differences found between the VA achieved with spherical myopic defocus (MD) and with mixed astigmatism (MA). VA with simple hyperopic astigmatism (SHA) was higher than with simple myopic astigmatism (SMA); however, in this case no conclusive results were obtained in terms of statistical significance. The VA achieved with imposed compound hyperopic astigmatism (CHA) was highly influenced by the eye's accommodative response. VA is correlated with blur strength in a different way for each type of astigmatism, depending on the accommodative demand. VA is better when one of the focal lines lies on the retina irrespective of the axis orientation; accommodation favors this situation. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
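
    The blur strength used here as the independent variable is the modulus of the dioptric power vector (M, J0, J45) computed from a sphero-cylindrical prescription. A minimal computation, assuming the standard power-vector conventions (Thibos notation); the function name is illustrative.

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert sphere/cylinder/axis to power-vector components and
    blur strength B = sqrt(M^2 + J0^2 + J45^2)."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0                # spherical equivalent
    J0 = -(cyl / 2.0) * math.cos(2 * a)   # with/against-the-rule component
    J45 = -(cyl / 2.0) * math.sin(2 * a)  # oblique component
    B = math.sqrt(M * M + J0 * J0 + J45 * J45)
    return M, J0, J45, B
```

    For example, a simple myopic astigmatism of -2.00 DC x 90° gives M = -1.00 D and blur strength B = sqrt(2) ≈ 1.41 D, whereas -1.00 DS of pure spherical defocus gives B = 1.00 D.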

  11. Accommodative and vergence responses to conflicting blur and disparity stimuli during development

    PubMed Central

    Bharadwaj, Shrikant R.; Candy, T. Rowan

    2014-01-01

    Accommodative and vergence responses of the typically developing visual system are generated using a combination of cues, including retinal blur and disparity. The developmental importance of blur and disparity cues in generating these motor responses was assessed by placing the two cues in conflict with each other. Cue-conflicts were induced by placing either −2 D lenses or 2 MA base-out prisms before both eyes of 140 subjects (2.0 months to 40.8 years) while they watched a cartoon movie binocularly at 80 cm. The frequency and amplitude of accommodation to lenses and vergence to prisms increased with age (both p < 0.001), with the vergence response (mean ± 1 SEM = 1.38 ± 0.05 MA) being slightly larger than the accommodative response (1.18 ± 0.04 D) at all ages (p = 0.007). The amplitude of these responses decreased with an increase in conflict stimuli (1 to 3 D or MA) (both p < 0.01). The coupled vergence response to −2 D lenses (0.31 ± 0.06 MA) and coupled accommodative response to 2 MA base-out prisms (0.21 ± 0.02 D) were significantly smaller than (both p < 0.001) and poorly correlated with the open-loop vergence (r = 0.12; p = 0.44) and open-loop accommodation (r = −0.08; p = 0.69), respectively. The typically developing visual system compensates for transiently induced conflicts between blur and disparity stimuli, without exhibiting a strong preference for either cue. The accuracy of this compensation decreases with an increase in amplitude of cue-conflict. PMID:20053067

  12. Vergence driven accommodation with simulated disparity in myopia and emmetropia.

    PubMed

    Maiello, Guido; Kerber, Kristen L; Thorn, Frank; Bex, Peter J; Vera-Diaz, Fuensanta A

    2018-01-01

    The formation of focused and corresponding foveal images requires a close synergy between the accommodation and vergence systems. This linkage is usually decoupled in virtual reality systems and may be dysfunctional in people who are at risk of developing myopia. We study how refractive error affects vergence-accommodation interactions in stereoscopic displays. Vergence and accommodative responses were measured in 21 young healthy adults (n=9 myopes, 22-31 years) while subjects viewed naturalistic stimuli on a 3D display. In Step 1, vergence was driven behind the monitor using a blurred, non-accommodative, uncrossed disparity target. In Step 2, vergence and accommodation were driven back to the monitor plane using naturalistic images that contained structured depth and focus information from size, blur and/or disparity. In Step 1, both refractive groups converged towards the stereoscopic target depth plane, but the vergence-driven accommodative change was smaller in emmetropes than in myopes (F 1,19 =5.13, p=0.036). In Step 2, there was little effect of peripheral depth cues on accommodation or vergence in either refractive group. However, vergence responses were significantly slower (F 1,19 =4.55, p=0.046) and accommodation variability was higher (F 1,19 =12.9, p=0.0019) in myopes. Vergence and accommodation responses are disrupted in virtual reality displays in both refractive groups. Accommodation responses are less stable in myopes, perhaps due to a lower sensitivity to dioptric blur. Such inaccuracies of accommodation may cause long-term blur on the retina, which has been associated with a failure of emmetropization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Associations of Eye Diseases and Symptoms with Self-Reported Physical and Mental Health

    PubMed Central

    Lee, Paul P.; Cunningham, William E.; Nakazono, Terry T.; Hays, Ron D.

    2009-01-01

    Purpose To study the associations of eye diseases and visual symptoms with the most widely used health-related quality of life (HRQOL) generic profile measure. Design HRQOL was assessed using the SF-36® version 1 survey administered to a sample of patients receiving care provided by a physician group practice association. Methods Eye diseases, ocular symptoms, and general health were assessed in a sample of patients from 48 physician groups. A total of 18,480 surveys were mailed out and 7,093 were returned; 5,021 of these had complete data. Multiple linear regression models were used to examine the decrements in self-reported physical and mental health associated with eye diseases and symptoms, including trouble seeing and blurred vision. Results Nine percent of the respondents had cataracts, 2% had age-related macular degeneration, 2% glaucoma, 8% blurred vision, and 13% trouble seeing. Trouble seeing and blurred vision both had statistically unique associations with worse scores on the SF-36 mental health summary score. Only trouble seeing had a significant association with the SF-36 physical health summary score. While these ocular symptoms were significantly associated with SF-36® scores, having an eye disease (cataracts, glaucoma, macular degeneration) was not, after adjusting for other variables in the model. Conclusions Our results suggest an important link between visual symptoms and general HRQOL. The study extends the findings of prior research to show that both trouble seeing and blurred vision have independent, measurable associations with HRQOL, while the presence of specific eye diseases may not. PMID:19712923

  14. Geometric correction method for 3d in-line X-ray phase contrast image reconstruction

    PubMed Central

    2014-01-01

    Background A mechanical system with imperfect alignment of the X-ray phase contrast imaging (XPCI) components causes projection data to be misplaced, and thus results in reconstructed computed tomography (CT) slice images that are blurred or have edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme: performing a composite geometric transformation and then following a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. The experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts, compared to existing correction methods. Conclusions The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768

  15. Accommodation to wavefront vergence and chromatic aberration.

    PubMed

    Wang, Yinan; Kruger, Philip B; Li, James S; Lin, Peter L; Stark, Lawrence R

    2011-05-01

    Longitudinal chromatic aberration (LCA) provides a cue to accommodation with small pupils. However, large pupils increase monochromatic aberrations, which may obscure chromatic blur. In this study, we examined the effect of pupil size and LCA on accommodation. Accommodation was recorded by infrared optometer while observers (nine normal trichromats) viewed a sinusoidally moving Maltese cross target in a Badal stimulus system. There were two illumination conditions: white (3000 K; 20 cd/m2) and monochromatic (550 nm with 10 nm bandwidth; 20 cd/m2) and two artificial pupil conditions (3 and 5.7 mm). Separately, static measurements of wavefront aberration were made with the eye accommodating to targets between 0 and 4 D (COAS, Wavefront Sciences). Large individual differences in accommodation to wavefront vergence and to LCA are a hallmark of accommodation. LCA continues to provide a signal at large pupil sizes despite higher levels of monochromatic aberrations. Monochromatic aberrations may defend against chromatic blur at high spatial frequencies, but accommodation responds best to optical vergence and to LCA at 3 c/deg where blur from higher order aberrations is less.

  16. Accommodation to Wavefront Vergence and Chromatic Aberration

    PubMed Central

    Wang, Yinan; Kruger, Philip B.; Li, James S.; Lin, Peter L.; Stark, Lawrence R.

    2011-01-01

    Purpose Longitudinal chromatic aberration (LCA) provides a cue to accommodation with small pupils. However, large pupils increase monochromatic aberrations, which may obscure chromatic blur. In the present study, we examined the effect of pupil size and LCA on accommodation. Methods Accommodation was recorded by infrared optometer while observers (nine normal trichromats) viewed a sinusoidally moving Maltese cross target in a Badal stimulus system. There were two illumination conditions: white (3000 K; 20 cd/m2) and monochromatic (550 nm with 10 nm bandwidth; 20 cd/m2) and two artificial pupil conditions (3 mm and 5.7 mm). Separately, static measurements of wavefront aberration were made with the eye accommodating to targets between 0 and 4 D (COAS, Wavefront Sciences). Results Large individual differences in accommodation to wavefront vergence and to LCA are a hallmark of accommodation. LCA continues to provide a signal at large pupil sizes despite higher levels of monochromatic aberrations. Conclusions Monochromatic aberrations may defend against chromatic blur at high spatial frequencies, but accommodation responds best to optical vergence and to LCA at 3 c/deg where blur from higher order aberrations is less. PMID:21317666

  17. Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.

    PubMed

    Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita

    2012-06-01

    A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and illumination gradient. Importantly, the method allows a strong reduction of required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications including assessment of in vivo joint kinematics in a variety of cases.

  18. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    PubMed Central

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While traditionally ASL employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3×3×5 mm3 nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  19. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscasts, sports, talk-shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial reductions in compressed file size, to half the original size on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
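A distance-dependent foveation filter of the second kind described above can be sketched as follows: pre-blur the frame at a few Gaussian levels, then blend between levels per pixel according to distance from the foveation center. This is a minimal NumPy illustration under assumed parameters (sigma levels, linear blending), not the paper's filter.

```python
import numpy as np

def _blur(img, sigma):
    """Separable Gaussian blur (pure NumPy, 'same' convolution)."""
    if sigma <= 0:
        return img.copy()
    r = int(3 * sigma) + 1
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def foveate(img, center, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Blend between pre-blurred levels so blur grows with distance
    from the foveation center (sharp at the fovea, coarse far away)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(yy - center[0], xx - center[1])
    d = d / d.max() * (len(sigmas) - 1)          # 0 .. levels-1
    levels = [_blur(img, s) for s in sigmas]
    idx = np.minimum(d.astype(int), len(sigmas) - 2)
    frac = d - idx
    lo = np.choose(idx, levels[:-1])             # nearer (sharper) level
    hi = np.choose(idx, levels[1:])              # farther (blurrier) level
    return lo * (1 - frac) + hi * frac
```

Extending this to several foveation centers just means taking the minimum distance over the centers before blending.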

  20. Circular blurred shape model for multiclass symbol recognition.

    PubMed

    Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia

    2011-04-01

    In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.

  1. Direct formation of nano-pillar arrays by phase separation of polymer blend for the enhanced out-coupling of organic light emitting diodes with low pixel blurring.

    PubMed

    Lee, Cholho; Han, Kyung-Hoon; Kim, Kwon-Hyeon; Kim, Jang-Joo

    2016-03-21

    We have demonstrated a simple and efficient method to fabricate OLEDs with enhanced out-coupling efficiencies and with low pixel blurring by inserting nano-pillar arrays prepared through the lateral phase separation of two immiscible polymers in a blend film. By selecting a proper solvent for the polymer and controlling the composition of the polymer blend, the nano-pillar arrays were formed directly after spin-coating of the polymer blend and selective removal of one phase, needing no complicated processes such as nano-imprint lithography. Pattern size and distribution were easily controlled by changing the composition and thickness of the polymer blend film. Phosphorescent OLEDs using the internal light extraction layer containing the nano-pillar arrays showed a 30% enhancement of the power efficiency, no spectral variation with the viewing angle, and only a small increment in pixel blurring. With these advantages, this newly developed method can be adopted for the commercial fabrication process of OLEDs for lighting and display applications.

  2. Figures of merit for detectors in digital radiography. II. Finite number of secondaries and structured backgrounds.

    PubMed

    Pineda, Angel R; Barrett, Harrison H

    2004-02-01

    The current paradigm for evaluating detectors in digital radiography relies on Fourier methods. Fourier methods rely on a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to what is observed by adding electronic noise to the deterministic blurring model. Background structure does away with the shift invariance. Anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.
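For a linear, shift-invariant system with stationary noise, the Fourier figure of merit being critiqued is typically written in the prewhitening-matched-filter form (the standard expression is assumed here, not quoted from the paper):

```latex
d'^2 \;=\; \iint \frac{\left|\Delta F(u,v)\right|^{2}\,\mathrm{MTF}^{2}(u,v)}{\mathrm{NPS}(u,v)}\,du\,dv
```

where \(\Delta F\) is the Fourier transform of the difference signal between the two hypotheses. The Hotelling observer generalizes this to \(d'^2 = \Delta\bar{s}^{T} K^{-1} \Delta\bar{s}\) with the full data covariance \(K\), which is precisely where stochastic blurring, anatomical variability, and nonstationarity enter and where the Fourier approximation can break down.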

  3. Evaluation of Deblur Methods for Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William M.

    2014-03-31

    Radiography is used as a primary diagnostic for dynamic experiments, providing time-resolved radiographic measurements of areal mass density along a line of sight through the experiment. It is well known that the finite spot extent of the radiographic source, as well as scattering, are sources of blurring of the radiographic images. This blurring interferes with quantitative measurement of the areal mass density. In order to improve the quantitative utility of this diagnostic, it is necessary to deblur or “restore” the radiographs to recover the “true” areal mass density from a radiographic transmission measurement. Towards this end, I am evaluating three separate methods currently in use for deblurring radiographs. I begin by briefly describing the problems associated with image restoration, and outlining the three methods. Next, I illustrate how blurring affects the quantitative measurements using radiographs. I then present the results of the various deblur methods, evaluating each according to several criteria. After I have summarized the results of the evaluation, I give a detailed account of how the restoration process is actually implemented.

  4. Retrospective analysis of a detector fault for a full field digital mammography system

    NASA Astrophysics Data System (ADS)

    Marshall, N. W.

    2006-11-01

    This paper describes objective and subjective image quality measurements acquired as part of a routine quality assurance (QA) programme for an amorphous selenium (a-Se) full field digital mammography (FFDM) system between August 2004 and February 2005. During this period, the FFDM detector developed a fault and was replaced. A retrospective analysis of objective image quality parameters (modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE)) is presented to try to gain a deeper understanding of the detector problem that occurred. These measurements are discussed in conjunction with routine contrast-detail (c-d) results acquired with the CDMAM (Artinis, The Netherlands) test object. There was a significant reduction in MTF over this period of time, indicating an increase in blurring occurring within the a-Se converter layer. This blurring was not isotropic, being greater in the data line direction (left to right across the detector) than in the gate line direction (chest wall to nipple). The initial value of the 50% MTF point was 6 mm-1; for the faulty detector the 50% MTF points occurred at 3.4 mm-1 and 1.0 mm-1 in the gate line and data line directions, respectively. Prior to NNPS estimation, variance images were formed from the detector flat-field images. The spatial distribution of variance was not uniform, suggesting that the physical blurring process was not constant across the detector. This change in variance with image position implied that the stationarity of the noise statistics within the image was limited and that care would be needed when performing objective measurements. The NNPS measurements confirmed the results found for the MTF, with a strong reduction in NNPS as a function of spatial frequency. This reduction was far more severe in the data line direction.
A somewhat tentative DQE estimate was made; in the gate line direction there was little change in DQE up to 2.5 mm-1, but at the Nyquist frequency the DQE had fallen to approximately 35% of the original value. There was severe attenuation of DQE in the data line direction, the DQE falling to less than 0.01 above approximately 3.0 mm-1. C-d results showed an increase in threshold contrast of approximately 25% for details less than 0.2 mm in diameter, while no reduction in c-d performance was found at the largest detail diameters (1.0 mm and above). Despite the detector fault, the c-d curve was found to pass the European protocol's acceptable c-d curve.
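The NNPS estimation referred to above can be sketched in a simplified form: subtract each flat-field ROI's mean, average the squared FFT magnitudes over ROIs, and normalize by the ROI area and the squared large-area signal. This is a generic recipe (real QA protocols also detrend, select ROIs carefully, and radially average), not this paper's exact procedure.

```python
import numpy as np

def nnps(rois, pixel_pitch):
    """Normalized noise power spectrum from a stack of flat-field ROIs.
    rois: (n, ny, nx) array; pixel_pitch in mm. Returns a 2D NNPS."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    large_area_signal = rois.mean()
    ps = np.zeros((ny, nx))
    for roi in rois:
        # remove the ROI mean (crude detrend), accumulate power spectra
        ps += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    ps /= n
    # normalize: pixel area / (samples * large-area signal squared)
    return ps * pixel_pitch ** 2 / (nx * ny * large_area_signal ** 2)
```

By Parseval's theorem, integrating this NNPS over frequency recovers the relative pixel noise variance, which is a useful sanity check on the normalization.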

  5. SU-D-206-06: Task-Specific Optimization of Scintillator Thickness for CMOS-Detector Based Cone-Beam Breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, S; Shrestha, S; Shi, L

    Purpose: To optimize the cesium iodide (CsI:Tl) scintillator thickness in a complementary metal-oxide semiconductor (CMOS)-based detector for use in dedicated cone-beam breast CT. Methods: The imaging task considered was the detection of a microcalcification cluster comprising six 220µm diameter calcium carbonate spheres, arranged in the form of a regular pentagon with 2 mm spacing on its sides and a central calcification, similar to that in the ACR-recommended mammography accreditation phantom, at a mean glandular dose of 4.5 mGy. Generalized parallel-cascades based linear systems analysis was used to determine Fourier-domain image quality metrics in reconstructed object space, from which the detectability index inclusive of anatomical noise was determined for a non-prewhitening numerical observer. For 300 projections over 2π, magnification-associated focal-spot blur, Monte Carlo derived x-ray scatter, K-fluorescent emission and reabsorption within CsI:Tl, CsI:Tl quantum efficiency and optical blur, fiberoptic plate transmission efficiency and blur, CMOS quantum efficiency, pixel aperture function and additive noise, and filtered back-projection to isotropic 105µm voxel pitch with bilinear interpolation were modeled. The imaging geometry of a clinical prototype breast CT system and a 60 kV Cu/Al filtered x-ray spectrum from a 0.3 mm focal spot incident on a 14 cm diameter semi-ellipsoidal breast were used to determine the detectability index for 300–600 µm thick (75µm increments) CsI:Tl. The CsI:Tl thickness that maximized the detectability index was considered optimal. Results: The limiting resolution (10% modulation transfer function, MTF) progressively decreased with increasing CsI:Tl thickness. The zero-frequency detective quantum efficiency, DQE(0), in projection space increased with increasing CsI:Tl thickness. The maximum detectability index was achieved with a 525µm thick CsI:Tl scintillator.
Reduced MTF at mid-to-high frequencies for 600µm thick CsI:Tl lowered the detectability index relative to 525µm CsI:Tl. Conclusion: For the x-ray spectrum and imaging conditions considered, a 525µm thick CsI:Tl scintillator integrated with the CMOS detector is optimal for detecting the microcalcification cluster. Funding support: Supported in part by NIH R01 CA195512. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or the NCI. Disclosures: SV, GV and AK - Research collaboration, Koning Corp., West Henrietta, NY.

  6. Through-focus optical characteristics of monofocal and bifocal soft contact lenses across the peripheral visual field.

    PubMed

    Ji, Qiuzhi; Yoo, Young-Sik; Alam, Hira; Yoon, Geunyoung

    2018-05-01

    To characterise the impact of a monofocal soft contact lens (SCL) and bifocal SCLs on refractive error, depth of focus (DoF) and orientation of blur in the peripheral visual field. A monofocal and two bifocal SCLs, Acuvue Bifocal (AVB, Johnson & Johnson) and Misight Dual Focus (DF, CooperVision) with +2.0 D add power, were modelled using a ray tracing program (ZEMAX) based on their power maps. These SCLs were placed onto the anterior corneal surface of the simulated Atchison myopic eye model to correct for -3.0 D spherical refractive error at the fovea. To quantify through-focus retinal image quality, defocus from -3.5 D to 1.5 D in 0.5 D steps was induced at each horizontal eccentricity from 0 to 40° in 10° steps. Wavefront aberrations were computed for each visual eccentricity and defocus. The retinal images were simulated using a custom software program developed in Matlab (The MathWorks) by convolving the point spread function calculated from the aberration with a reference image. The convolved images were spatially filtered to match the spatial resolution limit of each peripheral eccentricity. Retinal image quality was then quantified by the 2-D cross-correlation between the filtered convolved retinal images and the reference image. Peripheral defocus, DoF and orientation of blur were also estimated. In comparison with the monofocal SCL, the bifocal SCLs degraded retinal image quality while DoF was increased at the fovea. From 10 to 20°, a relatively small amount of myopic shift (less than 0.3 D) was induced by the bifocal SCLs compared with the monofocal lens. DoF was also increased with bifocal SCLs at peripheral vision of 10 and 20°. The trend of myopic shift became less consistent at larger eccentricity, where at 30° DF showed a 0.75 D myopic shift while AVB showed a 0.2 D hyperopic shift, and both AVB and DF exhibited large relative hyperopic defocus at 40°.
The anisotropy in orientation of blur was found to increase and change its direction through focus beyond central vision. This trend was found to be less dominant with the bifocal SCLs than with the monofocal SCL. Bifocal SCLs have a relatively small impact on myopic shift in peripheral refractive error while DoF is increased significantly. We hypothetically suggest that a mechanism underlying myopia control with these bifocal or multifocal contact lenses is an increase in DoF and a decrease in anisotropy of peripheral optical blur. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  7. A hyperspectral image optimizing method based on sub-pixel MTF analysis

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie

    2015-04-01

    Hyperspectral imaging collects tens or hundreds of images continuously divided across the electromagnetic spectrum so that details under different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter mounted in front of the focal plane to acquire images at different wavelengths. In order to alleviate the influence of chromatic aberration in some segments of a hyperspectral series, this paper provides a hyperspectral optimizing method that uses the sub-pixel MTF to evaluate image blurring. The method locates the edge feature in the target window by means of the line spread function (LSF) to calculate a reliable position for the edge feature; the evaluation grid in each line is then interpolated from the real pixel values based on its position relative to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, increasing the dimension of the MTF calculation. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel-value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the proposed method is reliable and efficient for evaluating common images with edges of small tilt angle in real scenes. It also provides a direction for subsequent hyperspectral image blur evaluation and real-time focal-plane adjustment in related imaging systems.
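The sub-pixel (oversampled) MTF idea can be illustrated with the standard slanted-edge computation: project each pixel onto the edge normal, bin the values into an oversampled edge spread function (ESF), differentiate to get the LSF, and take the normalized FFT magnitude. This is a generic sketch of the technique with an assumed, known edge angle, not the authors' exact algorithm.

```python
import numpy as np

def slanted_edge_mtf(img, angle_deg, oversample=4):
    """Sub-pixel MTF from a slanted-edge image: bin pixels by their
    signed distance to the edge into an oversampled ESF, differentiate
    to the LSF, window, and return the normalized FFT magnitude."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    a = np.deg2rad(angle_deg)
    # signed distance of each pixel center from the (known) edge line
    dist = (xx - w / 2) * np.cos(a) - (yy - h / 2) * np.sin(a)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    valid = counts > 0
    q = np.arange(counts.size)
    # average per bin; fill any empty bins by interpolation
    esf = np.interp(q, q[valid], sums[valid] / counts[valid])
    lsf = np.diff(esf) * np.hanning(counts.size - 1)  # window noise tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

Because the slight tilt makes each image row sample the edge at a different sub-pixel phase, the binned ESF is sampled finer than the pixel pitch, which is what pushes the MTF estimate beyond the native Nyquist limit.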

  8. Comparison of Virtual Reality Based Therapy with Customized Vestibular Physical Therapy for the Treatment of Vestibular Disorders

    PubMed Central

    Alahmari, Khalid A.; Sparto, Patrick J; Marchetti, Gregory F.; Redfern, Mark S.; Furman, Joseph M.; Whitney, Susan L.

    2017-01-01

    We examined outcomes in persons with vestibular disorders after receiving virtual reality based therapy (VRBT) or customized vestibular physical therapy (PT) as an intervention for habituation of dizziness symptoms. Twenty subjects with vestibular disorders received VRBT and 18 received PT. During the VRBT intervention, subjects walked on a treadmill within an immersive virtual grocery store environment for 6 sessions approximately one week apart. The PT intervention consisted of gaze stabilization, standing balance and walking exercises individually tailored to each subject. Before, one week after, and 6 months after the intervention, subjects completed self-report and balance performance measures. Before and after each VRBT session, subjects also reported symptoms of nausea, headache, dizziness, and visual blurring. In both groups, significant improvements were noted on the majority of self-report and performance measures one week after the intervention. Subjects maintained improvements on self-report and performance measures at the 6-month follow-up. There were no between-group differences. Nausea, headache, dizziness and visual blurring increased significantly during the VRBT sessions, but overall symptoms were reduced at the end of the six-week intervention. While this study did not find a difference in outcomes between PT and VRBT, the mechanism by which subjects with chronic dizziness demonstrated improvement in dizziness and balance function may be different. PMID:24608691

  9. Alternations of functional connectivity in amblyopia patients: a resting-state fMRI study

    NASA Astrophysics Data System (ADS)

    Wang, Jieqiong; Hu, Ling; Li, Wenjing; Xian, Junfang; Ai, Likun; He, Huiguang

    2014-03-01

    Amblyopia is a common yet hard-to-cure disease in children and results in poor or blurred vision. Efforts such as voxel-based analysis and cortical thickness analysis have been made to reveal the pathogenesis of amblyopia. However, few studies have focused on alterations of functional connectivity (FC) in amblyopia. In this study, we analyzed the abnormalities of amblyopia patients by both seed-based FC with the left/right primary visual cortex and a network constructed over the whole brain. Experiments showed the following results: (1) In the seed-based FC analysis, FC between the superior occipital gyrus and the primary visual cortex was found to decrease significantly on both sides. Abnormalities were also found in the lingual gyrus. These results may reflect functional deficits in both the dorsal and ventral streams. (2) Two increased functional connectivities and 64 decreased functional connectivities were found in the whole-brain network analysis. The decreased functional connectivities are concentrated mostly in the temporal cortex. These results suggest that amblyopia may be caused by deficits in visual information transmission.
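Seed-based FC of this kind reduces to correlating the seed region's mean time course with every voxel's time series (often followed by a Fisher z-transform before group statistics). A minimal sketch, with a hypothetical (time, voxels) data layout and boolean seed mask, neither taken from the paper:

```python
import numpy as np

def seed_fc(data, seed_mask):
    """Pearson correlation between the mean seed time course and every
    voxel time series. data: (T, V) array; seed_mask: boolean (V,)."""
    seed_ts = data[:, seed_mask].mean(axis=1)
    d = data - data.mean(axis=0)        # demean each voxel
    s = seed_ts - seed_ts.mean()
    num = d.T @ s
    den = np.sqrt((d ** 2).sum(axis=0) * (s ** 2).sum())
    return num / np.maximum(den, 1e-12)
```

In practice the correlation map would be computed per subject (after nuisance regression and filtering) and the maps compared between patient and control groups.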

  10. Altered Functional Connectivity of the Primary Visual Cortex in Subjects with Amblyopia

    PubMed Central

    Ding, Kun; Liu, Yong; Yan, Xiaohe; Lin, Xiaoming; Jiang, Tianzi

    2013-01-01

    Amblyopia, which usually occurs during early childhood and results in poor or blurred vision, is a disorder of the visual system that is characterized by a deficiency in an otherwise physically normal eye or by a deficiency that is out of proportion with the structural or functional abnormalities of the eye. Our previous study demonstrated alterations in the spontaneous activity patterns of some brain regions in individuals with anisometropic amblyopia compared to subjects with normal vision. To date, it remains unknown whether patients with amblyopia show characteristic alterations in the functional connectivity patterns in the visual areas of the brain, particularly the primary visual area. In the present study, we investigated the differences in the functional connectivity of the primary visual area between individuals with amblyopia and normal-sighted subjects using resting functional magnetic resonance imaging. Our findings demonstrated that the cerebellum and the inferior parietal lobule showed altered functional connectivity with the primary visual area in individuals with amblyopia, and this finding provides further evidence for the disruption of the dorsal visual pathway in amblyopic subjects. PMID:23844297

  11. A holographic technique for recording a hypervelocity projectile with front surface resolution.

    PubMed

    Kurtz, R L; Loh, H Y

    1970-05-01

    Any motion of the scene during the exposure of a hologram results in a spatial modulation of the recorded fringe contrast. On reconstruction, this produces a spatial amplitude modulation of the reconstructed wavefront, which results in a blurring of the image, not unlike that of a conventional photograph. For motion of the scene sufficient to change the path length of the signal arm by a half wavelength, this blurring is generally prohibitive. This paper describes a proposed holographic technique which offers promise for front light resolution of targets moving at high speeds, heretofore unobtainable by conventional methods.

  12. Image deblurring in smartphone devices using built-in inertial measurement sensors

    NASA Astrophysics Data System (ADS)

    Šindelář, Ondřej; Šroubek, Filip

    2013-01-01

    Long-exposure handheld photography is degraded by blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize the inertial sensors (accelerometers and gyroscopes) in modern smartphones to record the exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photograph based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.
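The core idea, turning recorded angular rates into a spatial blur kernel, can be sketched as follows. Under a small-angle pinhole model, a rotation θ about the camera's y-axis shifts the image by roughly f·θ pixels horizontally (and the x-axis rotation vertically); accumulating gyroscope samples over the exposure traces the blur path, which is rasterized into a PSF. The function name, sample layout, and pinhole approximation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def psf_from_gyro(omega, dt, focal_px, size=15):
    """Rasterize a blur kernel from gyroscope samples.
    omega: (N, 2) angular rates [wx, wy] in rad/s; dt: sample period in s;
    focal_px: focal length in pixels (small-angle pinhole model)."""
    theta = np.cumsum(omega, axis=0) * dt        # accumulated angles
    sx = focal_px * theta[:, 1]                  # yaw -> horizontal shift
    sy = focal_px * theta[:, 0]                  # pitch -> vertical shift
    c = size // 2
    ix = np.clip(np.round(sx - sx.mean() + c).astype(int), 0, size - 1)
    iy = np.clip(np.round(sy - sy.mean() + c).astype(int), 0, size - 1)
    psf = np.zeros((size, size))
    np.add.at(psf, (iy, ix), 1.0)                # accumulate dwell time
    return psf / psf.sum()
```

Once the kernel is known, the deblurring itself becomes non-blind deconvolution (e.g., Wiener or Richardson-Lucy), which is what makes the sensor-aided approach fast enough for on-device use.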

  13. Analytical Properties of Time-of-Flight PET Data

    PubMed Central

    Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.

    2015-01-01

    We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the “bow-tie” property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data. PMID:18460746
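The TOF data model referred to above (line integrals weighted by a spatially invariant TOF kernel h) can be written in the standard 2D parameterization; the notation here is assumed, not taken from the paper:

```latex
p(s,\phi,t) \;=\; \int_{-\infty}^{\infty} f\!\left(s\cos\phi-\ell\sin\phi,\; s\sin\phi+\ell\cos\phi\right)\, h(\ell - t)\,d\ell
```

where \(s\) is the radial coordinate, \(\phi\) the projection angle, \(\ell\) the position along the line of response, and \(t\) the TOF offset. Setting \(h \equiv 1\) recovers the ordinary 2D Radon transform, whose Fourier support exhibits the classical bow-tie shape that the paper extends to the TOF case.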

  14. Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.

    PubMed

    Jun, Kyungtaek; Yoon, Seokhwan

    2017-01-25

    Since X-ray tomography is now widely adopted in many different areas, it has become more crucial to find a robust routine for handling tomographic data to obtain better-quality reconstructions. Though several techniques exist, a more automated method for removing the errors that hinder clear image reconstruction would be helpful. Here, we propose an alternative method and a new algorithm using the sinogram and a fixed point. An advanced physical concept, the Center of Attenuation (CA), is also introduced to show how this fixed point is applied to the reconstruction of images having the errors we categorize in this article. Our technique showed promising performance in restoring images with translation and vertical tilt errors.
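A common way to exploit sinogram structure for alignment, shown here as a generic illustration rather than the paper's fixed-point/Center-of-Attenuation formulation, is to fit the per-angle centroid of a 360° parallel-beam sinogram to a sinusoid: the centroid traces c + a·cos θ + b·sin θ, and the constant term c is the rotation-axis column.

```python
import numpy as np

def find_rotation_axis(sino, angles):
    """Estimate the rotation-axis column of a 360-degree parallel-beam
    sinogram. sino: (n_angles, n_cols); angles in radians."""
    cols = np.arange(sino.shape[1])
    # center of mass of each projection row
    com = (sino * cols).sum(axis=1) / sino.sum(axis=1)
    # least-squares fit: com(theta) = c + a*cos(theta) + b*sin(theta)
    A = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    coef, *_ = np.linalg.lstsq(A, com, rcond=None)
    return coef[0]        # the constant term is the axis column
```

Shifting the sinogram columns so this estimate lands on the detector center removes the translation error before filtered back-projection; tilt errors require the more general treatment the paper develops.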

  15. Methods and apparatus for analysis of chromatographic migration patterns

    DOEpatents

    Stockham, T.G.; Ives, J.T.

    1993-12-28

    A method and apparatus are presented for sharpening signal peaks in a signal representing the distribution of biological or chemical components of a mixture separated by a chromatographic technique such as, but not limited to, electrophoresis. A key step in the method is the use of a blind deconvolution technique, presently embodied as homomorphic filtering, to reduce the contribution of a blurring function to the signal encoding the peaks of the distribution. The invention further includes steps and apparatus directed to determination of a nucleotide sequence from a set of four such signals representing DNA sequence data derived by electrophoretic means. 16 figures.

  16. Considerations in high resolution skeletal muscle DTI using single-shot EPI with stimulated echo preparation and SENSE

    PubMed Central

    Karampinos, Dimitrios C.; Banerjee, Suchandrima; King, Kevin F.; Link, Thomas M.; Majumdar, Sharmila

    2011-01-01

    Previous studies have shown that skeletal muscle diffusion tensor imaging (DTI) can non-invasively probe changes in the muscle fiber architecture and microstructure in diseased and damaged muscles. However, DTI fiber reconstruction in small muscles and in muscle regions close to aponeuroses and tendons remains challenging because of partial volume effects. Increasing the spatial resolution of skeletal muscle single-shot diffusion weighted (DW)-EPI can be hindered by the inherently low SNR of muscle DW-EPI due to the short muscle T2 and the high sensitivity of single-shot EPI to off-resonance effects and T2* blurring. In the present work, eddy-current compensated diffusion-weighted stimulated echo preparation is combined with sensitivity encoding (SENSE) to maintain good SNR properties and reduce the sensitivity to distortions and T2* blurring in high resolution skeletal muscle single-shot DW-EPI. An analytical framework is developed for optimizing the reduction factor and diffusion weighting time to achieve maximum SNR. Arguments for the selection of the experimental parameters are then presented considering the compromise between SNR, B0-induced distortions, T2* blurring effects and tissue incoherent motion effects. Based on the selected parameters in a high resolution skeletal muscle single-shot DW-EPI protocol, imaging protocols at lower acquisition matrix sizes are defined with matched bandwidth in the phase-encoding direction and SNR. In vivo results show that high resolution skeletal muscle DTI with minimized sensitivity to geometric distortions and T2* blurring is feasible using the proposed methodology. In particular, a significant benefit is demonstrated from reducing partial volume effects on resolving multi-pennate muscles and muscles with small cross sections in calf muscle DTI. PMID:22081519

  17. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping enable complex problems to be modeled using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
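The discrete multivalued neuron at the core of MLMVN maps its complex-valued weighted sum onto one of k sectors of the unit circle. A minimal sketch of that forward pass, assuming the standard k-sector activation (function names and the choice of k are illustrative, not from the paper):

```python
import cmath
import math

def mvn_activation(z, k):
    """Discrete MVN activation: map the argument of z to one of k unit-circle sectors."""
    ang = cmath.phase(z) % (2 * math.pi)
    j = int(k * ang / (2 * math.pi))          # sector index 0..k-1
    return cmath.exp(1j * 2 * math.pi * j / k)

def mvn_output(weights, inputs, k):
    """Multivalued neuron: complex weighted sum followed by the sector activation."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return mvn_activation(z, k)
```

The derivative-free learning rule mentioned in the abstract adjusts the complex weights by rotating the weighted sum toward the desired sector, rather than by gradient descent.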

  18. Adaptive optical imaging through complex living plant cells

    NASA Astrophysics Data System (ADS)

    Tamada, Yosuke; Hayano, Yutaka; Murata, Takashi; Oya, Shin; Honma, Yusuke; Kanazawa, Minoru; Miura, Noriaki; Hasebe, Mitsuyasu; Kamei, Yasuhiro; Hattori, Masayuki

    2017-04-01

Live-cell imaging using fluorescent molecules is now essential for biological research. However, images of living cells are blurred, and the blur becomes stronger with depth inside cells and tissues. This image blur is caused by the disturbance of light passing through optically inhomogeneous living cells and tissues. Here, we show adaptive optics (AO) imaging of living plant cells. AO was developed in astronomy to correct the disturbance of light caused by atmospheric turbulence. We developed an AO microscope effective for the observation of living plant cells with strong disturbance from chloroplasts, and successfully obtained clear images inside plant cells.

  19. Drag queens' use of language and the performance of blurred gendered and racial identities.

    PubMed

    Mann, Stephen L

    2011-01-01

Building on Barrett (1998), this study provides a sociolinguistic analysis of the language used by Suzanne, a European-American drag queen, during her on-stage performance in the southeastern United States. Suzanne uses wigs and costumes to portray a female character on stage, but never hides the fact that she is biologically male. She is also a member of a predominantly African-American cast. Through her creative use of linguistic features such as style-mixing (i.e., the use of linguistic features shared across multiple language varieties) and expletives, Suzanne is able to perform an identity that frequently blurs gender and racial lines.

  20. Blurred lines: the General Medical Council guidance on doctors and social media .

    PubMed

    Cork, Nick; Grant, Paul

    2016-06-01

    Digital technology in the early 21st century has introduced significant changes to everyday life and the ways in which we practise medicine. It is important that the ease and practicality of accessing and disseminating information does not intrude on the high standards expected of doctors, and that the boundaries between professional and public life do not become blurred through the increasing adoption of social media. This said, as with any such profound disruption, the social media age could be responsible for driving a new understanding of what it means to be a medical professional. © 2016 Royal College of Physicians.

  1. Feasibility of infrared Earth tracking for deep-space optical communications.

    PubMed

    Chen, Yijiang; Hemmati, Hamid; Ortiz, Gerry G

    2012-01-01

    Infrared (IR) Earth thermal tracking is a viable option for optical communications to distant planet and outer-planetary missions. However, blurring due to finite receiver aperture size distorts IR Earth images in the presence of Earth's nonuniform thermal emission and limits its applicability. We demonstrate a deconvolution algorithm that can overcome this limitation and reduce the error from blurring to a negligible level. The algorithm is applied successfully to Earth thermal images taken by the Mars Odyssey spacecraft. With the solution to this critical issue, IR Earth tracking is established as a viable means for distant planet and outer-planetary optical communications. © 2012 Optical Society of America
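The specific deconvolution algorithm used for the Odyssey images is not given in this abstract. As a generic illustration of removing a known finite-aperture blur, a Wiener deconvolution sketch (the regularization constant `k` is an assumed free parameter, not from the paper):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: F = H* G / (|H|^2 + k).

    `k` trades off noise amplification against residual blur."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

With k -> 0 this reduces to an inverse filter; in practice k is chosen from the noise level so that centroiding errors from residual blur stay below the tracking requirement.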

  2. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  3. Acute air pollution-related symptoms among residents in Chiang Mai, Thailand.

    PubMed

    Wiwatanadate, Phongtape

    2014-01-01

    Open burnings (forest fires, agricultural, and garbage burnings) are the major sources of air pollution in Chiang Mai, Thailand. A time series prospective study was conducted in which 3025 participants were interviewed for 19 acute symptoms with the daily records of ambient air pollutants: particulate matter less than 10 microm in size (PM10), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3). PM10 was positively associated with blurred vision with an adjusted odds ratio (OR) of 1.009. CO was positively associated with lower lung and heart symptoms with adjusted ORs of 1.137 and 1.117. NO2 was positively associated with nosebleed, larynx symptoms, dry cough, lower lung symptoms, heart symptoms, and eye irritation with the range of adjusted ORs (ROAORs) of 1.024 to 1.229. SO2 was positively associated with swelling feet, skin symptoms, eye irritation, red eyes, and blurred vision with ROAORs of 1.205 to 2.948. Conversely, O3 was negatively related to running nose, burning nose, dry cough, body rash, red eyes, and blurred vision with ROAORs of 0.891 to 0.979.
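The adjusted odds ratios quoted above are exponentiated logistic-regression coefficients. A one-line reminder of how such an OR scales to larger pollutant increments (illustrative arithmetic, not computed in the study itself):

```python
import math

def odds_ratio(beta, delta=1.0):
    """Adjusted odds ratio for a `delta`-unit increase in an exposure:
    OR = exp(beta * delta), where beta is the logistic regression coefficient."""
    return math.exp(beta * delta)

# An OR of 1.009 per unit of PM10 implies 1.009**10 for a 10-unit increase.
beta_pm10 = math.log(1.009)
```

Because ORs multiply across units, an apparently small per-unit OR can correspond to a substantial risk increase over the pollutant ranges seen during the burning season.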

  4. Maskless EUV lithography: an already difficult technology made even more complicated?

    NASA Astrophysics Data System (ADS)

    Chen, Yijian

    2012-03-01

In this paper, we present the research progress made in maskless EUV lithography and discuss the emerging opportunities for this disruptive technology. It will be shown that a nanomirror-based maskless approach is one path to cost-effective and defect-free EUV lithography, rather than one that makes it even more complicated. The focus of our work is to optimize the existing vertical comb process and scale down the mirror size from several microns to the sub-micron regime. The nanomirror device scaling, system configuration, and design issues will be addressed. We also report our theoretical and simulation study of reflective EUV nanomirror-based imaging behavior. Dense line/space patterns are formed with an EUV nanomirror array by assigning a phase shift of π to neighboring nanomirrors. Our simulation results show that phase/intensity imbalance is an inherent characteristic of maskless EUV lithography, but it only poses a manageable challenge to CD control and process window. The image blur induced by wafer scan and EUV laser jitter is discussed and a blurred imaging theory is constructed. This blur effect is found to degrade the image contrast at a level that mainly depends on the wafer scan speed.

  5. Examination of an Electronic Patient Record Display Method to Protect Patient Information Privacy.

    PubMed

    Niimi, Yukari; Ota, Katsumasa

    2017-02-01

    Electronic patient records facilitate the provision of safe, high-quality medical care. However, because personnel can view almost all stored information, this study designed a display method using a mosaic blur (pixelation) to temporarily conceal information patients do not want shared. This study developed an electronic patient records display method for patient information that balanced the patient's desire for personal information protection against the need for information sharing among medical personnel. First, medical personnel were interviewed about the degree of information required for both individual duties and team-based care. Subsequently, they tested a mock display method that partially concealed information using a mosaic blur, and they were interviewed about the effectiveness of the display method that ensures patient privacy. Participants better understood patients' demand for confidentiality, suggesting increased awareness of patients' privacy protection. However, participants also indicated that temporary concealment of certain information was problematic. Other issues included the inconvenience of removing the mosaic blur to obtain required information and risk of insufficient information for medical care. Despite several issues with using a display method that temporarily conceals information according to patient privacy needs, medical personnel could accept this display method if information essential to medical safety remains accessible.

  6. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
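The ML estimator in this framework minimizes the summed squared residuals over all measured low-resolution frames, each modeled by its own warp-blur-downsample operator. A toy gradient-descent sketch under that model (the operator representation as callable pairs, the step size, and the iteration count are all illustrative assumptions):

```python
import numpy as np

def ml_superres(frames, ops, x0, iters=200, step=0.1):
    """ML superresolution: minimize sum_k ||y_k - A_k x||^2 by gradient descent.

    frames: list of measured low-resolution vectors y_k
    ops:    list of (A, At) callable pairs, where A applies the k-th
            warp/blur/downsample operator and At applies its adjoint."""
    x = x0.copy()
    for _ in range(iters):
        grad = np.zeros_like(x)
        for y, (A, At) in zip(frames, ops):
            grad += At(A(x) - y)          # gradient of the k-th residual term
        x -= step * grad
    return x
```

The MAP estimator adds a prior-gradient term to `grad`, and POCS replaces the gradient step with projections onto per-measurement constraint sets; the hybrid method in the paper combines the two.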

  7. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of the blind motion image deblurring, which can effectively eliminate the staircase effect of the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use L1 norm and H1 norm as the blur kernel regularization term, considering the sparsity and smoothing of the motion blur kernel. Third, because it is difficult to solve the numerically computational complexity problem of the proposed model owing to the intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration, and a split Bregman algorithm in the inner iteration. And we also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms the previous representative methods in both quality of visual perception and quantitative measurement.

  8. Computed tomography in the evaluation of Crohn disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, H.I.; Gore, R.M.; Margulis, A.R.

    1983-02-01

The abdominal and pelvic computed tomographic examinations in 28 patients with Crohn disease were analyzed and correlated with conventional barium studies, sinograms, and surgical findings. Mucosal abnormalities such as aphthous lesions, pseudopolyps, and ulcerations were only imaged by conventional techniques. Computed tomography proved superior in demonstrating the mural, serosal, and mesenteric abnormalities such as bowel wall thickening (82%), fibrofatty proliferation of mesenteric fat (39%), mesenteric abscess (25%), inflammatory reaction of the mesentery (14%), and mesenteric lymphadenopathy (18%). Computed tomography was most useful clinically in defining the nature of mass effects, separation, or displacement of small bowel segments seen on small bowel series. Although conventional barium studies remain the initial diagnostic procedure in evaluating Crohn disease, computed tomography can be a useful adjunct in resolving difficult clinical and radiologic diagnostic problems.

  9. Clinical implementation of an exit detector-based dose reconstruction tool for helical tomotherapy delivery quality assurance.

    PubMed

    Deshpande, Shrikant; Xing, Aitang; Metcalfe, Peter; Holloway, Lois; Vial, Philip; Geurts, Mark

    2017-10-01

The aim of this study was to validate the accuracy of an exit detector-based dose reconstruction tool for helical tomotherapy (HT) delivery quality assurance (DQA). An exit detector-based DQA tool was developed for patient-specific HT treatment verification. The tool performs a dose reconstruction on the planning image using the sinogram measured by the HT exit detector with no objects in the beam (i.e., static couch), and compares the reconstructed dose to the planned dose. Three vendor-supplied ("TomoPhant") plans with a cylindrical solid water ("cheese") phantom were used for validation. Each "TomoPhant" plan was modified with intentional multileaf collimator leaf open time (MLC LOT) errors to assess the sensitivity and robustness of this tool. Four scenarios were tested: leaf 32 "stuck open," leaf 42 "stuck open," and random leaf LOT closed by mean values of first 2% and then 4%. A static couch DQA procedure was then run five times (once with the unmodified sinogram and four times with modified sinograms) for each of the three "TomoPhant" treatment plans. First, the original optimized delivery plan was compared with the original machine-agnostic delivery plan; then the original optimized plans with a known modification applied (intentional MLC LOT error) were compared to the corresponding error-plan exit detector measurements. An absolute dose comparison between calculated and ion chamber (A1SL, Standard Imaging, Inc., WI, USA) measured dose was performed for the unmodified "TomoPhant" plans. A 3D gamma evaluation (2%/2 mm global) was performed by comparing the planned dose ("original planned dose" for unmodified plans and "adjusted planned dose" for each intentional error) to the exit detector-reconstructed dose for all three "TomoPhant" plans.
Finally, DQA for 119 clinical (treatment length <25 cm) and three cranio-spinal irradiation (CSI) plans were measured with both the ArcCHECK phantom (Sun Nuclear Corp., Melbourne, FL, USA) and the exit detector DQA tool to assess the time required for DQA and similarity between two methods. The measured ion chamber dose agreed to within 1.5% of the reconstructed dose computed by the exit detector DQA tool on a cheese phantom for all unmodified "Tomophant" plans. Excellent agreement in gamma pass rate (>95%) was observed between the planned and reconstructed dose for all "Tomophant" plans considered using the tool. The gamma pass rate from 119 clinical plan DQA measurements was 94.9% ± 1.5% and 91.9% ± 4.37% for the exit detector DQA tool and ArcCHECK phantom measurements (P = 0.81), respectively. For the clinical plans (treatment length <25 cm), the average time required to perform DQA was 24.7 ± 3.5 and 39.5 ± 4.5 min using the exit detector QA tool and ArcCHECK phantom, respectively, whereas the average time required for the 3 CSI treatments was 35 ± 3.5 and 90 ± 5.2 min, respectively. The exit detector tool has been demonstrated to be faster for performing the DQA with equivalent sensitivity for detecting MLC LOT errors relative to a conventional phantom-based QA method. In addition, comprehensive MLC performance evaluation and features of reconstructed dose provide additional insight into understanding DQA failures and the clinical relevance of DQA results. © 2017 American Association of Physicists in Medicine.

  10. Optimization of exposure parameters for pediatric chest x-ray imaging

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Ye-Seul; Kim, Hee-Joung

    2012-03-01

Pediatric patients are more susceptible to the effects of ionizing radiation than adults: they are smaller, more radiosensitive, and many cannot stand unassisted. These characteristics affect the method of imaging projection and how dose is optimized. The purpose of this study was to investigate the effect of various technical parameters on dose optimization in pediatric chest radiological examinations by evaluating effective dose and effective detective quantum efficiency (eDQE), including the scatter radiation from the object, the blur caused by the focal spot, geometric magnification, and detector characteristics. For tube voltages ranging from 40 to 90 kV in 10 kV increments at focus-to-detector distances of 100, 110, 120, 150, and 180 cm, the eDQE was evaluated at the same effective dose. The results showed that the eDQE was largest at 60 kVp both without and with an anti-scatter grid. In particular, the eDQE was considerably higher without the use of an anti-scatter grid at equivalent effective dose, indicating that reducing the scatter radiation did not compensate for the loss of absorbed effective photons in the grid. When the grid was not used, the eDQE increased with increasing focus-to-detector distance because of the greater effective modulation transfer function (eMTF) with the lower focal spot blurring. In conclusion, for pediatric patients the amount of scattered radiation is smaller, and grid attenuation increased unnecessary radiation dose.

  11. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, any aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of the depth-dependent axial Point Spread Function (PSF) that these factors produce in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.

  12. Contrast summation across eyes and space is revealed along the entire dipper function by a "Swiss cheese" stimulus.

    PubMed

    Meese, Tim S; Baker, Daniel H

    2011-01-27

    Previous contrast discrimination experiments have shown that luminance contrast is summed across ocular (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) and spatial (T. S. Meese & R. J. Summers, 2007) dimensions at threshold and above. However, is this process sufficiently general to operate across the conjunction of eyes and space? Here we used a "Swiss cheese" stimulus where the blurred "holes" in sine-wave carriers were of equal area to the blurred target ("cheese") regions. The locations of the target regions in the monocular image pairs were interdigitated across eyes such that their binocular sum was a uniform grating. When pedestal contrasts were above threshold, the monocular neural images contained strong evidence that the high-contrast regions in the two eyes did not overlap. Nevertheless, sensitivity to dual contrast increments (i.e., to contrast increments in different locations in the two eyes) was a factor of ∼1.7 greater than to single increments (i.e., increments in a single eye), comparable with conventional binocular summation. This provides evidence for a contiguous area summation process that operates at all contrasts and is influenced little, if at all, by eye of origin. A three-stage model of contrast gain control fitted the results and possessed the properties of ocularity invariance and area invariance owing to its cascade of normalization stages. The implications for a population code for pattern size are discussed.

  13. Deconvolution of the PSF of a seismic lens

    NASA Astrophysics Data System (ADS)

    Yu, Jianhua; Wang, Yue; Schuster, Gerard T.

    2002-12-01

We show that if seismic data d are related to the migration image by m_mig = L^T d, then m_mig is a blurred version of the actual reflectivity distribution m, i.e., m_mig = (L^T L) m. Here L is the acoustic forward modeling operator under the Born approximation, where d = L m. The blurring operator (L^T L), or point spread function, distorts the image because of defects in the seismic lens, i.e., small source-receiver recording aperture and irregular/coarse geophone-source spacing. These distortions can be partly suppressed by applying the deblurring operator (L^T L)^{-1} to the migration image to get m = (L^T L)^{-1} m_mig. This deblurred image is known as a least squares migration (LSM) image if (L^T L)^{-1} L^T is applied to the data d using a conjugate gradient method, and as a migration deconvolved (MD) image if (L^T L)^{-1} is directly applied to the migration image m_mig in (k_x, k_y, z) space. The MD algorithm is an order of magnitude faster than LSM, but it employs more restrictive assumptions. We also show that deblurring can be used to filter out coherent noise in the data, such as multiple reflections. The procedure is to, e.g., decompose the forward modeling operator into primary and multiple reflection operators, d = (L_prim + L_multi) m, invert for m, and find the primary reflection data by d_prim = L_prim m. This method is named least squares migration filtering (LSMF). The above three algorithms (LSM, MD, and LSMF) might be useful for attacking problems in optical imaging.
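Applying (L^T L)^{-1} never requires forming the inverse explicitly: conjugate gradients only needs the blurring operator as a black box. A matrix-free sketch, assuming `LtL` is a callable applying the symmetric positive-definite operator (names and iteration count are illustrative):

```python
import numpy as np

def deblur_migration(LtL, m_mig, iters=50):
    """Solve (L^T L) m = m_mig with conjugate gradients.

    LtL: callable applying the blurring (point-spread) operator to a vector."""
    m = np.zeros_like(m_mig)
    r = m_mig - LtL(m)            # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = LtL(p)
        alpha = rs / (p @ Ap)
        m += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
```

The same loop underlies both LSM (operator applied as L^T after L, data-domain) and MD (operator applied directly to m_mig); the MD speedup comes from diagonalizing the blur in (k_x, k_y, z) space instead.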

  14. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
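For reference, the classical SOR sweep that the paper builds on, with a fixed relaxation parameter omega (the paper's adaptive update rule for omega is not reproduced here; +SOR would additionally clamp negative values after each sweep):

```python
import numpy as np

def sor_solve(A, b, omega=1.2, iters=100):
    """Classical SOR (Gauss-Seidel with relaxation) for A x = b.

    Converges for symmetric positive-definite A when 0 < omega < 2."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]      # off-diagonal contribution
            gs = (b[i] - sigma) / A[i, i]          # Gauss-Seidel update
            x[i] = (1 - omega) * x[i] + omega * gs # relaxed update
    return x
```

In deconvolution, A is the normal-equations operator H^T H of the blur; the paper's contribution is choosing and updating omega so that the error at a finite iteration count is minimized, rather than only the asymptotic rate.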

  15. An overview of methods to mitigate artifacts in optical coherence tomography imaging of the skin.

    PubMed

    Adabi, Saba; Fotouhi, Audrey; Xu, Qiuyun; Daveluy, Steve; Mehregan, Darius; Podoleanu, Adrian; Nasiriavanaki, Mohammadreza

    2018-05-01

    Optical coherence tomography (OCT) of skin delivers three-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution modality, OCT images suffer from some artifacts that lead to misinterpretation of tissue structures. Therefore, an overview of methods to mitigate artifacts in OCT imaging of the skin is of paramount importance. Speckle, intensity decay, and blurring are three major artifacts in OCT images. Speckle is due to the low coherent light source used in the configuration of OCT. Intensity decay is a deterioration of light with respect to depth, and blurring is the consequence of deficiencies of optical components. Two speckle reduction methods (one based on artificial neural network and one based on spatial compounding), an attenuation compensation algorithm (based on Beer-Lambert law) and a deblurring procedure (using deconvolution), are described. Moreover, optical properties extraction algorithm based on extended Huygens-Fresnel (EHF) principle to obtain some additional information from OCT images are discussed. In this short overview, we summarize some of the image enhancement algorithms for OCT images which address the abovementioned artifacts. The results showed a significant improvement in the visibility of the clinically relevant features in the images. The quality improvement was evaluated using several numerical assessment measures. Clinical dermatologists benefit from using these image enhancement algorithms to improve OCT diagnosis and essentially function as a noninvasive optical biopsy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
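The Beer-Lambert-based attenuation compensation mentioned above can be sketched as rescaling each depth profile (A-scan) by the inverse of an assumed single-exponential decay; `mu` (attenuation coefficient) and `dz` (pixel depth spacing) are illustrative parameters, not values from the paper:

```python
import numpy as np

def compensate_attenuation(ascan, mu, dz):
    """Undo single-exponential Beer-Lambert decay I(z) = I0 * exp(-2*mu*z).

    The factor 2 accounts for the round trip of light to depth z and back."""
    z = np.arange(len(ascan)) * dz
    return ascan * np.exp(2 * mu * z)
```

In real skin, mu varies between layers, so practical schemes estimate a depth-resolved attenuation profile rather than one global constant; this sketch shows only the homogeneous case.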

  16. Heterochromatin protein 1: don't judge the book by its cover!

    PubMed

    Hediger, Florence; Gasser, Susan M

    2006-04-01

    The name heterochromatin protein 1 (HP1) suggests that this small nuclear factor plays a role in forming heterochromatic domains. It was noticed years ago, however, that the distribution of HP1 on polytene chromosomes was not restricted to chromocenters or telomeres. HP1 was also found, reproducibly, along the euchromatic arms. A possible function in euchromatic gene regulation was postulated. Now, a large body of data has blurred the definition of HP1 as a structural component of heterochromatin, revealing its two-faced nature. Not only do HP1 isoforms have specific binding sites in both heterochromatic and euchromatic domains but they might also participate in the repression and activation of transcription in both compartments.

  17. Ultrafast electron microscopy: Instrument response from the single-electron to high bunch-charge regimes

    NASA Astrophysics Data System (ADS)

    Plemmons, Dayne A.; Flannigan, David J.

    2017-09-01

    We determine the instrument response of an ultrafast electron microscope equipped with a conventional thermionic electron gun and absent modifications beyond the optical ports. Using flat, graphite-encircled LaB6 cathodes, we image space-charge effects as a function of photoelectron-packet population and find that an applied Wehnelt bias has a negligible effect on the threshold levels (>103 electrons per pulse) but does appear to suppress blurring at the upper limits (∼105 electrons). Using plasma lensing, we determine the instrument-response time for 700-fs laser pulses and find that single-electron packets are laser limited (1 ps), while broadening occurs well below the space-charge limit.

  18. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g. sliding-window) which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content and 2) a sparse signal-representation which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such assumption as in brain-analysis. Results on several large resting-fMRI data sets highlight the potential of these approaches.
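The sliding-window estimator whose locality assumption the authors critique can be written in a few lines; the window length `win` is exactly the free parameter that the wavelet-space and sparse-representation approaches try to avoid fixing in advance:

```python
import numpy as np

def sliding_window_fc(x, y, win):
    """Dynamic functional connectivity: Pearson r between two time series
    computed in each sliding window of length `win`."""
    n = len(x) - win + 1
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(n)])
```

A connectivity change lasting much less than `win` samples is averaged away by this estimator, which is the blurring of spontaneous changes that motivates the windowless alternatives.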

  19. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
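For Tikhonov-regularized least squares, the GCV score being minimized can be evaluated directly via the SVD; this dense-SVD version is the slow baseline that the paper's Lanczos and Gauss-quadrature machinery approximates for large images:

```python
import numpy as np

def gcv_score(A, b, lam):
    """GCV(lam) = n * ||(I - H(lam)) b||^2 / trace(I - H(lam))^2, where
    H(lam) = A (A^T A + lam I)^{-1} A^T is the influence matrix."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s ** 2 / (s ** 2 + lam)            # Tikhonov filter factors
    resid = b - U @ (filt * (U.T @ b))        # (I - H) b via the SVD
    return n * (resid @ resid) / (n - filt.sum()) ** 2
```

Minimizing this score over lam (and, in the paper, over the PSF parameters as well) picks the regularization without needing the noise variance, which is what makes GCV attractive for data-driven restoration.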

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming, E-mail: liyanrong@mail.ihep.ac.cn

    Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
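
    A minimal sketch of the Gaussian-sum transfer function and the resulting blurred echo. The node centers, widths, weights, and the sinusoidal continuum are all illustrative; the paper additionally models the continuum as a damped random walk and fits the Gaussian weights to data:

```python
import numpy as np

def gaussian_sum_transfer(taus, centers, widths, weights):
    """Transfer function as a sum of relatively displaced Gaussian
    responses -- the non-parametric form described in the abstract."""
    psi = np.zeros_like(taus)
    for c, w, a in zip(centers, widths, weights):
        psi += a * np.exp(-0.5 * ((taus - c) / w) ** 2)
    return psi

def echo_line(continuum, psi, dt):
    """Blurred echo: the line light curve is the continuum convolved
    with the transfer function (discrete causal convolution)."""
    return np.convolve(continuum, psi, mode="full")[:len(continuum)] * dt

dt = 0.5
taus = np.arange(0, 40, dt)                       # time-delay grid (days)
psi = gaussian_sum_transfer(taus, centers=[5, 15],
                            widths=[2, 4], weights=[1.0, 0.5])
t = np.arange(0, 200, dt)
continuum = np.sin(2 * np.pi * t / 50.0)          # toy continuum variation
line = echo_line(continuum, psi, dt)
```

    With enough displaced Gaussians, psi can approximate an arbitrary velocity-integrated delay distribution, which is what lets the method drop the usual parametric assumption (e.g. a top-hat transfer function).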

  1. Chorioretinitis sclopetaria from BB ex memoria.

    PubMed

    Otto, C S; Nixon, K L; Mazzoli, R A; Raymond, W R; Ainbinder, D J; Hansen, E A; Krolicki, T J

    2001-01-01

    Chorioretinitis sclopetaria presents a characteristic pattern of choroidal and retinal changes caused by a high velocity projectile passing into the orbit, in close proximity to the globe. While it is unlikely that a patient would completely forget the trauma causing such damage, preserved or compensated visual function may blur the patient's memory of these events over time. Characteristic physical findings help to clarify the antecedent history. Despite the lack of an acknowledged history of ocular trauma or surgery in our case, the characteristic ocular findings discovered at presentation allowed for recognition of the underlying etiology. Because of good visual function, the patient had completely forgotten about the trauma that occurred 12 years earlier. Strabismus surgery was performed for treatment of the presenting symptomatic diplopia. The pathognomonic findings in chorioretinitis sclopetaria are invaluable in correctly diagnosing this condition, especially when a history of ocular trauma is unavailable.

  2. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with better preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and are more visually pleasing.
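
    The observation model and cost-function minimization can be sketched as follows, with plain gradient descent standing in for the Hopfield-network minimization and a 1-D signal standing in for the image. The PSF, decimation factor, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def forward(x, psf, factor):
    """Observation model y = D(B x): PSF blur followed by decimation."""
    blurred = np.convolve(x, psf, mode="same")
    return blurred[::factor]

def super_resolve(y, psf, factor, n, iters=500, step=0.5):
    """Minimize ||y - D B x||^2 by gradient descent -- a simple stand-in
    for the Hopfield-network energy minimization used in the paper."""
    x = np.zeros(n)
    for _ in range(iters):
        r = forward(x, psf, factor) - y              # residual in LR space
        g = np.zeros(n)
        g[::factor] = r                              # adjoint of decimation
        g = np.convolve(g, psf[::-1], mode="same")   # adjoint of blur
        x -= step * g
    return x

psf = np.array([0.25, 0.5, 0.25])      # illustrative symmetric PSF
x_true = np.sin(np.linspace(0, 3, 32))
y = forward(x_true, psf, 2)            # simulated low-resolution observation
x_hat = super_resolve(y, psf, 2, n=32)
```

    Because the model includes both the blur and the decimation, the reconstruction explains the low-resolution data through the physics of acquisition rather than by simple interpolation; the paper's Hopfield network minimizes the same kind of quadratic cost with a noise term added.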

  3. FROM SELECTIVE VULNERABILITY TO CONNECTIVITY: INSIGHTS FROM NEWBORN BRAIN IMAGING

    PubMed Central

    Miller, Steven P.; Ferriero, Donna M

    2009-01-01

    The ability to image the newborn brain during development has provided new information regarding the effects of injury on brain development at different vulnerable time periods. Studies in animal models of brain injury correlate beautifully with what is now observed in the human newborn. We now know that injury at term results in a predilection for gray matter injury while injury in the premature brain results in a white matter predominant pattern although recent evidence suggests a blurring of this distinction. These injuries affect how the brain matures subsequently and again, imaging has led to new insights that allow us to match function and structure. This review will focus on these patterns of injury that are so critically determined by age at insult. In addition, this review will highlight how the brain responds to these insults with changes in connectivity that have profound functional consequences. PMID:19712981

  4. Performance quantification of a millimeter-wavelength imaging system based on inexpensive glow-discharge-detector focal-plane array.

    PubMed

    Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir

    2013-03-01

    Inexpensive millimeter-wavelength (MMW) optical digital imaging raises a challenge of evaluating the imaging performance and image quality because of the large electromagnetic wavelengths and pixel sensor sizes, which are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and also because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performances of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by the noise, and the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, given that noise reduction is achieved. Demonstration of MMW image improvement through oversampling is presented.

  5. Comparison of analytic and iterative digital tomosynthesis reconstructions for thin slab objects

    NASA Astrophysics Data System (ADS)

    Yun, J.; Kim, D. W.; Ha, S.; Kim, H. K.

    2017-11-01

    For digital x-ray tomosynthesis of thin slab objects, we compare the tomographic imaging performances obtained from the filtered backprojection (FBP) and simultaneous algebraic reconstruction (SART) algorithms. The imaging performance includes the in-plane modulation-transfer function (MTF), the signal difference-to-noise ratio (SDNR), and the out-of-plane blur artifact or artifact-spread function (ASF). The MTF is measured using a thin tungsten-wire phantom, and the SDNR and the ASF are measured using a thin aluminum-disc phantom embedded in a plastic cylinder. The FBP shows a better MTF performance than the SART. In contrast, the SART outperforms the FBP with regard to the SDNR and ASF performances. Detailed experimental results and their analysis are described in this paper. For a more proper use of the digital tomosynthesis technique, this study suggests using a reconstruction algorithm suitable for application-specific purposes.

  6. The use of cues to convergence and accommodation in naïve, uninstructed participants.

    PubMed

    Horwood, Anna M; Riddell, Patricia M

    2008-07-01

    A remote haploscopic video refractor was used to assess vergence and accommodation responses in a group of 32 emmetropic, orthophoric, symptom free, young adults naïve to vision experiments in a minimally instructed setting. Picture targets were presented at four positions between 2 m and 33 cm. Blur, disparity and looming cues were presented in combination or separately to assess their contributions to the total near response in a within-subjects design. Response gain for both vergence and accommodation reduced markedly whenever disparity was excluded, with much smaller effects when blur and proximity were excluded. Despite the clinical homogeneity of the participant group there were also some individual differences.

  7. The use of cues to convergence and accommodation in naïve, uninstructed participants

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    A remote haploscopic video refractor was used to assess vergence and accommodation responses in a group of 32 emmetropic, orthophoric, symptom free, young adults naïve to vision experiments in a minimally instructed setting. Picture targets were presented at four positions between 2 m and 33 cm. Blur, disparity and looming cues were presented in combination or separately to assess their contributions to the total near response in a within-subjects design. Response gain for both vergence and accommodation reduced markedly whenever disparity was excluded, with much smaller effects when blur and proximity were excluded. Despite the clinical homogeneity of the participant group there were also some individual differences. PMID:18538815

  8. Optic disk findings in hypervitaminosis A.

    PubMed

    Marcus, D F; Turgeon, P; Aaberg, T M; Wiznia, R A; Wetzig, P C; Bovino, J A

    1985-07-01

    Three cases of papilledema secondary to chronic excessive vitamin A intake are presented, and the optic disk changes are documented with intravenous fluorescein angiography. Two of the three patients reported in this study were symptomatic with blurred vision and systemic complaints. These symptoms disappeared within one week, and the papilledema resolved over several months, after discontinuance of vitamin A. The fluorescein angiographic changes observed in the optic disk of patients with hypervitaminosis A are similar to those associated with other known causes of papilledema. Since vitamin A is a nonprescription drug and its indiscriminate use is potentially great, any history of vitamin ingestion should be elicited during the evaluation of papilledema.

  9. BILATERAL SEROUS MACULAR DETACHMENT IN A PATIENT WITH NEPHROTIC SYNDROME.

    PubMed

    Bilge, Ayse D; Yaylali, Sevil A; Yavuz, Sara; Simsek, İlke B

    2018-01-01

    The purpose of this study was to report a case of a woman with nephrotic syndrome who presented with blurred vision because of bilateral serous macular detachment. Case report and literature review. A 55-year-old woman with a history of essential hypertension, diabetes, and nephrotic syndrome presented with blurred vision in both eyes. Fluorescein angiography revealed dye leakage in the early phases and subretinal pooling in the late phases, and optical coherence tomography scans confirmed the presence of subretinal fluid in the subfoveal area. In nephrotic syndrome cases, especially with accompanying high blood pressure, fluid accumulation in the retinal layers may occur. Serous macular detachment must be kept in mind when treating these patients.

  10. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes obvious and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signals to positive signals mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique dramatically reduces shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
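
    The conversion step can be sketched as follows on a 1-D toy sinogram; the window size is an illustrative choice, and the PWLS smoothing stage that precedes it in the paper is omitted:

```python
import numpy as np

def positivity_restore(sino, win=3):
    """Replace each non-positive raw measurement with the local mean of
    its positive neighbors so the data can be log-transformed.

    A sketch of the second step described in the abstract; `win` is the
    half-width of the (hypothetical) local neighborhood.
    """
    out = sino.astype(float).copy()
    n = len(out)
    for i in np.where(out <= 0)[0]:
        lo, hi = max(0, i - win), min(n, i + win + 1)
        neigh = out[lo:hi]
        pos = neigh[neigh > 0]
        out[i] = pos.mean() if pos.size else 1e-6  # fallback floor
    return out

# toy raw measurements with electronic-noise dropouts at indices 2 and 4
raw = np.array([120.0, 95.0, -3.0, 80.0, 0.0, 110.0, 130.0])
fixed = positivity_restore(raw)
logdata = np.log(fixed)          # now safe to log-transform
```

    Replacing the dropout with the local mean (rather than clipping to a small constant) is what preserves the local mean of the sinogram, which is the property the phantom studies verify.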

  11. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a predetermined step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, which indicates highly accurate calibration when applying the new calibration method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that the textures were well protected. Study results also support the feasibility of applying the proposed method to other imaging modalities.
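
    A minimal sketch of the L0-gradient cost and the shift search. Here the reconstruction is faked by blurring a sharp disc more strongly for worse shifts (standing in for the double-edge artifacts a miscentered FDK reconstruction produces); the threshold, image size, and search grid are illustrative:

```python
import numpy as np

def l0_gradient_cost(img, eps=1e-3):
    """L0 norm of the gradient image: the count of pixels whose gradient
    magnitude is non-negligible. Geometric artifacts spread edges over
    more pixels, so the correct shift minimizes this count."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    mag = np.hypot(gx[:-1, :], gy[:, :-1])
    return int(np.count_nonzero(mag > eps))

def calibrate_shift(reconstruct, shifts):
    """Exhaustive search over candidate transversal shifts; `reconstruct`
    is a user-supplied shift -> reconstructed-image function."""
    costs = [l0_gradient_cost(reconstruct(s)) for s in shifts]
    return shifts[int(np.argmin(costs))]

def fake_recon(shift):
    """Toy stand-in for FDK: the reconstruction of a sharp disc gets
    blurrier (edges spread wider) as |shift| grows."""
    y, x = np.mgrid[:64, :64]
    disc = ((x - 32) ** 2 + (y - 32) ** 2 < 100).astype(float)
    k = 1 + 2 * abs(int(shift))
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, disc)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

best = calibrate_shift(fake_recon, shifts=[-2, -1, 0, 1, 2])
```

    In the actual method the inner call is a GPU FDK reconstruction, which is why the acceleration techniques matter: every candidate shift in the iterative search costs one full reconstruction.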

  12. Development and Characterization of a Dither-Based Super-Resolution Reconstruction Method for Fiber Imaging Arrays

    NASA Astrophysics Data System (ADS)

    Languirand, Eric Robert

    Chemical imaging is an important tool for providing insight into the function, role, and spatial distribution of analytes. This thesis describes the use of imaging fiber bundles (IFB) for super-resolution reconstruction using surface enhanced Raman scattering (SERS), showing improvement in resolution with arrayed bundles for the first time. Additionally, this thesis describes characteristics of the IFB with regard to cross-talk as a function of aperture size. The first part of this thesis characterizes the IFB for both tapered and untapered bundles in terms of cross-talk. Cross-talk is defined as the amount of light leaking from a central fiber element in the imaging fiber bundle to surrounding fiber elements. To make this measurement ubiquitous for all imaging bundles, quantum dots were employed. Untapered and tapered IFBs possess cross-talk of 2% or less, with fiber elements down to 32 nm. The second part of this thesis employs a super-resolution reconstruction algorithm using projection onto convex sets for resolution improvement. When using IFB arrays, the point spread function (PSF) of the array can be known accurately if the fiber elements overfill the pixel detector array. Therefore, the use of the known PSF compared to a general blurring kernel was evaluated. Relative increases in resolution of 12% and 2% at the 95% confidence level are found, when compared to a reference image, for the general blurring kernel and the PSF, respectively. The third part of this thesis shows for the first time the use of SERS with a dithered IFB array coupled with super-resolution reconstruction. The resolution improvement across a step-edge is shown to be approximately 20% when compared to a reference image. This provides an additional means of increasing the resolution of fiber bundles beyond that of just tapering. Furthermore, this provides a new avenue for nanoscale imaging using these bundles. Lastly, synthetic data with varying degrees of signal-to-noise ratio (S/N) were employed to explore the relationship S/N has with the reconstruction process. It is generally shown that increasing the number of images used in the reconstruction process and increasing the S/N will improve the reconstruction, providing larger increases in resolution.

  13. Evaluation of dynamic row-action maximum likelihood algorithm reconstruction for quantitative 15O brain PET.

    PubMed

    Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi

    2009-09-01

    A modified version of the row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, or dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative (15)O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included 3 sequential PET scans for C(15)O, (15)O(2) and H(2)(15)O. First, the number of main iterations (N(it)) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N(it) was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO(2)) and oxygen extraction fraction (OEF), were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolutions were matched with FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N(it) = 3 was determined as an optimal parameter for (15)O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in low-activity regions, compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO(2) and OEF were 46.1 +/- 4.5 (mL/100 mL/min), 3.35 +/- 0.40 (mL/100 mL), 3.42 +/- 0.35 (mL/100 mL/min) and 42.1 +/- 3.8 (%), respectively. These values were comparable to the corresponding values with FBP images: 46.6 +/- 4.6 (mL/100 mL/min), 3.34 +/- 0.39 (mL/100 mL), 3.48 +/- 0.34 (mL/100 mL/min) and 42.4 +/- 3.8 (%), respectively. DRAMA reconstruction is applicable to quantitative (15)O PET studies and is superior to conventional FBP in terms of image quality.

  14. 3D GRASE PROPELLER: improved image acquisition technique for arterial spin labeling perfusion imaging.

    PubMed

    Tan, Huan; Hoge, W Scott; Hamilton, Craig A; Günther, Matthias; Kraft, Robert A

    2011-07-01

    Arterial spin labeling is a noninvasive technique that can quantitatively measure cerebral blood flow. While traditionally arterial spin labeling employs 2D echo planar imaging or spiral acquisition trajectories, single-shot 3D gradient echo and spin echo (GRASE) is gaining popularity in arterial spin labeling due to inherent signal-to-noise ratio advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T(2) decay. A novel technique combining 3D GRASE and a periodically rotated overlapping parallel lines with enhanced reconstruction trajectory (PROPELLER) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full brain perfusion images were acquired at a 3 × 3 × 5 mm(3) nominal voxel size with pulsed arterial spin labeling preparation sequence. Data from five healthy subjects was acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement in cerebral blood flow quantification with 3D gradient echo and spin echo, 3D GRASE PROPELLER demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. Copyright © 2011 Wiley-Liss, Inc.

  15. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality. An effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting. In addition, few effective hardware platforms have been proposed to implement the corresponding NUC algorithms. Thus, this paper proposes an improved neural-network based NUC algorithm using the guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to achieve an accurate desired image and decrease artificial ghosting. Then a projection-based motion detection algorithm is utilized to determine whether the correction coefficients should be updated or not. In this way the problem of image blurring can be overcome. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate the fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design takes fewer logic elements in the FPGA and spends fewer clock cycles to process one frame of image.
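
    A minimal sketch of a motion-gated, LMS-style NUC update. Normalized LMS is used here for stability; the paper's guided-filter estimate of the desired image and its projection-based motion detection are replaced by the true scene and a crude frame-difference test in this toy demo, so everything below is an illustration rather than the published algorithm:

```python
import numpy as np

def nuc_correct(frame, gain, offset):
    """Per-pixel correction: corrected = gain * raw + offset."""
    return gain * frame + offset

def nuc_update(gain, offset, frame, desired, prev, lr=0.5, motion_thresh=5.0):
    """One normalized-LMS update of the correction coefficients, applied
    only when the scene is moving -- freezing on static scenes is what
    prevents ghosting in scene-based NUC."""
    if prev is not None and np.abs(frame - prev).mean() < motion_thresh:
        return gain, offset                      # static scene: freeze
    err = desired - nuc_correct(frame, gain, offset)
    norm = 1.0 + frame ** 2                      # NLMS normalization
    gain = gain + lr * err * frame / norm
    offset = offset + lr * err / norm
    return gain, offset

# demo: learn to undo simulated per-pixel fixed-pattern non-uniformity
rng = np.random.default_rng(1)
a = 1.0 + 0.1 * rng.standard_normal((8, 8))      # detector gain FPN
b = 2.0 * rng.standard_normal((8, 8))            # detector offset FPN
gain = np.ones((8, 8)); offset = np.zeros((8, 8)); prev = None
for t in range(1000):
    scene = 10.0 * np.roll(np.tile(np.arange(8.0), (8, 1)), t, axis=1)
    frame = a * scene + b                        # raw frame with FPN
    gain, offset = nuc_update(gain, offset, frame, scene, prev)
    prev = frame
raw_err = np.abs(frame - scene).mean()
cor_err = np.abs(nuc_correct(frame, gain, offset) - scene).mean()
```

    The rolling scene keeps the motion gate open and provides the varying per-pixel inputs the coefficient estimation needs; on a static sequence the gate would freeze the coefficients, which is the anti-ghosting mechanism the abstract describes.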

  16. Is HE 0436-4717 Anemic? A deep look at a bare Seyfert 1 galaxy

    NASA Astrophysics Data System (ADS)

    Bonson, K.; Gallo, L. C.; Vasudevan, R.

    2015-06-01

    A multi-epoch, multi-instrument analysis of the Seyfert 1 galaxy HE 0436-4717 is conducted using optical to X-ray data from XMM-Newton and Swift (including the Burst Alert Telescope). Fitting of the UV-to-X-ray spectral energy distribution shows little evidence of extinction and the X-ray spectral analysis does not confirm previous reports of deep absorption edges from O VIII. HE 0436-4717 is a `bare' Seyfert with negligible line-of-sight absorption making it ideal to study the central X-ray emitting region. Three scenarios were considered to describe the X-ray data: partial covering absorption, blurred reflection, and soft Comptonization. All three interpretations describe the 0.5-10.0 keV spectra well. Extrapolating the models to 100 keV results in poorer fits for the partial covering model. When also considering the rapid variability during one of the XMM-Newton observations, the blurred reflection model appears to describe all the observations in the most self-consistent manner. If adopted, the blurred reflection model requires a very low iron abundance in HE 0436-4717. We consider the possibilities that this is an artefact of the fitting process, but it appears possible that it is intrinsic to the object.

  17. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required for maintaining a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We present an early work specifically on deblurring sequential MSI images, which is distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. It is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in the MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
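
    The mutual-information similarity term can be sketched from a joint intensity histogram. Mutual information is a natural choice across wavelengths because it rewards any consistent intensity mapping between bands, not just equal intensities; the bin count and test images below are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A;B) estimated from the joint intensity
    histogram of two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = np.outer(np.linspace(0, 1, 64), np.ones(64)) \
    + 0.05 * rng.standard_normal((64, 64))
related = 0.7 * img + 0.1                        # remapped "neighboring band"
unrelated = rng.standard_normal((64, 64))        # no shared structure
```

    The intensity-remapped image scores high mutual information with the original even though their gray levels differ, which is exactly why the measure suits temporally neighboring MSI frames captured at different wavelengths.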

  18. A Probability-Based Algorithm Using Image Sensors to Track the LED in a Vehicle Visible Light Communication System.

    PubMed

    Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik

    2017-02-10

    This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LED as well as to extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to a LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted by considering the incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.

  19. A stochastic approach to quantifying the blur with uncertainty estimation for high-energy X-ray imaging systems

    DOE PAGES

    Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...

    2015-06-03

    One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
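
    A minimal sketch of the sampling step: random-walk Metropolis on a 1-D toy parameter with a normal likelihood and a non-negativity constraint, reporting the sample mean and variance as the estimate and its uncertainty. The paper's actual posterior is over a full 2-D spot profile with smoothness priors, so the quantities below are stand-ins:

```python
import numpy as np

def metropolis(logpost, x0, steps=5000, prop=0.1, seed=0):
    """Random-walk Metropolis sampler. The posterior mean and variance of
    the retained samples serve as the reconstruction and its uncertainty,
    mirroring the MCMC usage described in the abstract."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        cand = x + prop * rng.standard_normal()
        lp_c = logpost(cand)
        if np.log(rng.random()) < lp_c - lp:     # accept/reject
            x, lp = cand, lp_c
        samples.append(x)
    s = np.array(samples[steps // 2:])           # discard burn-in half
    return s.mean(), s.var()

# toy 1-D 'spot width' inference: Gaussian likelihood, non-negativity prior
truth = 2.0
data = truth + 0.2 * np.random.default_rng(1).standard_normal(50)

def logpost(w):
    if w < 0:
        return -np.inf                           # non-negativity constraint
    return -0.5 * np.sum((data - w) ** 2) / 0.2 ** 2

mean, var = metropolis(logpost, x0=1.0)
```

    Because the estimate is a sample average rather than the endpoint of an optimization path, it does not depend on the initial guess the way variational reconstructions do, which is the robustness claim of the abstract.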

  20. How face blurring affects body language processing of static gestures in women and men.

    PubMed

    Proverbio, A M; Ornaghi, L; Gabaro, V

    2018-05-14

    The role of facial coding in body language comprehension was investigated by ERP recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic), which could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men to congruent stimuli, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration did not affect accuracy in women, as reflected by omission percentages, nor did it reduce their cognitive potentials, suggesting a better comprehension of face-deprived pantomimes. The N170 response (modulated by congruity and face presence) peaked later in men than in women. The late positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information or better at understanding body language in face-deprived social information.
