Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and a huge storage requirement. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
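A minimal sketch (not the authors' code) of how such a factored system matrix is applied: the forward projector chains the three sparse factors, and the back projector chains their transposes. The shapes, densities, and function names below are toy assumptions.

```python
# Sketch: forward/back projection with a factored system matrix
# P ~= S @ G @ B, where S is a sparse sinogram blurring matrix, G a sparse
# geometric (line-integral) projector, and B a sparse image blurring matrix.
import numpy as np
import scipy.sparse as sp

def forward_project(S, G, B, image_vec):
    """Apply the factored system matrix: data = S @ (G @ (B @ image))."""
    return S @ (G @ (B @ image_vec))

def back_project(S, G, B, data_vec):
    """Adjoint of the factored forward projector."""
    return B.T @ (G.T @ (S.T @ data_vec))

# Toy example with random sparse factors (shapes are illustrative only).
rng = np.random.default_rng(0)
n_pix, n_lor = 64 * 64, 9000
B = sp.random(n_pix, n_pix, density=0.001, random_state=0, format="csr")
G = sp.random(n_lor, n_pix, density=0.001, random_state=1, format="csr")
S = sp.random(n_lor, n_lor, density=0.001, random_state=2, format="csr")
x = rng.random(n_pix)
y = forward_project(S, G, B, x)   # sinogram-space data
bp = back_project(S, G, B, y)     # image-space backprojection
```

Because the three factors are each sparse, this chained application costs far less than multiplying by a dense accurate system matrix.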
Efficient system modeling for a small animal PET scanner with tapered DOI detectors.
Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi
2016-01-21
A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth-of-interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix mainly consists of two components: a sinogram blurring matrix and a geometric matrix. The geometric matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both simulation studies and real data experiments are performed in the fully 3D mode to study image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from the fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed using a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts in sparsely acquired CBCT were decreased by our method, and interpolation-induced image blur was kept below that of the other interpolation methods.
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron emission tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions, and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum-likelihood expectation maximization (OSEM), and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR), and edge preservation capability. Further analysis of the achieved improvement is also carried out specifically for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration, and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
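The abstract does not give the filter's exact construction, so the following is a hedged sketch of one plausible mean-median combination: mean smoothing in flat regions, median filtering where a simple local edge indicator fires. All parameter choices are assumptions.

```python
# Hedged sketch of a pre-reconstruction "mean-median" sinogram filter.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_filter(sinogram, size=3, edge_thresh=None):
    mean = uniform_filter(sinogram, size=size)
    med = median_filter(sinogram, size=size)
    # Local standard deviation as a simple edge indicator.
    sq_mean = uniform_filter(sinogram ** 2, size=size)
    local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    if edge_thresh is None:
        edge_thresh = local_std.mean()
    # Median near edges (preserves them), mean in flat regions (smooths).
    return np.where(local_std > edge_thresh, med, mean)

# A 3D sinogram stack (views x bins x slices) works unchanged because
# scipy.ndimage filters operate in N dimensions.
sino = np.random.poisson(50.0, size=(180, 128, 16)).astype(float)
filtered = mean_median_filter(sino, size=3)
```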
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
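A compact sketch of the iterative PWLS strategy in a 1D toy setting: the quadratic objective (y - Ax)^T W (y - Ax) + beta x^T R x is minimized by conjugate gradients on its normal equations. The blur model, weights, and penalty below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative PWLS sinogram restoration solved by conjugate gradients.
# A models detector blur, W holds inverse-variance (statistical) weights,
# R is a quadratic roughness penalty.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 256  # number of sinogram samples in this 1D toy
A = sp.diags([0.25, 0.5, 0.25], [-1, 0, 1], shape=(n, n), format="csr")
W = sp.diags(np.random.uniform(0.5, 2.0, n))
D = sp.diags([1.0, -1.0], [0, 1], shape=(n - 1, n))
R = (D.T @ D).tocsr()
beta = 0.1

x_true = np.sin(np.linspace(0, 3 * np.pi, n)) + 2.0
y = A @ x_true + 0.05 * np.random.randn(n)

H = A.T @ W @ A + beta * R   # PWLS Hessian: sparse, symmetric positive definite
b = A.T @ (W @ y)
x_hat, info = cg(H, b)       # info == 0 indicates convergence
```

The sparsity of H is what makes the iterative route attractive here: each CG step is a cheap sparse matrix-vector product, whereas the closed-form solution requires inverting H.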
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image with maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
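A sketch of the MLEM-style deconvolution idea, written here in the equivalent Richardson-Lucy form; the PSF construction from tracked motion is reduced to a toy shift-averaging kernel, which is an assumption for illustration.

```python
# MLEM-style deconvolution of a known motion blur (Richardson-Lucy form).
import numpy as np
from scipy.ndimage import convolve

def mlem_deblur(blurred, psf, n_iter=50, eps=1e-8):
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]          # adjoint kernel
    x = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        est = convolve(x, psf, mode="reflect")
        ratio = blurred / np.maximum(est, eps)
        x *= convolve(ratio, psf_flipped, mode="reflect")  # multiplicative update
    return x

# Example: a PSF averaging 5 horizontal shifts mimics frame-averaged motion.
psf = np.zeros((1, 5)); psf[0, :] = 1.0 / 5.0
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
blurred = convolve(img, psf, mode="reflect")
restored = mlem_deblur(blurred, psf, n_iter=100)
```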
Miksys, N; Xu, C; Beaulieu, L; Thomson, R M
2015-08-07
This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide mitigation of artifacts comparable to that of the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than for prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies of various permanent implant brachytherapy treatments.
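Hedged sketches of two of the image-based MAR strategies named above; the thresholds and replacement values are assumptions for illustration, not the paper's settings.

```python
# Simple threshold replacement (STR) clips voxels above a CT-number
# threshold; a 3D median filter suppresses seed streaks at the cost of blur.
import numpy as np
from scipy.ndimage import median_filter

def str_mar(ct_volume, hu_thresh=2000.0, replace_hu=300.0):
    """Replace bright seed voxels by a soft-tissue-like CT number."""
    out = ct_volume.copy()
    out[out > hu_thresh] = replace_hu
    return out

def median_mar(ct_volume, size=3):
    """3D median filtering: suppresses streaks but blurs heterogeneities."""
    return median_filter(ct_volume, size=size)

vol = np.random.normal(40.0, 20.0, size=(32, 64, 64))
vol[16, 30:34, 30:34] = 3000.0   # bright metal-seed artifact
print(str_mar(vol).max(), median_mar(vol).max())
```

Note that, as the abstract observes, neither image-based fix addresses low CT-number streaks (STR) or preserves fine heterogeneities (median filter).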
Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob
2017-03-01
The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of a region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and the CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR, respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5, respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and respiratory moving regions compared to the 4D FDK and MKB methods.
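A minimal sketch of the final voxelwise blending step as described, assuming a motion-magnitude map (e.g., from the deformable registration) and a hand-picked normalization constant; both are illustrative assumptions.

```python
# MWR blending: weights derived from local motion magnitude combine the
# sharp 3D FDK volume (static regions) with the interpolated 4D CBCT phase
# volume (moving regions).
import numpy as np

def mwr_combine(fdk_3d, dsi_4d_phase, motion_mag, motion_scale=5.0):
    w = np.clip(motion_mag / motion_scale, 0.0, 1.0)  # 0 = static, 1 = moving
    return (1.0 - w) * fdk_3d + w * dsi_4d_phase

fdk = np.random.rand(64, 64, 64)
phase = np.random.rand(64, 64, 64)
motion = np.abs(np.random.randn(64, 64, 64))  # e.g. DVF magnitude in mm
final = mwr_combine(fdk, phase, motion)
```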
Penalized weighted least-squares approach for low-dose x-ray computed tomography
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
The noise of low-dose computed tomography (CT) sinogram data follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses a Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses an iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in the reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
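A small sketch of the KL de-correlation step, under the assumption that K neighboring views are stacked and rotated into the eigenbasis of their empirical covariance; per-component PWLS smoothing would slot in where indicated.

```python
# KL (principal-component) de-correlation across neighboring sinogram views.
import numpy as np

def kl_transform(views):
    """views: (K, n_bins) array of K neighboring sinogram views."""
    centered = views - views.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / views.shape[1]   # K x K covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    components = eigvecs.T @ views                 # KL components, one per row
    return components, eigvecs

def kl_inverse(components, eigvecs):
    return eigvecs @ components                    # exact, eigvecs orthogonal

K, n_bins = 3, 512
views = np.random.poisson(100.0, size=(K, n_bins)).astype(float)
comps, basis = kl_transform(views)
# ... PWLS smoothing of each row of `comps` would go here ...
restored = kl_inverse(comps, basis)
```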
Sinogram restoration in computed tomography with an edge-preserving penalty
Little, Kevin J.; La Rivière, Patrick J.
2015-01-01
Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration. However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
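A minimal sketch of the described diagonal preconditioner, assuming per-parameter sensitivities of the modeled radioactivity are available; the parameter names, numbers, and the toy gradient step are illustrative assumptions.

```python
# Diagonal preconditioner: entries are the ratio of each kinetic parameter
# to the sensitivity of the modeled radioactivity w.r.t. that parameter.
import numpy as np

def diagonal_preconditioner(theta, sensitivity, eps=1e-12):
    """theta: current parameter estimates; sensitivity: d(activity)/d(theta)
    aggregated over the data (both 1D arrays of the same length)."""
    return theta / np.maximum(sensitivity, eps)

theta = np.array([0.1, 0.05, 0.3])   # e.g. K1, k2, Vb for one voxel (assumed)
sens = np.array([50.0, 8.0, 120.0])  # assumed sensitivity values
M = diagonal_preconditioner(theta, sens)
grad = np.array([-2.0, 0.5, -1.0])   # gradient of the cost function
precond_step = M * grad              # preconditioned search direction
```

Scaling each coordinate this way equalizes the very different magnitudes of kinetic parameters, which is consistent with the reported speedup of PCG over plain CG.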
WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; La Riviere, P
2014-06-15
Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated at several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as applying the penalty only to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects in helical cone-beam CT is feasible and should be applicable to clinical data. When applied with the edge-preserving Huber penalty, sinogram restoration leads to an improvement in resolution-variance tradeoffs.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat-detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
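An illustrative structure-tensor orientation estimate of the kind such shape-driven interpolation relies on; the smoothing scale and the use of Sobel gradients are assumptions for illustration.

```python
# Structure-tensor orientation: smoothed outer products of image gradients
# give a per-pixel direction along which missing views can be interpolated.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(sino, sigma=2.0):
    gx = sobel(sino, axis=1)   # gradient across detector bins
    gy = sobel(sino, axis=0)   # gradient across view angle
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    ang = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)  # dominant gradient direction
    return ang + np.pi / 2.0                      # structure (edge) direction

sino = np.random.rand(180, 256)
theta = local_orientation(sino)   # per-pixel interpolation direction
```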
NASA Astrophysics Data System (ADS)
Chen, Zhe; Parker, B. J.; Feng, D. D.; Fulton, R.
2004-10-01
In this paper, we compare various temporal analysis schemes applied to dynamic PET for improved quantification, image quality and temporal compression purposes. We compare an optimal sampling schedule (OSS) design, principal component analysis (PCA) applied in the image domain, and principal component analysis applied in the sinogram domain; for region-of-interest quantification, sinogram-domain PCA is combined with the Huesman algorithm to quantify from the sinograms directly without requiring reconstruction of all PCA channels. Using a simulated phantom FDG brain study and three clinical studies, we evaluate the fidelity of the compressed data for estimation of the local cerebral metabolic rate of glucose by a four-compartment model. Our results show that using a noise-normalized PCA in the sinogram domain gives a compression ratio and quantitative accuracy similar to OSS, but with substantially better precision. These results indicate that sinogram-domain PCA for dynamic PET can be a useful preprocessing stage for PET compression and quantification applications.
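A minimal sketch of sinogram-domain PCA for temporal compression, assuming each time frame's sinogram is flattened into a row; the noise normalization mentioned above is omitted for brevity.

```python
# Sinogram-domain PCA: a few principal components summarize the dynamic
# sequence, so only those component sinograms need reconstruction.
import numpy as np

def sinogram_pca(dyn_sino, n_components=4):
    """dyn_sino: (n_frames, n_bins) flattened dynamic sinograms."""
    mean = dyn_sino.mean(axis=0)
    centered = dyn_sino - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]  # temporal weights
    comps = vt[:n_components]                        # component sinograms
    return scores, comps, mean

frames = np.random.poisson(30.0, size=(22, 180 * 128)).astype(float)
scores, comps, mean = sinogram_pca(frames, n_components=4)
approx = scores @ comps + mean   # low-rank approximation of all frames
```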
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
Noise reduction for low-dose helical CT by 3D penalized weighted least-squares sinogram smoothing
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Helical computed tomography (HCT) has several advantages over conventional step-and-shoot CT for imaging a relatively large object, especially for dynamic studies. However, HCT may significantly increase the X-ray exposure to the patient. This work aims to reduce the radiation by lowering the X-ray tube current (mA) and filtering the noise in the low-mA (or dose) sinogram. Based on the noise properties of the HCT sinogram, a three-dimensional (3D) penalized weighted least-squares (PWLS) objective function was constructed and an optimal sinogram was estimated by minimizing the objective function. To account for the difference in signal correlation among the different directions of the HCT sinogram, an anisotropic Markov random field (MRF) Gibbs function was designed as the penalty. The minimization of the objective function was performed by an iterative Gauss-Seidel updating strategy. The effectiveness of the 3D-PWLS sinogram smoothing for low-dose HCT was demonstrated by a 3D Shepp-Logan head phantom study. Comparison studies with our previously developed KL-domain PWLS sinogram smoothing algorithm indicate that the KL+2D-PWLS algorithm performs better on the in-plane noise-resolution trade-off, while the 3D-PWLS algorithm performs better on the z-axis noise-resolution trade-off. Receiver operating characteristic (ROC) studies using a channelized Hotelling observer (CHO) show that the 3D-PWLS and KL+2D-PWLS algorithms have similar detectability performance in a low-contrast environment.
NASA Astrophysics Data System (ADS)
Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.
2017-11-01
This paper proposes a method for de-blurring of images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the point spread function (PSF) during the camera exposure window. The deconvolution process, involving iterative matrix calculations on pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it achieves the best image restoration results. The proposed method has been evaluated using a Hopkinson bar loading system. In comparison to the blurry image, the proposed method successfully restores the image. It is also demonstrated in image processing applications that the de-blurring method can improve the accuracy and stability of digital image correlation measurements.
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to the intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
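For context, a minimal sketch of the Bethe-equation SPR calculation that such DECT methods feed with electron-density and I-value estimates; the beam energy and tissue values below are illustrative assumptions, and shell and density corrections are omitted.

```python
# SPR from the (uncorrected) Bethe equation: the stopping power ratio of a
# tissue relative to water, given its electron density ratio and I-value.
import numpy as np

def spr_bethe(rho_e_ratio, I_tissue_eV, I_water_eV=75.0, kinetic_MeV=200.0):
    m_e_c2_eV = 0.511e6          # electron rest energy in eV
    proton_mc2 = 938.272         # proton rest energy in MeV
    gamma = 1.0 + kinetic_MeV / proton_mc2
    beta2 = 1.0 - 1.0 / gamma ** 2
    arg = 2.0 * m_e_c2_eV * beta2 / (1.0 - beta2)   # 2 m_e c^2 beta^2 gamma^2
    num = np.log(arg / I_tissue_eV) - beta2
    den = np.log(arg / I_water_eV) - beta2
    return rho_e_ratio * num / den

print(spr_bethe(rho_e_ratio=1.05, I_tissue_eV=70.0))  # soft-tissue-like value
```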
Distinguishing dose, focus, and blur for lithography characterization and control
NASA Astrophysics Data System (ADS)
Ausschnitt, Christopher P.; Brunner, Timothy A.
2007-03-01
We derive a physical model to describe the dependence of pattern dimensions on dose, defocus and blur. The coefficients of our model are constants of a given lithographic process. Model inversion applied to dimensional measurements then determines effective dose, defocus and blur for wafers patterned with the same process. In practice, our approach entails the measurement of proximate grating targets of differing dose and focus sensitivity. In our embodiment, the measured attribute of one target is exclusively sensitive to dose, whereas the measured attributes of a second target are distinctly sensitive to defocus and blur. On step-and-scan exposure tools, z-blur is varied in a controlled manner by adjusting the across-slit tilt of the image plane. The effects of z-blur and x,y-blur are shown to be equivalent. Furthermore, the exposure slit width is shown to determine the tilt response of the grating attributes. Thus, the response of the measured attributes can be characterized by a conventional focus-exposure matrix (FEM), over which the exposure tool settings are intentionally changed. The model coefficients are determined by a fit to the measured FEM response. The model then fully defines the response for wafers processed under "fixed" dose, focus and blur conditions. Model inversion applied to measurements from the same targets on all such wafers enables the simultaneous determination of effective dose and focus/tilt (DaFT) at each measurement site.
NASA Astrophysics Data System (ADS)
Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.
2013-08-01
Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32 bit packets, where averaging of lines-of-response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) position technique that addresses axial and transaxial LOR grouping in 32 bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32 bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, two major problems must be addressed: analysis of the noise properties of the data and development of a corresponding efficient noise treatment method. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles with a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfy the Radon transform. From the analysis of these experimental data, a nonlinear relation between the mean and variance for each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for SNR (signal-to-noise ratio) adaptive smoothing of the noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and the noise-resolution tradeoff, in comparison with other noise smoothing methods such as the Hanning filter and the Butterworth filter at different cutoff frequencies. Significant improvements in the noise-resolution tradeoff and noise properties were demonstrated.
Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.
Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying
2016-03-21
Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thiyagarajan, Rajesh; Karrthick, KP; Kataria, Tejinder
Purpose: Performing delivery quality assurance (DQA) for bilateral (B-L) breast tomotherapy is a challenging task due to the limitations of commercially available detector arrays and film. The aim of this study is to perform DQA for a B-L breast tomotherapy plan using the MLC fluence sinogram. Methods: A treatment plan was generated on the Tomotherapy system for a B-L breast tumour. The B-L breast targets were prescribed 50.4 Gy over 28 fractions. The plan was generated with a 6 MV photon beam and the pitch was set to 0.3. The total width of the target is 39 cm (left and right) and the length is 20 cm. The DQA plan was delivered without any phantom onto the megavoltage computed tomography (MVCT) detector system. The pulses recorded by the MVCT system were exported to the delivery analysis software (Tomotherapy Inc.) for reconstruction. The detector signals were reconstructed into a sinogram and converted to an MLC fluence sinogram, which was compared with the planned fluence sinogram. A point dose was also measured with a cheese phantom and ionization chamber to verify the absolute dose component. Results: The planned fluence sinogram and the reconstructed MLC fluence sinogram were compared using a gamma metric, with MLC positional difference and beamlet intensity as the evaluation parameters. A 3 mm positional difference and a 3% beamlet intensity difference were set as the gamma criteria. A total of 26784 non-zero beamlets were included in the analysis, of which 161 beamlets had gamma greater than 1, giving a gamma passing rate of 99.4%. Point dose measurements were within 1.3% of the calculated dose. Conclusion: MLC fluence sinogram based delivery quality assurance was performed for bilateral breast irradiation. This would be a suitable alternative for large-volume targets such as bilateral breast and total body irradiation. However, the conventional method of DQA should be used to validate this method periodically.
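As a rough illustration of the sinogram comparison step, the sketch below computes a simplified per-beamlet gamma with 3%/3 mm-style criteria. The actual Tomotherapy analysis software is proprietary, so the global normalization, the one-leaf distance tolerance, and all names here are assumptions.

```python
# Simplified per-beamlet gamma: rows are projections, columns are MLC
# leaves (assumed ~3 mm pitch, so one leaf shift ~ 3 mm distance-to-agreement).
import numpy as np

def beamlet_gamma(planned, measured, dose_tol=0.03, dist_tol_leaves=1):
    gamma = np.full(planned.shape, np.inf)
    for shift in range(-dist_tol_leaves, dist_tol_leaves + 1):
        shifted = np.roll(measured, shift, axis=1)  # np.roll wraps at edges
        dose_term = (shifted - planned) / (dose_tol * planned.max())
        dist_term = shift / dist_tol_leaves
        gamma = np.minimum(gamma, np.sqrt(dose_term ** 2 + dist_term ** 2))
    return gamma

planned = np.random.rand(100, 64)
measured = planned + 0.01 * np.random.randn(100, 64)
g = beamlet_gamma(planned, measured)
passing_rate = np.mean(g[planned > 0] <= 1.0) * 100.0
```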
Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters
NASA Astrophysics Data System (ADS)
Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong
2005-04-01
Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and sample variance. Spatially-invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle this data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to other approaches that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlation information for an optimal regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least-squares) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimum for an optimal regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS treated sinogram data prior to the backprojection operation (for image reconstruction). In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.
Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.
Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian
2017-01-05
Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed X-ray computed tomography images, which degrade the image quality and affect the diagnosis of disease. It is therefore essential to reduce these artifacts to meet clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information originating from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm, on both simulated and clinical datasets. Subjective evaluation shows that the proposed algorithm causes fewer secondary artifacts than the two conventional algorithms, which suffer from severe secondary artifacts resulting from impertinent interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method produces the images with the best quality. For both the simulated and the clinical datasets, the proposed algorithm clearly reduces the metal artifacts.
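For reference, a sketch of the linear-interpolation MAR baseline that the proposed algorithm is compared against: each view's metal-trace bins are filled by 1D interpolation from uncorrupted neighbors. The mask layout is a toy assumption.

```python
# Linear-interpolation MAR: fill the metal trace view by view.
import numpy as np

def li_mar(sinogram, metal_mask):
    """Interpolate across metal-corrupted bins in each projection view."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out

sino = np.random.rand(180, 256)
mask = np.zeros_like(sino, dtype=bool); mask[:, 120:136] = True
completed = li_mar(sino, mask)
```

This baseline ignores structure crossing the trace, which is precisely what prior-image-guided inpainting schemes such as the one above aim to recover.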
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as the Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion, subject to a known class of source image intensity distributions.
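The closed-form filter itself is not reproduced in the abstract; the following generic sketch shows the Wiener-gain structure it describes, with the signal power spectrum passed in by the caller rather than derived from a prior source-image model as in the cited work:

```python
import numpy as np

def wiener_sinogram(sino, signal_power, noise_power):
    """Row-wise Wiener filtering of a sinogram: gain H(f) = S(f)/(S(f)+N(f)).
    `signal_power` and `noise_power` are spectra of length sino.shape[1]//2+1.
    A generic sketch, not the paper's derived filter."""
    F = np.fft.rfft(sino, axis=1)
    H = signal_power / (signal_power + noise_power)
    return np.fft.irfft(F * H, n=sino.shape[1], axis=1)
```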
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. An additional correction in the metal trace region is then performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
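A minimal sketch of the core completion step described above, fitting the prior-sinogram basis on the uncorrupted bins by least squares; the residual-error compensation step is omitted and all names are illustrative:

```python
import numpy as np

def complete_with_priors(sino, trace, prior_sinos):
    """Replace metal-trace bins with a least-squares combination of prior
    sinograms (forward projections of the prior images), fit on the
    uncorrupted bins only. `trace` is a boolean mask of corrupted bins."""
    A = np.stack([p[~trace] for p in prior_sinos], axis=1)  # basis on known bins
    c, *_ = np.linalg.lstsq(A, sino[~trace], rcond=None)
    combo = sum(ci * p for ci, p in zip(c, prior_sinos))
    out = sino.copy()
    out[trace] = combo[trace]
    return out
```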
Tsai, Yu-Hsiang; Huang, Mao-Hsiu; Jeng, Wei-de; Huang, Ting-Wei; Lo, Kuo-Lung; Ou-Yang, Mang
2015-10-01
Transparent displays are one of the main technologies for next-generation displays, especially for augmented reality applications. An aperture structure is attached to each display pixel to partition it into transparent and black regions. However, diffraction blur caused by the aperture structure typically degrades the transparent image when light from a background object passes through the finite aperture window. In this paper, the diffraction effect of an active-matrix organic light-emitting diode (AMOLED) display is studied. Several aperture structures have been proposed and implemented. Based on theoretical analysis and simulation, an appropriate aperture structure can effectively reduce the blur. The analysis data are also consistent with the experimental results. Compared with the various transparent aperture structures on the AMOLED, the diffraction width (the zero-energy position of the diffraction pattern) of the optimized aperture structure is reduced by 63% and 31% in the x and y directions in CASE 3. Combined with a lenticular lens on the aperture structure, the improvement reaches 77% and 54% of the diffraction width in the x and y directions. Modulation transfer functions and practical images are provided to evaluate the improvement in image blur.
Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki
2017-07-01
To retrospectively evaluate the image quality and radiation dose of 100-kVp scans with sinogram-affirmed iterative reconstruction (IR) for unenhanced head CT in adolescents. Sixty-nine patients aged 12-17 years underwent head CT under 120-kVp (n = 34) or 100-kVp (n = 35) protocols. The 120-kVp images were reconstructed with filtered back-projection (FBP), and the 100-kVp images with FBP (100-kVp-F) and sinogram-affirmed IR (100-kVp-S). We compared the effective dose (ED), grey-white matter (GM-WM) contrast, image noise, and contrast-to-noise ratio (CNR) between protocols in the supratentorial region (ST) and posterior fossa (PS). We also assessed GM-WM contrast, image noise, sharpness, artifacts, and overall image quality on a four-point scale. ED was 46% lower with 100-kVp than with 120-kVp (p < 0.001). GM-WM contrast was higher, and image noise lower, on 100-kVp-S than on 120-kVp at ST (p < 0.001). The CNR of 100-kVp-S was higher than that of 120-kVp (p < 0.001). GM-WM contrast of 100-kVp-S was subjectively rated as better than that of 120-kVp (p < 0.001). There were no significant differences in the other criteria between 100-kVp-S and 120-kVp (p = 0.072-0.966). The 100-kVp protocol with sinogram-affirmed IR facilitated a dramatic radiation dose reduction and better GM-WM contrast without increasing image noise in adolescent head CT. • 100-kVp head CT provides a 46% radiation dose reduction compared with 120-kVp. • 100-kVp scanning improves subjective and objective GM-WM contrast. • Sinogram-affirmed IR decreases head CT image noise, especially in the supratentorial region. • 100-kVp protocol with sinogram-affirmed IR is suited for adolescent head CT.
Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua
2014-06-16
To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed that takes into account the statistical property of sinogram data; then, to further obtain a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
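For orientation, a bare-bones TV-POCS loop in the spirit of the second step above; the ASR stage and the paper's exact modification are not reproduced, and `A` is assumed to be a scipy.sparse system matrix whose column count matches the image size:

```python
import numpy as np

def grad_tv(img, eps=1e-8):
    """Gradient of smoothed isotropic TV (periodic borders for brevity)."""
    dx = np.diff(img, axis=1, append=img[:, :1])
    dy = np.diff(img, axis=0, append=img[:1, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def tv_pocs(A, y, shape, n_outer=20, n_tv=10, tv_step=0.02):
    """Alternate a Landweber data-consistency step, a non-negativity
    projection, and a few TV descent steps. A sketch only."""
    x = np.zeros(A.shape[1])
    v = np.random.default_rng(0).standard_normal(x.size)
    for _ in range(20):                      # power iteration for ||A||^2
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    lam = 1.0 / (np.linalg.norm(A @ v) ** 2 + 1e-12)
    for _ in range(n_outer):
        x = np.maximum(x + lam * (A.T @ (y - A @ x)), 0.0)  # POCS steps
        img = x.reshape(shape)
        for _ in range(n_tv):
            img = img - tv_step * grad_tv(img)              # TV descent
        x = img.ravel()
    return x.reshape(shape)
```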
Variance analysis of x-ray CT sinograms in the presence of electronic noise background.
Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing
2012-07-01
Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of an electronic noise background. The authors first studied the statistical moment properties of the detected signals in the CT transmission domain, where the noise of the detected signals is considered as quanta fluctuation upon an electronic noise background. Then the authors derived, via Taylor expansion, a new formula for the mean-variance relationship of the detected signals in the CT sinogram domain, wherein image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To gain experimental insight into the derived formula, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. The results demonstrated that the electronic noise background is significant when low-mAs (or low-dose) scans are performed. The influence of the electronic noise background should therefore be considered in low-dose CT imaging.
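The derived formula is not reproduced in the abstract; the sketch below is our hedged transcription of the mean-variance relationship reported in this line of work, and the constants should be verified against the paper before use:

```python
import numpy as np

def sinogram_variance(p_mean, I0, sigma_e2):
    """Mean-variance relationship for log-transformed CT sinogram data with
    an electronic noise background, as we transcribe it (verify before use):
        var(p) = (exp(p)/I0) * (1 + (exp(p)/I0) * (sigma_e^2 - 1.25))
    p_mean: mean line integral; I0: incident photon count per bin;
    sigma_e2: electronic noise variance in count units."""
    t = np.exp(p_mean) / I0
    return t * (1.0 + t * (sigma_e2 - 1.25))
```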
Respiratory-gated CT as a tool for the simulation of breathing artifacts in PET and PET/CT.
Hamill, J J; Bosmans, G; Dekker, A
2008-02-01
Respiratory motion in PET and PET/CT blurs the images and can cause attenuation-related errors in quantitative parameters such as standard uptake values. In rare instances, this problem even causes localization errors and the disappearance of tumors that should be detectable. Attenuation errors are severe near the diaphragm and can be enhanced when the attenuation correction is based on a CT series acquired during a breath-hold. To quantify the errors and identify the parameters associated with them, the authors performed a simulated PET scan based on respiratory-gated CT studies of five lung cancer patients. Diaphragmatic motion ranged from 8 to 25 mm in the five patients. The CT series were converted to 511-keV attenuation maps which were forward-projected and exponentiated to form sinograms of PET attenuation factors at each phase of respiration. The CT images were also segmented to form a PET object, moving with the same motion as the CT series. In the moving PET object, spherical 20 mm mobile tumors were created in the vicinity of the dome of the liver and immobile 20 mm tumors in the midchest region. The moving PET objects were forward-projected and attenuated, then reconstructed in several ways: phase-matched PET and CT, gated PET with ungated CT, ungated PET with gated CT, and conventional PET. Spatial resolution and statistical noise were not modeled. In each case, tumor uptake recovery factor was defined by comparing the maximum reconstructed pixel value with the known correct value. Mobile 10 and 30 mm tumors were also simulated in the case of a patient with 11 mm of breathing motion. Phase-matched gated PET and CT gave essentially perfect PET reconstructions in the simulation. Gated PET with ungated CT gave tumors of the correct shape, but recovery was too large by an amount that depended on the extent of the motion, as much as 90% for mobile tumors and 60% for immobile tumors. Gated CT with ungated PET resulted in blurred tumors and caused recovery errors between -50% and +75%. Recovery in clinical scans would be 0%-20% lower than stated because spatial resolution was not included in the simulation. Mobile tumors near the dome of the liver were subject to the largest errors in either case. Conventional PET for 20 mm tumors was quantitative in cases of motion less than 15 mm because of canceling errors in blurring and attenuation, but the recovery factors were too low by as much as 30% in cases of motion greater than 15 mm. The 10 mm tumors were blurred by motion to a greater extent, causing a greater SUV underestimation than in the case of 20 mm tumors, and the 30 mm tumors were blurred less. Quantitative PET imaging near the diaphragm requires proper matching of attenuation information to the emission information. The problem of missed tumors near the diaphragm can be reduced by acquiring attenuation-correction information near end expiration. A simple PET/CT protocol requiring no gating equipment also addresses this problem.
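The forward-project-and-exponentiate step described above is, schematically, the following (a sketch using skimage's parallel-beam radon transform; geometry and units are illustrative, not the authors' fan/cone setup):

```python
import numpy as np
from skimage.transform import radon

def attenuation_factor_sinogram(mu_map_511, pixel_cm, theta=None):
    """Forward-project a 511-keV attenuation map (mu in 1/cm) and exponentiate
    to obtain per-LOR PET attenuation factors, one sinogram per respiratory
    phase as in the simulation above. Parallel-beam geometry for simplicity."""
    if theta is None:
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    line_integrals = radon(mu_map_511, theta=theta) * pixel_cm
    return np.exp(-line_integrals)        # attenuation factors in (0, 1]
```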
HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.
Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D
2002-08-07
Using iterative three-dimensional (3D) reconstruction techniques for positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speeding up reconstruction, we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently, using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and have also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C. Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, images that are nominally identical projections are often grouped, aligned, and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccuracy of the class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that estimates the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using Laguerre-Fourier expansions, and both the Hermite and Laguerre-Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
Assessment of the impact of modeling axial compression on PET image reconstruction.
Belzunce, Martin A; Reader, Andrew J
2017-10-01
To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Although axial compression has been used since the very dawn of 3D PET reconstruction, there are still no extensive studies of its impact, and of its degree of modeling during reconstruction, on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained contrast values similar to those of the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover the higher frequencies accurately. Modeling the axial compression also achieved a lower coefficient of variation, but with an increase in intervoxel correlations. The unmatched projector/backprojector achieved contrast values similar to the matched version at considerably lower reconstruction times, but at the cost of noisier images. For a line source scan, the reconstructions with modeling of the axial compression achieved resolution similar to the span 1 reconstructions. Axial compression applied to PET sinograms was found to have a negligible impact for span values lower than 7. For span values up to 21, the spatial resolution degradation due to axial compression can be almost completely compensated for by modeling this effect in the system matrix, at the expense of considerably larger processing times and higher intervoxel correlations, while retaining the storage benefit of compressed data. For even higher span values, the resolution loss cannot be completely compensated, possibly due to an effective null space in the system. The use of an unmatched projector/backprojector proved to be a practical solution to compensate for the spatial resolution degradation at a reasonable computational cost, but can lead to noisier images. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using the multi-frame blurred images is incorporated into the regularization constraint. Second, we treat image regions affected by saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer negative effects compared to state-of-the-art methods.
Pineda, Angel R; Barrett, Harrison H
2004-02-01
The current paradigm for evaluating detectors in digital radiography relies on Fourier methods, which assume a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and the detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to that observed by adding electronic noise to the deterministic blurring model. Background structure does away with the shift invariance. Anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.
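For reference, Hotelling detectability for a known signal difference with data covariance K is the standard quadratic form below; it reduces to a Fourier (NPS-based) expression only when K is stationary, which is exactly the assumption the abstract questions. A minimal sketch:

```python
import numpy as np

def hotelling_snr2(delta_s, K):
    """Hotelling observer detectability SNR^2 = ds' K^{-1} ds for a known
    signal difference ds and data covariance K. Only for stationary
    (circulant) K does this match the Fourier/NEQ figure of merit."""
    return float(delta_s @ np.linalg.solve(K, delta_s))
```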
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases show that the proposed method achieves competitive performance in both recognition accuracy and computational efficiency.
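A generic sketch of the low-rank plus sparse decomposition named above, via the standard inexact augmented-Lagrangian updates (a textbook solver, not necessarily the paper's exact one; default parameters are common heuristics):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def rpca(D, lam=None, mu=None, n_iter=100):
    """Robust PCA, D ~ L + S, with L low-rank and S sparse."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                     # low-rank step
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)  # sparse step
        Y += mu * (D - L - S)                                 # dual ascent
    return L, S
```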
Baumueller, Stephan; Hilty, Regina; Nguyen, Thi Dan Linh; Weder, Walter; Alkadhi, Hatem; Frauenfelder, Thomas
2016-01-01
The purpose of this study was to evaluate the influence of sinogram-affirmed iterative reconstruction (SAFIRE) on the quantification of lung volume and pulmonary emphysema in low-dose chest computed tomography, compared with filtered back projection (FBP). Enhanced or nonenhanced low-dose chest computed tomography was performed in 20 patients with chronic obstructive pulmonary disease (group A) and in 20 patients without lung disease (group B). Data sets were reconstructed with FBP and SAFIRE strength levels 3 to 5. Two readers semiautomatically evaluated lung volumes and automatically quantified pulmonary emphysema, and another assessed image quality. Radiation dose parameters were recorded. Lung volume between FBP and SAFIRE 3 to 5 was not significantly different in either group (all P > 0.05). Compared with FBP, total emphysema volume was significantly lower for reconstructions with SAFIRE 4 and 5 (mean difference, 0.56 and 0.79 L; all P < 0.001). No images were of nondiagnostic quality. Sinogram-affirmed iterative reconstruction does not alter lung volume measurements, although quantification of lung emphysema is affected at higher strength levels.
Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.
Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke
2015-04-01
Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty frames of continuous 2D coronal dMRI using a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on classification of Hessian-matrix-filtered images, with manual segmentation of the blood vessels as the ground truth. In this pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of estimating the kernel. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16%. Image blurring in dMRI images can thus be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
Processing of CT sinograms acquired using a VRX detector
NASA Astrophysics Data System (ADS)
Jordan, Lawrence M.; DiBianca, Frank A.; Zou, Ping; Laughter, Joseph S.; Zeman, Herbert D.
2000-04-01
A 'variable resolution x-ray detector' (VRX) capable of resolving beyond 100 cycles/mm in a single dimension has been proposed by DiBianca et al. The use of detectors of this design for computed tomography (CT) imaging requires novel preprocessing of the data to correct for the detector's non-uniform imaging characteristics over its range of view. This paper describes algorithms developed specifically to adjust VRX data for varying magnification, source-to-detector range, and beam obliquity, and to sharpen reconstructions by deconvolving the ray impulse function. The preprocessing also incorporates nonlinear interpolation of the VRX raw data into canonical CT sinogram formats.
Modeling blur in various detector geometries for MeV radiography
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Watson, Scott A.; Hunter, James F.
2017-03-01
Monte Carlo transport codes have been used to model the detector blur and energy deposition in various detector geometries for applications in MeV radiography. Segmented scintillating detectors, in which low-Z scintillators are combined with a high-Z metal matrix, can be designed such that the resolution increases with increasing metal fraction. The combination of various types of metal intensification screens and storage phosphor imaging plates has also been studied. A storage phosphor coated directly onto a metal intensification screen has superior performance over a commercial plate. Stacks of storage phosphor plates and tantalum intensification screens show an increase in energy deposited and detective quantum efficiency with increasing plate number, at the expense of resolution. Selected detector geometries were tested by comparing simulated and experimental modulation transfer functions to validate the approach.
Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.
Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan
2017-07-24
Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68 Ga-PSMA scan, and 23 whole-body 18 F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.
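A hedged sketch of the joint-estimation idea: treating the scaled scatter as an additive background in the Poisson model yields a multiplicative EM-style update for the per-plane scales (variable names are illustrative, not the authors' code):

```python
import numpy as np

def update_scatter_scales(y, ax, s, plane_idx, alpha):
    """One EM-style update of per-plane scatter scales in the Poisson model
    ybar_i = (Ax)_i + alpha[p(i)] * s_i, using counts from the whole sinogram.
    y: measured counts; ax: current forward projection (Ax)_i; s: simulated
    single-scatter sinogram; plane_idx: sinogram-plane index of each LOR."""
    ybar = ax + alpha[plane_idx] * s
    ratio = np.where(ybar > 0, y / ybar, 0.0)
    num = np.bincount(plane_idx, weights=s * ratio, minlength=alpha.size)
    den = np.bincount(plane_idx, weights=s, minlength=alpha.size) + 1e-12
    return alpha * (num / den)
```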
Image Restoration for Fluorescence Planar Imaging with Diffusion Model
Gong, Yuzhu; Li, Yang
2017-01-01
Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of FPI images caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843
Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2017-01-01
Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters, from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on the measured, expected, and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time. PMID:29270539
Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon
2018-01-01
We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which adapts the filter to the imaging object through an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the DCT-domain character of the original image. The previous DCT2 mask for each phantom case had been manually well optimized, and its results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanned objects and shows results comparable to those of the manually optimized DCT2 algorithm, without full prior information about the imaging object.
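A minimal sketch of DCT-domain gap filling with a fixed mask, in the Papoulis-Gerchberg style; the adaptive, per-object mask redesign that is the contribution above is not reproduced:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_gap_fill(sino, gap, keep, n_iter=50):
    """Alternately enforce a low-frequency DCT support and the measured bins.
    `gap` marks detector-gap bins; `keep` is the boolean DCT-coefficient mask
    (built adaptively, per object, in the work above; fixed here)."""
    x = np.where(gap, 0.0, sino)
    for _ in range(n_iter):
        x = idctn(dctn(x, norm='ortho') * keep, norm='ortho')
        x[~gap] = sino[~gap]              # re-impose measured data
    return x
```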
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation has proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the local image structures. Fortunately, in natural images the many nonlocal patches similar to a given patch can provide a nonlocal constraint on the local structure. In this paper, we incorporate image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct edge structures and suppress jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Karampinos, Dimitrios C.; Banerjee, Suchandrima; King, Kevin F.; Link, Thomas M.; Majumdar, Sharmila
2011-01-01
Previous studies have shown that skeletal muscle diffusion tensor imaging (DTI) can non-invasively probe changes in the muscle fiber architecture and microstructure in diseased and damaged muscles. However, DTI fiber reconstruction in small muscles and in muscle regions close to aponeuroses and tendons remains challenging because of partial volume effects. Increasing the spatial resolution of skeletal muscle single-shot diffusion weighted (DW)-EPI can be hindered by the inherently low SNR of muscle DW-EPI due to the short muscle T2 and the high sensitivity of single-shot EPI to off-resonance effects and T2* blurring. In the present work, eddy-current compensated diffusion-weighted stimulated echo preparation is combined with sensitivity encoding (SENSE) to maintain good SNR properties and reduce the sensitivity to distortions and T2* blurring in high resolution skeletal muscle single-shot DW-EPI. An analytical framework is developed for optimizing the reduction factor and diffusion weighting time to achieve maximum SNR. Arguments for the selection of the experimental parameters are then presented considering the compromise between SNR, B0-induced distortions, T2* blurring effects and tissue incoherent motion effects. Based on the selected parameters in a high resolution skeletal muscle single-shot DW-EPI protocol, imaging protocols at lower acquisition matrix sizes are defined with matched bandwidth in the phase-encoding direction and SNR. In vivo results show that high resolution skeletal muscle DTI with minimized sensitivity to geometric distortions and T2* blurring is feasible using the proposed methodology. In particular, a significant benefit is demonstrated from reducing partial volume effects on resolving multi-pennate muscles and muscles with small cross sections in calf muscle DTI. PMID:22081519
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
NASA Astrophysics Data System (ADS)
Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif
2016-10-01
Super resolution (SR) refers to the generation of a high-resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or multiple frames containing several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo remote sensing (RS) satellite images to the super-resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the subpixel shifts among the LR images. Then, the warping, blurring, and down-sampling matrix operators are created as sparse matrices to avoid the high memory and computational requirements that would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate improved quantitative performance against the standard interpolation method as well as improved qualitative results based on expert evaluations.
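A sketch of how such sparse operators might be assembled; the sizes, blur width, and identity warp are illustrative assumptions, and the full method stacks one such operator per LR frame and adds the regularizer:

```python
import numpy as np
import scipy.sparse as sp

def blur_matrix(n, sigma=1.0, radius=2):
    """1D Gaussian blur as a sparse banded matrix; 2D blur = Kronecker product."""
    offsets = list(range(-radius, radius + 1))
    w = np.exp(-0.5 * (np.array(offsets) / sigma) ** 2)
    w /= w.sum()
    return sp.diags(w, offsets, shape=(n, n), format='csr')

def decimation_matrix(n, q):
    """Sparse operator keeping every q-th sample of a length-n signal."""
    rows = np.arange(n // q)
    return sp.csr_matrix((np.ones(rows.size), (rows, rows * q)),
                         shape=(n // q, n))

# Illustrative sizes: N x N HR image, decimation factor q; the warp W_k built
# from the estimated subpixel shifts is another sparse matrix (identity here).
N, q = 128, 2
B = sp.kron(blur_matrix(N), blur_matrix(N))            # separable 2D blur
D = sp.kron(decimation_matrix(N, q), decimation_matrix(N, q))
A_k = D @ B                                            # per-frame model y_k = A_k x
# Stacking the A_k vertically and minimizing ||y - A x||^2 + lambda ||L x||^2
# (Laplacian or TV regularizer) gives the one-step HR update used per iteration.
```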
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, as blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named the small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out various comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images. The experimental results show that our approach achieves performance superior to state-of-the-art methods under conditions with blurred images, while not adding significant computational cost to the original VO algorithms. PMID:27399704
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative, and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, where both eyes observe the same image. The subjective test results indicate that blur discrimination thresholds remain constant as the disparity value varies. This further indicates that binocular disparity does not affect blur discrimination thresholds and that the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D. We present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
Blur Detection is Unaffected by Cognitive Load.
Loschky, Lester C; Ringer, Ryan V; Johnson, Aaron P; Larson, Adam M; Neider, Mark; Kramer, Arthur F
2014-03-01
Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, apparently blur detection in real-world scene images is unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
Effect of visual target blurring on accommodation under distance viewing
NASA Astrophysics Data System (ADS)
Iwata, Yo; Handa, Tomoya; Ishikawa, Hitoshi
2018-04-01
Evaluation of the spline reconstruction technique for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra; Gaitanis, Anastasios
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of "custom made" cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom were acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of increased COV in the reconstructed images. Finally, in the real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, while slightly increasing the noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.
Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.
2015-11-01
The integration of physiological (PET) and anatomical (CT) images for cancer delineation requires an accurate spatial registration technique. Although a hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motion. This paper proposes a hybrid feature-intensity-based registration technique for the hybrid PET/CT scanner. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the segmented structures (lung, heart, and tumor) in both images. The registration was performed based on a modified multi-modality demons registration with a multiresolution scheme. Apart from visible improvements on inspection, the proposed registration technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets show larger improvements in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
Blur Clarified: A Review and Synthesis of Blur Discrimination
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J.
2011-01-01
Blur is an important attribute of human spatial vision, and sensitivity to blur has been the subject of considerable experimental research and theoretical modeling. Often these models have invoked specialized concepts or mechanisms, such as intrinsic blur, multiple channels, or blur estimation units. In this paper we review the several experimental studies of blur discrimination and find that they are in broad empirical agreement. Contrary to previous modeling efforts, however, we find that the essential features of blur discrimination are fully accounted for by a visible contrast energy model (ViCE), in which two spatial patterns are distinguished when the integrated difference between their masked local contrast energy responses reaches a threshold value.
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper focuses on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinned motion-path blur model that describes the camera's path. This thinned motion path used to build the blur kernel model is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF estimation. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and thereby the blur kernel, from the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, demonstrating its overall effectiveness. In addition, compared with conventional blur kernel estimation, experimental results show that the proposed method of using a thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency, and better accuracy, which contributes to better restoration of motion-blurred star images.
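For reference, the classic Richardson-Lucy iteration with a known kernel looks as follows; the paper's contribution, extracting that kernel by thinning the star trajectory, is not reproduced here:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Classic RL deconvolution for a known, normalized blur kernel (here the
    kernel would come from the thinned star path, which is assumed given)."""
    x = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_adj = psf[::-1, ::-1]                      # correlation (adjoint) kernel
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode='same') + 1e-12
        x *= fftconvolve(blurred / est, psf_adj, mode='same')
    return x
```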
The natural statistics of blur
Sprague, William W.; Cooper, Emily A.; Reissier, Sylvain; Yellapragada, Baladitya; Banks, Martin S.
2016-01-01
Blur from defocus can be both useful and detrimental for visual perception: It can be useful as a source of depth information and detrimental because it degrades image quality. We examined these aspects of blur by measuring the natural statistics of defocus blur across the visual field. Participants wore an eye-and-scene tracker that measured gaze direction, pupil diameter, and scene distances as they performed everyday tasks. We found that blur magnitude increases with increasing eccentricity. There is a vertical gradient in the distances that generate defocus blur: Blur below the fovea is generally due to scene points nearer than fixation; blur above the fovea is mostly due to points farther than fixation. There is no systematic horizontal gradient. Large blurs are generally caused by points farther rather than nearer than fixation. Consistent with the statistics, participants in a perceptual experiment perceived vertical blur gradients as slanted top-back whereas horizontal gradients were perceived equally as left-back and right-back. The tendency for people to see sharp as near and blurred as far is also consistent with the observed statistics. We calculated how many observations will be perceived as unsharp and found that perceptible blur is rare. Finally, we found that eye shape in ground-dwelling animals conforms to that required to put likely distances in best focus. PMID:27580043
Karampinos, Dimitrios C; Banerjee, Suchandrima; King, Kevin F; Link, Thomas M; Majumdar, Sharmila
2012-05-01
Previous studies have shown that skeletal muscle diffusion tensor imaging (DTI) can noninvasively probe changes in the muscle fiber architecture and microstructure in diseased and damaged muscles. However, DTI fiber reconstruction in small muscles and in muscle regions close to aponeuroses and tendons remains challenging because of partial volume effects. Increasing the spatial resolution of skeletal muscle single-shot diffusion-weighted echo planar imaging (DW-EPI) can be hindered by the inherently low signal-to-noise ratio (SNR) of muscle DW-EPI because of the short muscle T(2) and the high sensitivity of single-shot EPI to off-resonance effects and T(2)* blurring. In this article, eddy current-compensated diffusion-weighted stimulated-echo preparation is combined with sensitivity encoding (SENSE) to maintain good SNR properties and to reduce the sensitivity to distortions and T(2)* blurring in high-resolution skeletal muscle single-shot DW-EPI. An analytical framework is developed to optimize the reduction factor and diffusion weighting time to achieve maximum SNR. Arguments for the selection of the experimental parameters are then presented considering the compromise between SNR, B(0)-induced distortions, T(2)* blurring effects and tissue incoherent motion effects. On the basis of the selected parameters in a high-resolution skeletal muscle single-shot DW-EPI protocol, imaging protocols at lower acquisition matrix sizes are defined with matched bandwidth in the phase-encoding direction and SNR. In vivo results show that high-resolution skeletal muscle DTI with minimized sensitivity to geometric distortions and T(2)* blurring is feasible using the proposed methodology. In particular, a significant benefit is demonstrated from a reduction in partial volume effects for resolving multi-pennate muscles and muscles with small cross-sections in calf muscle DTI. Copyright © 2011 John Wiley & Sons, Ltd.
Robust x-ray based material identification using multi-energy sinogram decomposition
NASA Astrophysics Data System (ADS)
Yuan, Yaoshen; Tracey, Brian; Miller, Eric
2016-05-01
There is growing interest in developing X-ray computed tomography (CT) imaging systems with improved ability to discriminate material types, going beyond the attenuation imaging provided by most current systems. Dual-energy CT (DECT) systems can partially address this problem by estimating Compton and photoelectric (PE) coefficients of the materials being imaged, but DECT is greatly degraded by the presence of metal or other materials with high attenuation. Here we explore the advantages of multi-energy CT (MECT) systems based on photon-counting detectors. The utility of MECT has been demonstrated in medical applications where photon-counting detectors allow for the resolution of absorption K-edges. Our primary concern is aviation security applications where K-edges are rare. We simulate phantoms with differing amounts of metal (high, medium and low attenuation), both for switched-source DECT and for MECT systems, and include a realistic model of detector energy resolution. We extend the DECT sinogram decomposition method of Ying et al. to MECT, allowing estimation of separate Compton and photoelectric sinograms. We furthermore introduce a weighting based on a quadratic approximation to the Poisson likelihood function that deemphasizes energy bins with low signal. Simulation results show that the proposed approach succeeds in estimating material properties even in high-attenuation scenarios where the DECT method fails, improving the signal to noise ratio of reconstructions by over 20 dB for the high-attenuation phantom. Our work demonstrates the potential of using photon counting detectors for stably recovering material properties even when high attenuation is present, thus enabling the development of improved scanning systems.
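The decomposition and weighting steps can be pictured per detector ray. Below is a hedged sketch, not the authors' implementation: the per-bin spectra S, the Klein-Nishina Compton basis, and the E^-3 photoelectric basis are assumed inputs, and the square-root weighting approximates the Poisson-likelihood weighting described above.

```python
# Decompose one ray's multi-energy counts into Compton (ac) and
# photoelectric (ap) line integrals by weighted least squares.
import numpy as np
from scipy.optimize import least_squares

def klein_nishina(E, E0=510.999):      # E in keV; standard KN energy dependence
    a = E / E0
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a)**2)

def forward_counts(ac, ap, S, E):
    atten = np.exp(-ac * klein_nishina(E) - ap * E**-3)
    return S @ atten                   # expected counts per energy bin

def decompose_ray(counts, S, E):
    w = np.sqrt(np.maximum(counts, 1.0))   # ~Poisson std; downweights low bins
    resid = lambda x: (forward_counts(x[0], x[1], S, E) - counts) / w
    sol = least_squares(resid, x0=[1.0, 1.0], bounds=(0, np.inf))
    return sol.x                       # (Compton, photoelectric) sinogram entries
```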
Effects of Optical Blur Reduction on Equivalent Intrinsic Blur
Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz
2015-01-01
Purpose To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods Twelve visually normal individuals (mean ± SD age, 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was only marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions The reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538
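For orientation, equivalent intrinsic blur is commonly treated as combining with optical blur in quadrature; the tiny sketch below illustrates that bookkeeping. This is an assumption here, since the abstract only cites "a previously described model", and the numbers are illustrative.

```python
# Quadrature-subtraction sketch: sigma_int = sqrt(sigma_total^2 - sigma_opt^2).
import numpy as np

def intrinsic_blur(sigma_total, sigma_opt):
    """Both inputs in the same units (e.g., arcmin); clipped at zero."""
    return np.sqrt(np.maximum(sigma_total**2 - sigma_opt**2, 0.0))

# e.g., before vs. after AO correction (illustrative values only):
print(intrinsic_blur(1.2, 0.7), intrinsic_blur(0.9, 0.2))
```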
Fast restoration approach for motion blurred image based on deconvolution under the blurring paths
NASA Astrophysics Data System (ADS)
Shi, Yu; Song, Jie; Hua, Xia
2015-12-01
For real-time motion deblurring, it is of utmost importance to achieve higher processing speed while maintaining comparable image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that rotates the blurred image to align with the blurring path. The computational time is thereby reduced sharply, because the one-dimensional Richardson-Lucy method can use one-dimensional Fast Fourier Transforms. To obtain accurate results after the rotation, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective at reducing motion blur along the blur paths.
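The speed-up rests on aligning the blur path with an image axis so the deconvolution factorizes into independent rows. The hedged sketch below is not the paper's code: the path is assumed linear at a known angle, the 1-D kernel is assumed given, circular FFT convolutions stand in for the paper's boundary handling, and scipy's rotate supplies the interpolation step mentioned above.

```python
import numpy as np
from scipy.ndimage import rotate

def rl_1d(rows, kernel, n_iter=20, eps=1e-7):
    """Row-wise 1-D Richardson-Lucy using FFT (circular) convolutions."""
    n = rows.shape[1]
    K = np.fft.rfft(kernel, n)
    fwd = lambda x: np.fft.irfft(np.fft.rfft(x, axis=1) * K, n, axis=1)
    adj = lambda x: np.fft.irfft(np.fft.rfft(x, axis=1) * np.conj(K), n, axis=1)
    est = np.full_like(rows, rows.mean())
    for _ in range(n_iter):
        est *= adj(rows / (fwd(est) + eps))   # standard multiplicative RL update
    return est

def deblur_along_path(img, angle_deg, kernel_1d, n_iter=20):
    aligned = rotate(img, -angle_deg, reshape=False, order=1)  # path -> x-axis
    restored = rl_1d(aligned, kernel_1d, n_iter)
    return rotate(restored, angle_deg, reshape=False, order=1)
```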
Response normalization and blur adaptation: Data and multi-scale model
Elliott, Sarah L.; Georgeson, Mark A.; Webster, Michael A.
2011-01-01
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log–log) slopes from −2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels. PMID:21307174
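Blurred and sharpened adaptors of the kind used above can be produced by re-sloping an image's amplitude spectrum; a minimal sketch follows (illustrative, not the authors' stimulus code). Here delta = -1 steepens a 1/f image toward "blurred" (slope -2), while delta = +1 flattens it toward "sharpened" (slope 0).

```python
# Multiply the amplitude spectrum by f**delta to change its log-log slope.
import numpy as np

def reslope(img, delta):
    F = np.fft.fft2(img - img.mean())
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                        # leave the DC term untouched
    out = np.fft.ifft2(F * f**delta).real
    return out + img.mean()
```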
Adapting to blur produced by ocular high-order aberrations
Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana
2011-01-01
The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer’s HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image. PMID:21712375
Chromatic blur perception in the presence of luminance contrast.
Jennings, Ben J; Kingdom, Frederick A A
2017-06-01
Hel-Or showed that blurring the chromatic but not the luminance layer of an image of a natural scene failed to elicit any impression of blur. Subsequent studies have suggested that this effect is due either to chromatic blur being masked by spatially contiguous luminance edges in the scene (Journal of Vision 13 (2013) 14), or to a relatively compressed transducer function for chromatic blur (Journal of Vision 15 (2015) 6). To test between the two explanations we conducted experiments using both images of natural scenes and simple edges as stimuli. First, we found that in colour-and-luminance images of natural scenes more chromatic blur was needed to perceptually match a given level of blur in an isoluminant, i.e., colour-only, scene. However, when the luminance layer in the scene was rotated relative to the chromatic layer, thus removing the colour-luminance edge correlations, the matched blur levels were near equal. Both results are consistent with Sharman et al.'s explanation. Second, when observers matched the blurs of luminance-only with isoluminant scenes, the matched blurs were equal, against Kingdom et al.'s prediction. Third, we measured the perceived blur in a square-wave as a function of (i) contrast, (ii) the number of luminance edges and (iii) the relative spatial phase between the colour and luminance edges. We found that perceived chromatic blur was dependent on both relative phase and the number of luminance edges, or dependent on the luminance contrast if only a single edge was present. We conclude that the Hel-Or effect is largely due to masking of chromatic blur by spatially contiguous luminance edges. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter
2017-03-01
Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report-grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting-grade monitor. 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for both monitors and the angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between the 2.3- and 5-MP monitors. The technical recall rates for the 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
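The angular sizes quoted above follow from simple trigonometry. The sketch below reproduces them under the assumption of roughly a 75 cm viewing distance, which is not stated here and is chosen only because it matches the reported 55-275 arc s range.

```python
# Angular subtense of a small extent at a given viewing distance.
import numpy as np

def angular_size_arcsec(extent_mm, distance_mm=750.0):
    return np.degrees(2 * np.arctan(extent_mm / (2 * distance_mm))) * 3600

for motion in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(motion, round(angular_size_arcsec(motion)))   # ~55 ... ~275 arc s
```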
Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong
2014-01-01
As palmprints are captured using non-contact devices, image blur is inevitably generated by defocus, which degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. Theoretical analysis of the blur shows that stable features exist in the image across different degrees of blurring. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (a theoretical conclusion that needs to be further verified by experiment). Next, an algorithm based on a weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity between palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results confirm the theoretical conclusion that the structure layer is stable across different blurring scales, and WRHOG proves to be a robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.
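A hedged sketch of the first two stages under stated stand-ins: a Gaussian kernel as the defocus degradation model (as in the abstract) and scikit-image's TV denoiser in place of the Vese-Osher decomposition, splitting the image into a stable structure (cartoon) layer and a texture residual.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_tv_chambolle

def blur_and_decompose(palm, defocus_sigma=2.0, weight=0.1):
    """Simulate defocus, then split into structure and texture layers."""
    blurred = gaussian_filter(palm.astype(float), defocus_sigma)
    structure = denoise_tv_chambolle(blurred, weight=weight)  # stable layer
    texture = blurred - structure                             # oscillatory layer
    return structure, texture
```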
The Effect of Dioptric Blur on Reading Performance
Chung, Susana T.L.; Jarvis, Samuel H.; Cheung, Sing-Hang
2013-01-01
Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, for five levels of optical blur (no blur, 0.5, 1, 2 and 3D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using 4-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2D, but was ~23% slower for 3D of blur. When the amount of blur increased from 0 (no blur) to 3D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity and visual acuity imply that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed. PMID:17442363
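The threshold-print-size construct can be made concrete with a toy fit. The sketch below uses an ordinary saturating curve and purely illustrative data points to show the 80%-of-maximum definition; it is not the authors' nonlinear mixed-effects model.

```python
# Fit a saturating reading-speed curve and extract the print size giving
# 80% of the maximum speed.
import numpy as np
from scipy.optimize import curve_fit

def mnread_curve(print_size, v_max, s0, k):
    """Speed rises with print size and plateaus at v_max."""
    return v_max * (1 - np.exp(-k * (print_size - s0)))

sizes = np.array([-0.1, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # logMAR
speeds = np.array([40, 90, 140, 170, 195, 200, 202, 201.0])   # wpm (illustrative)
(v_max, s0, k), _ = curve_fit(mnread_curve, sizes, speeds, p0=[200, -0.2, 5])
threshold = s0 - np.log(1 - 0.8) / k    # solve mnread_curve == 0.8 * v_max
print(v_max, threshold)
```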
A blur-invariant local feature for motion blurred image matching
NASA Astrophysics Data System (ADS)
Tong, Qiang; Aoki, Terumasa
2017-07-01
Image matching between a blurred (caused by camera motion, defocus, etc.) image and a non-blurred image is a critical task for many image/video applications. However, most existing local feature schemes fail at this task. This paper presents a blur-invariant descriptor and a novel local feature scheme comprising the descriptor and an interest point detector based on moment symmetry (the authors' previous work). The descriptor is based on a new concept, the center peak moment-like element (CPME), which is robust to blur and boundary effects. By construction from CPMEs, the descriptor is also distinctive and thus suitable for image matching. Experimental results show our scheme outperforms state-of-the-art methods for blurred image matching.
Role of parafovea in blur perception.
Venkataraman, Abinaya Priya; Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Lundström, Linda; Marcos, Susana
2017-09-01
The blur experienced by our visual system is not uniform across the visual field. Additionally, lens designs with variable power profiles, such as the contact lenses used to correct presbyopia and to control myopia progression, create variable blur from the fovea to the periphery. The perceptual changes associated with a varying blur profile across the visual field are unclear. We therefore measured the perceived neutral focus with images of different angular subtense (from 4° to 20°) and found that the amount of blur for which focus is perceived as neutral increases when the stimulus is extended to cover the parafovea. We also studied the changes in central perceived neutral focus after adaptation to images with a similar magnitude of optical blur across the image or with blur varying from center to periphery. Altering the blur in the periphery had little or no effect on the shift of perceived neutral focus following adaptation to normal/blurred central images. These perceptual outcomes should be considered when designing bifocal optical solutions for myopia or presbyopia. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Yang, Wen Jie; Yan, Fu Hua; Liu, Bo; Pang, Li Fang; Hou, Liang; Zhang, Huan; Pan, Zi Lai; Chen, Ke Min
2013-01-01
To evaluate the performance of sinogram-affirmed iterative reconstruction (SAFIRE) on the image quality of low-dose lung computed tomographic (CT) screening compared with filtered back projection (FBP). Three hundred four patients undergoing annual low-dose lung CT screening were examined on a dual-source CT system at 120 kilovolt (peak) with a reference tube current-time product of 40 mA·s. Six image series were reconstructed: one data set with FBP and 5 data sets with SAFIRE at reconstruction strengths 1 to 5. Image noise was recorded, and subjective scores of image noise, image artifacts, and overall image quality were also assessed by 2 radiologists. The mean ± SD weight for all patients was 66.3 ± 12.8 kg, and the body mass index was 23.4 ± 3.2. The mean ± SD dose-length product was 95.2 ± 30.6 mGy cm, and the mean ± SD effective dose was 1.6 ± 0.5 mSv. The observer agreements for image noise grade, artifact grade, and overall image quality were 0.785, 0.595 and 0.512, respectively. Among the 6 data sets, both the measured objective image noise and the subjective image noise were highest for FBP, and image noise decreased with increasing SAFIRE reconstruction strength. The strength-3 (S3) data sets obtained the best image quality scores. Sinogram-affirmed iterative reconstruction can significantly improve the image quality of low-dose lung CT screening compared with FBP, and SAFIRE with reconstruction strength 3 is a pertinent choice for low-dose lung CT.
Image-Based 2D Re-Projection for Attenuation Substitution in PET Neuroimaging.
Laymon, Charles M; Minhas, Davneet S; Becker, Carl R; Matan, Cristy; Oborski, Matthew J; Price, Julie C; Mountz, James M
2018-02-27
In dual modality positron emission tomography (PET)/magnetic resonance imaging (MRI), attenuation correction (AC) methods are continually improving. Although a new AC can sometimes be generated from existing MR data, its application requires a new reconstruction. We evaluate an approximate 2D projection method that allows offline image-based reprocessing. 2-Deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) brain scans were acquired (Siemens HR+) for six subjects. Attenuation data were obtained using the scanner's transmission source (SAC). Additional scanning was performed on a Siemens mMR including production of a Dixon-based MR AC (MRAC). The MRAC was imported to the HR+ and the PET data were reconstructed twice: once using native SAC (ground truth); once using the imported MRAC (imperfect AC). The re-projection method was implemented as follows. The MRAC PET was forward projected to approximately reproduce attenuation-corrected sinograms. The SAC and MRAC images were forward projected and converted to attenuation-correction factors (ACFs). The MRAC ACFs were removed from the MRAC PET sinograms by division; the SAC ACFs were applied by multiplication. The regenerated sinograms were reconstructed by filtered back projection to produce images (SUBAC PET) in which SAC has been substituted for MRAC. Ideally SUBAC PET should match SAC PET. Via coregistered T1 images, FreeSurfer (FS; MGH, Boston) was used to define a set of cortical gray matter regions of interest. Regional activity concentrations were extracted for SAC PET, MRAC PET, and SUBAC PET. SUBAC PET showed substantially smaller root mean square error than MRAC PET, with averaged values of 1.5% versus 8.1%. Re-projection is a viable image-based method for the application of an alternate attenuation correction in neuroimaging.
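The sinogram bookkeeping described above is easy to mirror with parallel-beam stand-ins. The following sketch uses scikit-image's radon/iradon in place of the scanner projectors, with attenuation maps assumed to be in units of 1/pixel; all names are illustrative.

```python
# Swap MRAC attenuation-correction factors for SAC ones in projection space.
import numpy as np
from skimage.transform import radon, iradon

def substitute_attenuation(pet_mrac, mu_mrac, mu_sac, theta):
    """pet_mrac: MRAC-corrected PET image; mu_*: attenuation maps (1/pixel)."""
    emis = radon(pet_mrac, theta)                # ~attenuation-corrected sinogram
    acf_mrac = np.exp(radon(mu_mrac, theta))     # attenuation-correction factors
    acf_sac = np.exp(radon(mu_sac, theta))
    fixed = emis / acf_mrac * acf_sac            # remove MRAC ACFs, apply SAC ACFs
    return iradon(fixed, theta, filter_name='ramp')
```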
Single neural code for blur in subjects with different interocular optical blur orientation
Radhakrishnan, Aiswaryah; Sawides, Lucie; Dorronsoro, Carlos; Peli, Eli; Marcos, Susana
2015-01-01
The ability of the visual system to compensate for differences in blur orientation between eyes is not well understood. We measured the orientation of the internal blur code in both eyes of the same subject monocularly by presenting pairs of images blurred with real ocular point spread functions (PSFs) of similar blur magnitude but varying in orientations. Subjects assigned a level of confidence to their selection of the best perceived image in each pair. Using a classification-images–inspired paradigm and applying a reverse correlation technique, a classification map was obtained from the weighted averages of the PSFs, representing the internal blur code. Positive and negative neural PSFs were obtained from the classification map, representing the neural blur for best and worse perceived blur, respectively. The neural PSF was found to be highly correlated in both eyes, even for eyes with different ocular PSF orientations (rPos = 0.95; rNeg = 0.99; p < 0.001). We found that in subjects with similar and with different ocular PSF orientations between eyes, the orientation of the positive neural PSF was closer to the orientation of the ocular PSF of the eye with the better optical quality (average difference was ∼10°), while the orientation of the positive and negative neural PSFs tended to be orthogonal. These results suggest a single internal code for blur with orientation driven by the orientation of the optical blur of the eye with better optical quality. PMID:26114678
An improved robust blind motion de-blurring algorithm for remote sensing images
NASA Astrophysics Data System (ADS)
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to the local image characteristics in order to preserve fine details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
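As a rough illustration of the final reconstruction stage only (kernel estimation omitted), here is a hedged TV-l2 gradient-descent sketch with a spatially adaptive regularization weight that is lowered near strong edges; all constants are illustrative and convergence is not tuned.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from scipy.signal import fftconvolve

def tv_l2_deblur(b, k, n_iter=200, step=0.5, lam0=0.02):
    g = gaussian_gradient_magnitude(b, sigma=2.0)
    lam = lam0 / (1.0 + 10.0 * g / g.max())       # weaker smoothing at edges
    x = b.copy()
    kf = k[::-1, ::-1]                            # adjoint of the convolution
    for _ in range(n_iter):
        r = fftconvolve(x, k, mode='same') - b    # data residual
        gy, gx = np.gradient(x)
        norm = np.hypot(gy, gx) + 1e-8
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
        x -= step * (fftconvolve(r, kf, mode='same') - lam * div)
    return x
```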
NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.
2017-09-01
Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work investigates whether multiple quality metrics for computed tomography can benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
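A trivial sketch of the two data handlings being compared: collapsing the 128 channels into a single grayscale sinogram versus summing them into a few energy bins. The array layout is an assumption.

```python
import numpy as np

def rebin(sino_multichannel, n_bins=4):
    """sino_multichannel: (rows, cols, 128) photon-count array."""
    chans = np.array_split(np.arange(sino_multichannel.shape[-1]), n_bins)
    binned = np.stack([sino_multichannel[..., c].sum(-1) for c in chans], -1)
    grayscale = sino_multichannel.sum(-1)       # conventional single-channel data
    return binned, grayscale
```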
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. A tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two-stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
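Once the sparse system matrix is in hand, the node intensities are recovered with standard ML-EM; a minimal sketch follows. Mesh construction and the analytical matrix formula are omitted, and A's layout (scipy sparse, rows = projection bins, columns = mesh nodes) is assumed.

```python
import numpy as np

def mlem(A, proj, n_iter=50, eps=1e-12):
    """A: scipy.sparse CSR matrix; proj: measured projection vector."""
    sens = np.asarray(A.sum(axis=0)).ravel()    # sensitivity image, A^T * 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        ratio = proj / (A @ x + eps)            # measured / modeled projections
        x *= (A.T @ ratio) / (sens + eps)       # multiplicative EM update
    return x
```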
Theory of reflectivity blurring in seismic depth imaging
NASA Astrophysics Data System (ADS)
Thomson, C. J.; Kitchenside, P. W.; Fletcher, R. P.
2016-05-01
A subsurface extended image gather obtained during controlled-source depth imaging yields a blurred kernel of an interface reflection operator. This reflectivity kernel or reflection function comprises the interface plane-wave reflection coefficients and so, in principle, the gather contains amplitude versus offset or angle information. We present a modelling theory for extended image gathers that accounts for variable illumination and blurring, under the assumption of a good migration-velocity model. The method involves forward modelling as well as migration or back propagation so as to define a receiver-side blurring function, which contains the effects of the detector array for a given shot. Composition with the modelled incident wave and summation over shots then yields an overall blurring function that relates the reflectivity to the extended image gather obtained from field data. The spatial evolution or instability of blurring functions is a key concept, and there is generally not just spatial blurring in the apparent reflectivity, but also slowness or angle blurring. Gridded blurring functions can be estimated with, for example, a reverse-time migration modelling engine. A calibration step is required to account for ad hoc band limitedness in the modelling, and the method also exploits blurring-function reciprocity. To demonstrate the concepts, we show numerical examples of various quantities using the well-known SIGSBEE test model and a simple salt-body overburden model, both in 2-D. The moderately strong slowness/angle blurring in the latter model suggests that the effect on amplitude versus offset or angle analysis should be considered in more realistic structures. Although the description and examples are for 2-D, the extension to 3-D is conceptually straightforward. The computational cost of overall blurring functions implies their targeted use for the foreseeable future, for example, in reservoir characterization. The description is for scalar waves, but the extension to elasticity is foreseeable, and we emphasize the separation of the overburden and survey-geometry blurring effects from the nature of the target scatterer.
Adaptation to interocular differences in blur
Kompaniez, Elysse; Sawides, Lucie; Marcos, Susana; Webster, Michael A.
2013-01-01
Adaptation to a blurred image causes a physically focused image to appear too sharp, and shifts the point of subjective focus toward the adapting blur, consistent with a renormalization of perceived focus. We examined whether and how this adaptation normalizes to differences in blur between the two eyes, which can routinely arise from differences in refractive errors. Observers adapted to images filtered to simulate optical defocus or different axes of astigmatism, as well as to images that were isotropically blurred or sharpened by varying the slope of the amplitude spectrum. Adaptation to the different types of blur produced strong aftereffects that showed strong transfer across the eyes, as assessed both in a monocular adaptation task and in a contingent adaptation task in which the two eyes were simultaneously exposed to different blur levels. Selectivity for the adapting eye was thus generally weak. When one eye was exposed to a sharper image than the other, the aftereffects also tended to be dominated by the sharper image. Our results suggest that while short-term adaptation can rapidly recalibrate the perception of blur, it cannot do so independently for the two eyes, and that the binocular adaptation of blur is biased by the sharper of the two eyes' retinal images. PMID:23729770
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
To address the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve tracking robustness. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge and texture features, and a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and the like.
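The region covariance descriptor at the heart of the fusion is compact to write down. The sketch below uses RGB channels plus Sobel gradient magnitudes as stand-ins for the colour, edge and texture features named above; the exact feature set is an assumption.

```python
import numpy as np
from scipy.ndimage import sobel

def region_covariance(rgb_region):
    """rgb_region: (h, w, 3) float array; returns a d x d covariance descriptor."""
    gray = rgb_region.mean(-1)
    feats = [rgb_region[..., i] for i in range(3)]          # colour features
    feats += [np.abs(sobel(gray, axis=0)), np.abs(sobel(gray, axis=1))]  # edges
    F = np.stack([f.ravel() for f in feats])                # d x N feature matrix
    return np.cov(F)                                        # d x d descriptor
```

Matching two regions then reduces to comparing two small covariance matrices, which is what keeps the fusion cheap.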
Esparza, Cesar; Borisov, R S; Varlamov, A V; Zaikin, V G
2016-10-28
New composite matrices have been suggested for the analysis of mixtures of different synthetic organic compounds (N-containing heterocycles and erectile dysfunction drugs) by thin layer chromatography/matrix-assisted laser desorption ionization time-of-flight mass spectrometry (TLC/MALDI-TOF). Different mixtures of classical MALDI matrices and graphite particles dispersed in glycerol were used for the registration of MALDI mass spectra directly from TLC plates after analyte separation. In most cases, the mass spectra showed [M+H]+ ions; however, for some analytes only [M+Na]+ and [M+K]+ ions were observed. These ions were used to generate visualized TLC chromatograms. The described approach increases the desorption/ionization efficiency of analytes separated by TLC, prevents spot blurring, and simplifies and shortens sample preparation. Copyright © 2016 Elsevier B.V. All rights reserved.
Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M
2014-01-01
To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra with the largest differences for (103)Pd seeds and smallest but still considerable differences for (131)Cs seeds. Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques yield similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
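Of the four mitigation methods, simple threshold replacement is the easiest to picture. The sketch below is a hedged proxy: the threshold, window size, and the use of a capped local median in place of the paper's estimated true values are all assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def simple_threshold_replacement(ct_hu, hu_max=2000.0, size=5):
    """Replace implausibly high CT numbers with a local median estimate."""
    capped = np.minimum(ct_hu, hu_max)            # suppress streaks before filtering
    local = median_filter(capped, size=size)
    return np.where(ct_hu > hu_max, local, ct_hu)
```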
Blind restoration of retinal images degraded by space-variant blur with adaptive blur estimation
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Šroubek, Filip
2013-11-01
Retinal images are often degraded by blur that varies across the field of view. Because traditional deblurring algorithms assume the blur to be space-invariant, they typically fail in the presence of space-variant blur. In this work we consider the blur to be both unknown and space-variant. To carry out the restoration, we assume that in small regions the space-variant blur can be approximated by a space-invariant point-spread function (PSF). However, instead of deblurring the image on a per-patch basis, we extend individual PSFs by linear interpolation and perform a global restoration. Because the blind estimation of local PSFs may fail, we propose a strategy for identifying valid local PSFs and interpolate between them to obtain the space-variant PSF. The method was tested on both artificially degraded and real degraded retinal images. Results show significant improvement in the visibility of subtle details like small blood vessels.
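The PSF interpolation step can be sketched directly. Below, four valid corner PSFs are blended bilinearly to give the kernel at pixel (y, x); the grid layout is an assumption, and both PSF validity testing and the global deconvolution itself are omitted.

```python
import numpy as np

def interp_psf(psfs, y, x, height, width):
    """psfs: dict keyed by (row, col) in {0,1}^2 holding corner PSF arrays."""
    wy, wx = y / (height - 1), x / (width - 1)
    return ((1 - wy) * (1 - wx) * psfs[0, 0] + (1 - wy) * wx * psfs[0, 1]
            + wy * (1 - wx) * psfs[1, 0] + wy * wx * psfs[1, 1])
```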
Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J
2011-05-21
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
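Image-space resolution modelling slots into the EM update as a blur before forward projection and, as its adjoint, after backprojection. The sketch below uses a single spatially invariant Gaussian as a stand-in for the measured spatially-variant kernels, and a generic sparse geometric matrix A; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem_resmodel(A, proj, sigma_px, shape, n_iter=30, eps=1e-12):
    """A: sparse geometric matrix (bins x voxels); Gaussian blur is self-adjoint."""
    blur = lambda v: gaussian_filter(v.reshape(shape), sigma_px).ravel()
    sens = blur(A.T @ np.ones(A.shape[0]))        # adjoint of (project o blur)
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        ratio = proj / (A @ blur(x) + eps)
        x *= blur(A.T @ ratio) / (sens + eps)
    return x.reshape(shape)
```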
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagesh, S Setlur; Rana, R; Russ, M
Purpose: CMOS-based aSe detectors compared to CsI-TFT-based flat panels have the advantages of higher spatial sampling due to smaller pixel size and decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone beam computed tomography, clinical fluoroscopy and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected with focal-spot blur, first a set of high-resolution images of a stent (FRED from Microvention, Inc.) were acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. Then the averaged image was blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based, inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selection of too wide a function would cause severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pinhole with a high resolution detector. This spread function can be used to deblur input images acquired at corresponding magnifications to correct for the focal-spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions and then corrected for focal-spot blurring to improve resolution. Similarly, if object magnification can be determined, such correction may be applied in fluoroscopy and angiography.
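A hedged sketch of threshold-based inverse filtering for a Gaussian focal-spot blur: divide by the optical transfer function only where it is safely above a threshold, leaving the remaining frequencies untouched to avoid noise blow-up (sigma in pixels; threshold illustrative).

```python
import numpy as np

def inverse_filter(blurred, sigma, thresh=0.05):
    fy = np.fft.fftfreq(blurred.shape[0])[:, None]
    fx = np.fft.fftfreq(blurred.shape[1])[None, :]
    otf = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))  # Gaussian OTF
    H = np.where(otf > thresh, 1.0 / otf, 1.0)                # guarded inverse
    return np.fft.ifft2(np.fft.fft2(blurred) * H).real
```

Choosing too small a sigma underdeblurs; too large a sigma mirrors the "too wide a function" artifacts reported above.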
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are concentric rings superimposed on tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques, and the other category performs processing on the 2-D reconstructed images, recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed primarily for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class adaptive correction schemes is also included in this comparative study. The first sinogram domain correction method uses a wavelet based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing based correction techniques operate on the polar transform of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique to retain the image information (e.g., a small object at the iso-center) accurately in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411
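For orientation, the sinogram pre-processing family exploits the fact that a defective detector element produces a nearly constant offset along the angle axis. The minimal generic sketch below illustrates that idea only; it is not one of the five compared algorithms.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_rings(sino):
    """sino: (n_angles, n_detectors) sinogram."""
    col_mean = sino.mean(axis=0)                   # per-detector systematic offset
    smooth = median_filter(col_mean, size=9)       # keep genuine object structure
    return sino - (col_mean - smooth)[None, :]     # subtract only the ring pattern
```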
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Hosang; Park, Dahl; Kim, Wontaek
Purpose: The overall goal of this study is to restore kilovoltage computed tomography (kV-CT) images which are disfigured by patients’ metal prostheses. By generating a hybrid sinogram that is a combination of kV and megavoltage (MV) projection data, the authors suggest a novel metal artifact-reduction (MAR) method that retains the image quality to match that of kV-CT and simultaneously restores the information of metal prostheses lost due to photon starvation.

Methods: CT projection data contain information about attenuation coefficients and the total length of the attenuation. By normalizing raw kV projections with their own total lengths of attenuation, mean attenuation projections were obtained. In the same manner, mean density projections of MV-CT were obtained by the normalization of MV projections resulting from the forward projection of density-calibrated MV-CT images with the geometric parameters of the kV-CT device. To generate the hybrid sinogram, metal-affected signals of the kV sinogram were identified and replaced by the corresponding signals of the MV sinogram following a density calibration step with kV data. Filtered backprojection was implemented to reconstruct the hybrid CT image. To validate the authors’ approach, they simulated four different scenarios for three heads and one pelvis using metallic rod inserts within a cylindrical phantom. Five inserts describing human body elements were also included in the phantom. The authors compared the image qualities among the kV, MV, and hybrid CT images by measuring the contrast-to-noise ratio (CNR), the signal-to-noise ratio (SNR), the densities of all inserts, and the spatial resolution. In addition, the MAR performance was compared among three existing MAR methods and the authors’ hybrid method. Finally, for clinical trials, the authors produced hybrid images of three patients having dental metal prostheses to compare their MAR performances with those of the kV, MV, and three existing MAR methods.

Results: The authors compared the image quality and MAR performance of the hybrid method with those of other imaging modalities and the three MAR methods, respectively. The total measured mean of the CNR (SNR) values for the nonmetal inserts was determined to be 14.3 (35.3), 15.3 (37.8), and 25.5 (64.3) for the kV, MV, and hybrid images, respectively, and the spatial resolutions of the hybrid images were similar to those of the kV images. The measured densities of the metal and nonmetal inserts in the hybrid images were in good agreement with their true densities, except in cases of extremely low densities, such as air and lung. Using the hybrid method, major streak artifacts were suitably removed and no secondary artifacts were introduced in the resultant image. In clinical trials, the authors verified that kV and MV projections were successfully combined and turned into the resultant hybrid image with high image contrast, accurate metal information, and few metal artifacts. The hybrid method also outperformed the three existing MAR methods with regard to metal information restoration and secondary artifact prevention.

Conclusions: The authors have shown that the hybrid method can restore the overall image quality of kV-CT disfigured by severe metal artifacts and restore the information of metal prostheses lost due to photon starvation. The hybrid images may allow for the improved delineation of structures of interest and accurate dose calculations for radiation treatment planning for patients with metal prostheses.
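A minimal sketch of the sinogram-replacement step at the heart of this method, assuming the metal trace has already been identified and that a density calibration mapping MV values to kV-equivalent values is supplied; the names and the parallel-beam FBP call are illustrative, not the authors' implementation.

```python
import numpy as np
from skimage.transform import iradon

def hybrid_mar(kv_sino, mv_sino, metal_trace, theta, calibrate=lambda s: s):
    """Replace metal-affected kV sinogram bins with density-calibrated MV
    bins, then reconstruct with filtered backprojection (FBP).

    `metal_trace` is a boolean mask of photon-starved kV bins; the
    `calibrate` mapping stands in for the kV/MV density calibration."""
    hybrid = kv_sino.copy()
    hybrid[metal_trace] = calibrate(mv_sino[metal_trace])
    return iradon(hybrid, theta=theta, circle=True)
```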
Recognition of blurred images by the method of moments.
Flusser, J; Suk, T; Saic, S
1996-01-01
The article is devoted to the feature-based recognition of blurred images acquired by a linear shift-invariant imaging system against an image database. The proposed approach consists of describing images by features that are invariant with respect to blur and recognizing images in the feature space. The PSF identification and image restoration are not required. A set of symmetric blur invariants based on image moments is introduced. A numerical experiment is presented to illustrate the utilization of the invariants for blurred image recognition. Robustness of the features is also briefly discussed.
Reconstruction of noisy and blurred images using blur kernel
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Chopra, Vishal
2017-11-01
Blur is common in digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing image coordinates from coarse to fine, we estimate the kernel from the image coordinates and, from that observation, obtain the motion angle of the shaken (blurred) image. We then calculate the length of the motion kernel using the Radon transform and the Fourier spectrum of the image, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) method, to obtain a cleaner, less noisy output image. All operations are performed in MATLAB.
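The pipeline the abstract describes, estimating the motion angle from the spectrum, building a line PSF, and deconvolving with Lucy-Richardson, can be sketched as below. The variance-of-projections angle criterion and the helper names are assumptions, and the blur length (from the Radon/Fourier step) is taken as given.

```python
import numpy as np
from skimage.restoration import richardson_lucy
from skimage.transform import radon

def estimate_motion_angle(blurred):
    """Linear motion blur leaves parallel dark stripes in the log power
    spectrum; project the spectrum at all angles and pick the angle
    whose projection varies most (an assumed but common criterion)."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    angles = np.arange(0.0, 180.0)
    sino = radon(spec, theta=angles, circle=False)
    return angles[int(np.argmax(sino.var(axis=0)))]

def motion_psf(length, angle_deg, size=31):
    """Rasterize a normalized line PSF of given length and angle."""
    psf = np.zeros((size, size))
    c = size // 2
    t = np.linspace(-length / 2.0, length / 2.0, 4 * size)
    a = np.deg2rad(angle_deg)
    rows = np.clip(np.round(c + t * np.sin(a)).astype(int), 0, size - 1)
    cols = np.clip(np.round(c + t * np.cos(a)).astype(int), 0, size - 1)
    psf[rows, cols] = 1.0
    return psf / psf.sum()

# Non-blind deconvolution once the kernel is known (30 iterations):
# sharp = richardson_lucy(blurred,
#                         motion_psf(13, estimate_motion_angle(blurred)), 30)
```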
Compensation for Blur Requires Increase in Field of View and Viewing Time
Kwon, MiYoung; Liu, Rong; Chien, Lillian
2016-01-01
Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image such as letter or face, was manipulated with a low-pass filter. In experiment 1, studying spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). Field of view requirement, quantified as the number of “views” (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered for developing low vision rehabilitation or assistive aids. PMID:27622710
Restoration of motion blurred image with Lucy-Richardson algorithm
NASA Astrophysics Data System (ADS)
Li, Jing; Liu, Zhao Hui; Zhou, Liang
2015-10-01
Images are blurred by relative motion between the camera and the object of interest. In this paper, we analyze the formation of motion-blurred images and demonstrate a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by the Radon transform and the auto-correlation function, respectively, and the point spread function (PSF) of the motion-blurred image can then be obtained. With the help of the obtained PSF, the Lucy-Richardson restoration algorithm is applied to motion-blurred images with different blur extents, spatial resolutions, and signal-to-noise ratios (SNRs), and its effectiveness is evaluated by the structural similarity (SSIM) index. Further studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels; that is, the method compensates the low-frequency information of the image while attenuating the high-frequency information. Second, we found that the method is more effective when the product of the blur extent and the spatial frequency is smaller than 3.75. Finally, by calculating the MTF of the restored image, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise with variance no larger than 0.1.
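The quality evaluation here relies on SSIM; a minimal sketch with scikit-image (the data_range choice is an assumption, not the paper's exact setup):

```python
from skimage.metrics import structural_similarity

def restoration_quality(reference, restored):
    """Score a restored image against its sharp reference with SSIM;
    data_range is taken from the reference image's dynamic range."""
    return structural_similarity(
        reference, restored,
        data_range=float(reference.max() - reference.min()))
```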
Optical security verification for blurred fingerprints
NASA Astrophysics Data System (ADS)
Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.
1998-12-01
Optical fingerprint security verification is gaining popularity, as it has the potential to perform correlation at the speed of light. With advances in optical security verification techniques, the authentication process can be made almost foolproof and reliable for financial transactions, banking, etc. In law enforcement, a fingerprint obtained from a crime scene may be blurred and is then a poor candidate for correlation purposes; the blurred fingerprint therefore needs to be clarified before it is used in the correlation process. There are several different types of blur, such as linear motion blur and defocus blur induced by aberrations of the imaging system, and the blur function may or may not be known. In this paper, we propose non-singularity inverse filtering in the frequency/power domain for deblurring known motion-induced blur in fingerprints. This filtering process is incorporated with the power spectrum subtraction technique, a uniqueness comparison scheme, and the separated target and reference planes method in the joint transform correlator. The proposed hardware implementation is a hybrid electronic-optical correlator system. The performance of the proposed system is verified with computer simulation for both cases: with and without additive random noise corruption.
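A minimal sketch of the non-singular inverse filtering idea: invert the blur transfer function only where its magnitude is safely above zero, leaving near-singular frequency bins untouched. The cutoff eps and the function name are assumptions.

```python
import numpy as np

def nonsingular_inverse_filter(blurred, psf, eps=1e-2):
    """Frequency-domain inverse filter 1/H applied only where |H| > eps,
    so zeros of the transfer function do not blow up the noise."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    Hinv = np.ones_like(H)                  # pass-through at singular bins
    np.divide(1.0, H, out=Hinv, where=np.abs(H) > eps)
    return np.real(np.fft.ifft2(G * Hinv))
```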
Using Blur to Affect Perceived Distance and Size
HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.
2011-01-01
We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429
Restoration of retinal images with space-variant blur.
Marrugo, Andrés G; Millán, María S; Sorel, Michal; Sroubek, Filip
2014-01-01
Retinal images are essential clinical resources for the diagnosis of retinopathy and many other ocular diseases. Because of improper acquisition conditions or inherent optical aberrations in the eye, the images are often degraded with blur. In many common cases, the blur varies across the field of view. Most image deblurring algorithms assume a space-invariant blur, which fails in the presence of space-variant (SV) blur. In this work, we propose an innovative strategy for the restoration of retinal images in which we consider the blur to be both unknown and SV. We model the blur by a linear operation interpreted as a convolution with a point-spread function (PSF) that changes with the position in the image. To achieve an artifact-free restoration, we propose a framework for a robust estimation of the SV PSF based on an eye-domain knowledge strategy. The restoration method was tested on artificially and naturally degraded retinal images. The results show an important enhancement, significant enough to leverage the images' clinical use.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Blurred image recognition by Legendre moment invariants
Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis
2010-01-01
Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
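The invariants are built from orthogonal Legendre moments; a crude sketch of computing the raw moment matrix is given below (the discretized normalization and function name are assumptions, and this is not the paper's invariant construction).

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_moments(img, order=4):
    """Legendre moments L_{mn}, m,n <= order, of an image mapped onto
    [-1,1]x[-1,1]. Entry (m,n) approximates
    ((2m+1)(2n+1)/4) * integral of P_m(y) P_n(x) f(y,x)."""
    h, w = img.shape
    Py = legvander(np.linspace(-1.0, 1.0, h), order)   # (h, order+1)
    Px = legvander(np.linspace(-1.0, 1.0, w), order)   # (w, order+1)
    deg = 2 * np.arange(order + 1) + 1
    norm = np.outer(deg, deg) / 4.0
    return norm * (Py.T @ img @ Px) * (2.0 / h) * (2.0 / w)
```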
Restoration of non-uniform exposure motion blurred image
NASA Astrophysics Data System (ADS)
Luo, Yuanhong; Xu, Tingfa; Wang, Ningming; Liu, Feng
2014-11-01
Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared sensors mounted on moving platforms travel with the platforms at high speed, and the acquired images consequently become blurred. This degradation causes great trouble for subsequent tasks such as object detection, target recognition, and tracking, so the motion-blurred images must be restored before motion targets can be detected in subsequent frames. Driven by the demands of real weapon tasks and the need to handle targets in complex backgrounds, this work applies new theories from image processing and computer vision to motion deblurring and motion detection. The principal content is as follows: 1) When prior knowledge about the degradation function is unknown, uniformly motion-blurred images are restored. First, the blur parameters of the PSF (point spread function), namely the motion blur extent and direction, are estimated individually in the logarithmic frequency domain. The direction of the PSF is calculated by extracting the central light line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detecting function. Moreover, to remove striping in the deblurred image, a windowing technique is employed, which makes the deblurred image clear. 2) Based on the principle of infrared image non-uniform exposure, a new restoration model for blurred infrared images is developed. The non-uniform exposure curve of the infrared image is fitted to experimental data, and the blurred images are restored using the fitted curve.
Edge Modeling by Two Blur Parameters in Varying Contrasts.
Seo, Suyoung
2018-06-01
This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
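A sketch of the estimation step: a hypothetical two-parameter edge profile (separate Gaussian widths on the dark and light sides, an assumed functional form) is fitted to an observed profile by brute-force grid search for the global RMSE minimum, mirroring the procedure described above.

```python
import numpy as np
from scipy.special import erf

def edge_model(x, base, contrast, s_dark, s_light):
    """Hypothetical two-sigma edge: a step of `contrast` on `base`,
    blurred with different widths on the dark and light sides."""
    s = np.where(x < 0, s_dark, s_light)
    return base + contrast * 0.5 * (1.0 + erf(x / (np.sqrt(2.0) * s)))

def fit_edge(x, profile, sigmas=np.linspace(0.3, 5.0, 48)):
    """Brute-force search over (s_dark, s_light) minimizing RMSE."""
    base = profile.min()
    contrast = profile.max() - base
    best, best_err = None, np.inf
    for sd in sigmas:
        for sl in sigmas:
            fit = edge_model(x, base, contrast, sd, sl)
            err = np.sqrt(np.mean((fit - profile) ** 2))
            if err < best_err:
                best, best_err = (sd, sl), err
    return best, best_err
```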
Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji
2016-04-01
This study investigates the influence of structure depth on image blurring in micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometer gold particles embedded in thick epoxy-resin films were acquired in the experiment and compared with simulated images. Then, variations in image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that image blurring increased with decreasing depth, and this depth-related property was more apparent for thicker specimens. Fortunately, larger particle depth involves less image blurring, even for a 10-μm-thick epoxy-resin film. The dependence of 3D reconstruction quality on the depth of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing the electron energy to 2 MeV can reduce blurring and produce acceptable image quality for thick specimens in the TEM.
Addressing the third gamma problem in PET
NASA Astrophysics Data System (ADS)
Schueller, M. J.; Mulnix, T. L.; Christian, B. T.; Jensen, M.; Holm, S.; Oakes, T. R.; Roberts, A. D.; Dick, D. W.; Martin, C. C.; Nickles, R. J.
2003-02-01
PET brings the promise of quantitative imaging of the in-vivo distribution of any positron emitting nuclide, a list with hundreds of candidates. All but a few of these, the "pure positron" emitters, have isotropic coincident gamma rays that give rise to misrepresented events in the sinogram and in the resulting reconstructed image. Of particular interest are ¹⁰C, ¹⁴O, ³⁸K, ⁵²ᵐMn, ⁶⁰Cu, ⁶¹Cu, ⁹⁴ᵐTc, and ¹²⁴I, each having high-energy gammas that are Compton-scattered down into the 511 keV window. The problems arising from the "third gamma," and its accommodation by standard scatter correction algorithms, were studied empirically, employing three scanner models (CTI 933/04, CTI HR+ and GE Advance), imaging three phantoms (line source, NEMA scatter and contrast/detail), with ¹⁸F or ³⁸K and ⁷²As mimicking ¹⁴O and ¹⁰C, respectively, in 2-D and 3-D modes. Five findings emerge directly from the image analysis. The third gamma: 1) does, obviously, tax the single event rate of the PET scanners, particularly in the absence of septa, from activity outside of the axial field of view; 2) does, therefore, tax the random rate, which is second order in singles, although the gamma is a prompt coincidence partner; 3) does enter the sinogram as an additional flat background, like randoms, but unlike scatter; 4) is not seriously misrepresented by the scatter algorithm which fits the correction to the wings of the sinogram; and 5) does introduce additional statistical noise from the subsequent subtraction, but does not seriously compromise the detectability of lesions as seen in the contrast/detail phantom. As a safeguard against the loss of accuracy in image quantitation, fiducial sources of known activity are included in the field of view alongside of the subject. With this precaution, a much wider selection of imaging agents can enjoy the advantages of positron emission tomography.
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a practical CT ring artifact reduction method based on projection data correction, aiming at estimating the missing projection data accurately and thus removing the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 × 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%; the dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
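The ten steps map naturally onto a short loop. The sketch below assumes a parallel-beam geometry so that scikit-image's radon/iradon pair can stand in for the scanner's forward and backprojection; all names are illustrative, and the dead-bin mask is taken as already identified (step 1).

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import iradon, radon

def correct_dead_bins(sino, bad, theta, n_iter=3):
    """Iterative prior-image dead-bin correction, following the ten
    steps above. `sino` is (n_detectors, n_angles); `bad` is a boolean
    mask over detector bins (assumed not all True)."""
    def interp_bins(s):
        out = s.copy()
        good = ~bad
        idx = np.arange(s.shape[0])
        for j in range(s.shape[1]):
            out[bad, j] = np.interp(idx[bad], idx[good], s[good, j])
        return out

    corrected = interp_bins(sino)                         # step 2
    for _ in range(n_iter):                               # steps 3-10
        img = iradon(corrected, theta=theta, circle=True)  # FBP
        img = uniform_filter(img, size=3)                  # mean filter
        reproj = radon(img, theta=theta, circle=True)      # forward proj.
        resid = interp_bins(sino - reproj)                 # steps 6-7
        corrected = reproj + resid                         # step 8
    return iradon(corrected, theta=theta, circle=True)     # step 9
```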
NASA Astrophysics Data System (ADS)
Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia
2016-06-01
A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.
2015-01-01
We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793
Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Warren G.; Jirasek, Andrew, E-mail: jirasek@uvic.ca; Wells, Derek M.
2014-11-01
Purpose: The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels.

Methods: A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm² square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky-Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter.

Results: In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off of the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG. For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam.

Conclusions: This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
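A minimal sketch of an iterative Savitzky-Golay (ISG) pass over a projection row, under assumed window, polynomial order, and outlier-threshold choices; the paper's exact routine and its 2D handling are not specified here.

```python
import numpy as np
from scipy.signal import savgol_filter

def isg_filter(projection, window=15, order=3, n_iter=5, k=3.0):
    """Iterative Savitzky-Golay filtering of projection data: fit a
    smooth curve along each detector row, flag samples deviating by
    more than k standard deviations of the residual, replace them with
    the fit, and repeat. The k-sigma rule is an assumption."""
    p = np.asarray(projection, dtype=float).copy()
    for _ in range(n_iter):
        fit = savgol_filter(p, window, order, axis=-1)
        resid = p - fit
        bad = np.abs(resid) > k * resid.std()
        if not bad.any():
            break
        p[bad] = fit[bad]
    return p
```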
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca
2014-01-15
Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions.

Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for ¹²⁵I, ¹⁰³Pd, and ¹³¹Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures.

Results: Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for ¹⁰³Pd seeds and the smallest, but still considerable, differences for ¹³¹Cs seeds.

Conclusions: Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques yield similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
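A sketch of the simplest of the four mitigation schemes, simple threshold replacement (STR), under assumed threshold and neighborhood choices; the seed-vicinity mask is taken as given, and the names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def simple_threshold_replacement(ct, seed_vicinity, hu_max=2000.0, size=5):
    """STR sketch: CT values above a plausibility threshold near the
    seeds are replaced by the local median, which is dominated by
    unaffected neighboring voxels. `ct` is a 3D HU volume and
    `seed_vicinity` a boolean mask around the implanted seeds."""
    out = ct.copy()
    artifact = seed_vicinity & (ct > hu_max)
    out[artifact] = median_filter(ct, size=size)[artifact]
    return out
```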
Tchebichef moment based restoration of Gaussian blurred images.
Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C
2016-11-10
With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
Processing of configural and componential information in face-selective cortical areas.
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
2014-01-01
We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
Effective 3-D shape discrimination survives retinal blur.
Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M
2010-08-01
A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.
Photographic Image Restoration
NASA Technical Reports Server (NTRS)
Hite, Gerald E.
1991-01-01
Deblurring capabilities would significantly improve the Flight Science Support Office's ability to monitor the effects of lift-off on the shuttle and landing on the orbiter. A deblurring program was written and implemented to extract information from blurred images containing a straight line or edge and to use that information to deblur the image. The program was successfully applied to an image blurred by improper focussing and two blurred by different amounts of blurring. In all cases, the reconstructed modulation transfer function not only had the same zero contours as the Fourier transform of the blurred image but the associated point spread function also had structure not easily described by simple parameterizations. The difficulties posed by the presence of noise in the blurred image necessitated special consideration. An amplitude modification technique was developed for the zero contours of the modulation transfer function at low to moderate frequencies and a smooth filter was used to suppress high frequency noise.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring reduces to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
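In symbols, a plausible form of the hybrid kernel-estimation objective described above is the following; the gradient-domain data term, the weights, and the simplex constraint are assumptions, not the paper's exact formulation:

```latex
\hat{k} \;=\; \arg\min_{k \ge 0,\;\sum_i k_i = 1}\;
  \tfrac{1}{2}\bigl\| \nabla y - k \ast \nabla x \bigr\|_2^2
  \;+\; \lambda_1 \|k\|_1
  \;+\; \lambda_2 \|\nabla k\|_2^2
```

Here the L1 term promotes spatial sparsity of the kernel, while the squared L2 penalty on its derivative favors the piecewise-smooth, curve-supported structure the abstract describes; both penalties yield separable subproblems under ADMM splitting.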
Suryakumar, Rajaraman; Meyers, Jason P; Irving, Elizabeth L; Bobier, William R
2007-02-01
Retinal blur and disparity are two different sensory signals known to cause a change in accommodative response. These inputs have differing neurological correlates that feed into a final common pathway. The purpose of this study was to investigate the dynamic properties of monocular blur driven accommodation and binocular disparity driven vergence-accommodation (VA) in human subjects. The results show that when response amplitudes are matched, blur accommodation and VA share similar dynamic properties.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
Sub-Lexical Phonological and Semantic Processing of Semantic Radicals: A Primed Naming Study
ERIC Educational Resources Information Center
Zhou, Lin; Peng, Gang; Zheng, Hong-Ying; Su, I-Fan; Wang, William S.-Y.
2013-01-01
Most sinograms (i.e., Chinese characters) are phonograms (phonetic compounds). A phonogram is composed of a semantic radical and a phonetic radical, with the former usually implying the meaning of the phonogram, and the latter providing cues to its pronunciation. This study focused on the sub-lexical processing of semantic radicals which are…
Simulated disparity and peripheral blur interact during binocular fusion
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-01-01
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. PMID:25034260
[Fuzzy logic in urology: how to reason with imprecise terms].
Vírseda Chamorro, Miguel; Salinas Casado, Jesus; Vázquez Alba, David
2004-05-01
Western thinking is basically binary, based on opposites, and classical logic constitutes a systematization of this way of thinking. The methods of the exact sciences, such as physics, are based on systematic measurement, analysis, and synthesis; nature is thereby described by deterministic differential equations. Medical knowledge does not fit the deterministic equations of physics well, so probabilistic methods are employed instead. This approach, however, is not free of problems, both theoretical and practical; often it is not even possible to know with certainty the probabilities of most events. On the other hand, the application of binary logic to medicine in general, and to urology in particular, meets serious difficulties, such as the imprecise definitions of most diseases and the uncertainty associated with most medical acts. These are responsible for the fact that many medical recommendations are expressed in literary language that is imprecise, inconsistent, and incoherent. Fuzzy logic is a way of reasoning coherently with imprecise concepts. It was proposed by Lotfi Zadeh in 1965 and rests on two principles: the theory of fuzzy sets and the use of fuzzy rules. A fuzzy set is one whose elements have a degree of membership between 0 and 1, and each fuzzy set is associated with an imprecise property or linguistic variable. Fuzzy rules apply the principles of classical logic, adapted to fuzzy sets, taking the degree of membership of each element in the reference fuzzy set as its truth value. Fuzzy logic makes it possible to formulate coherent urological recommendations (e.g., in which patients is a PSA test indicated? what should be done about an elevated PSA?) and to perform diagnoses adapted to the uncertainty of diagnostic tests (e.g., data obtained from pressure-flow studies in women).
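As a toy illustration of a fuzzy set (not from the paper; the cutoffs are invented purely for illustration and are not clinical guidance), a membership function for the linguistic variable "elevated PSA" might look like this:

```python
def elevated_psa_membership(psa_ng_ml: float) -> float:
    """Degree of membership in the fuzzy set 'elevated PSA': 0 below
    4 ng/ml, 1 above 10 ng/ml, linear in between. Illustrative only."""
    return min(1.0, max(0.0, (psa_ng_ml - 4.0) / 6.0))
```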
Edge roughness evaluation method for quantifying at-size beam blur in electron-beam lithography
NASA Astrophysics Data System (ADS)
Yoshizawa, Masaki; Moriya, Shigeru
2000-07-01
At-size beam blur at any given pattern size of an electron beam (EB) direct writer, HL800D, was quantified using the new edge roughness evaluation (ERE) method to optimize the electron-optical system. We characterized the two-dimensional beam-blur dependence on the electron deflection length of the EB direct writer. The results indicate that the beam blur ranged from 45 nm to 56 nm in a deflection field 2520 micrometer square. The new ERE method is based on the experimental finding that line edge roughness of a resist pattern is inversely proportional to the slope of the Gaussian-distributed quasi-beam-profile (QBP) proposed in this paper. The QBP includes effects of the beam blur, electron forward scattering, acid diffusion in chemically amplified resist (CAR), the development process, and aperture mask quality. The application the ERE method to investigating the beam-blur fluctuation demonstrates the validity of the ERE method in characterizing the electron-optical column conditions of EB projections such as SCALPEL and PREVAIL.
Photographic image enhancement
NASA Technical Reports Server (NTRS)
Hite, Gerald E.
1990-01-01
Deblurring capabilities would significantly improve the scientific return from Space Shuttle crew-acquired images of the Earth and the safety of Space Shuttle missions. Deblurring techniques were developed and demonstrated on two digitized images that were blurred in different ways. The first was blurred by a Gaussian blurring function analogous to that caused by atmospheric turbulence, while the second was blurred by improper focussing. It was demonstrated, in both cases, that the nature of the blurring (Gaussian and Airy) and the appropriate parameters could be obtained from the Fourier transform of their images. The difficulties posed by the presence of noise necessitated special consideration. It was demonstrated that a modified Wiener frequency filter, judiciously constructed to avoid overemphasis of frequency regions dominated by noise, resulted in substantially improved images. Several important areas of future research were identified. Two areas of particular promise are the extraction of blurring information directly from the spatial images, and improved noise abatement from investigations of select spatial regions and the elimination of spike noise.
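A minimal sketch of a Wiener filter of the kind described, with a scalar noise-to-signal ratio standing in for the frequency-dependent weighting the text motivates; the names and the constant are assumptions.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=1e-2):
    """Wiener deconvolution: W = conj(H) / (|H|^2 + NSR). A frequency-
    dependent NSR could down-weight noise-dominated bands, as suggested
    above; a scalar is used here for brevity."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * W))
```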
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as by strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably, relieving the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is modeled on human detection of blur: humans judge sharpness best by comparing an image to other images in order to establish whether it is blurred or not. The developed algorithm simulates this procedure by creating a comparison image using image processing; creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared with other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets, two of which are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast, and the returned values agree with visual inspection, making the algorithm applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends its field of application.
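A sketch of the comparison-image idea behind SIEDS, with assumed details (the saturation definition, the Gaussian re-blur width, and Sobel edges); the published metric may differ. Sharp originals change more under re-blurring, so they score higher.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def sieds(rgb):
    """Blur metric in the spirit of SIEDS: the standard deviation of
    the difference between the edge image of the saturation channel and
    the edge image of an internally re-blurred copy of it."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    edges = lambda im: np.hypot(sobel(im, axis=0), sobel(im, axis=1))
    return float(np.std(edges(sat) - edges(gaussian_filter(sat, sigma=2.0))))
```

As the abstract notes, such a value is only meaningful relative to other values from the same dataset, e.g., by flagging images whose score falls well below the dataset median.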
Storying Literacies, Reimagining Classrooms: Teaching, Research, and Writing as Blurred Translating
ERIC Educational Resources Information Center
McManimon, Shannon K.
2014-01-01
I theorize teaching and researching as practices of "blurred translating" that center antioppressive education (Kumashiro, 2002) and storytelling (e.g., Frank, 2010; Zipes, 1995, 2004). Based in listening, research and teaching as blurred translating are relational, contextual, and ongoing processes oriented toward transformation and…
Blur and the School Library Media Specialist.
ERIC Educational Resources Information Center
Barron, Daniel D.
1999-01-01
Discusses the concept of "Blur" (described in "Blur: The Speed of Change in the Connected Economy") and what the technology-based, expanded connectivity means for K-12 educators and information specialists. Reviews online and print resources that deal with the rapid development of technology and its effects on society. (AEF)
Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.
Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D
2015-01-01
Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.
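A minimal sketch of the tile-wise idea described above, with the variance of the Laplacian standing in for the paper's local blur metrics and an assumed, dataset-tuned threshold:

```python
# Classify a WSI region as sharp or blurry from the distribution of a
# per-tile blur metric (variance of the Laplacian is our stand-in metric).
import numpy as np
from scipy import ndimage

def tile_blur_scores(gray, tile=256):
    """Variance of the Laplacian for each non-overlapping tile."""
    lap = ndimage.laplace(gray.astype(float))
    h, w = gray.shape
    return np.array([
        lap[y:y + tile, x:x + tile].var()
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ])

def classify_region(gray, threshold=50.0):
    # A low median tile score indicates a blurry region (threshold assumed).
    return "blurry" if np.median(tile_blur_scores(gray)) < threshold else "sharp"
```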
Image thumbnails that represent blur and noise.
Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H
2010-02-01
The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise-generating component improves the results for noisy images, but degrades the results for textured images. The blur-generating component of the new thumbnails may always be used to advantage. The decision to use the noise-generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
Management of thoracic empyema.
Sherman, M M; Subramanian, V; Berger, R L
1977-04-01
Over a ten-year period, 102 patients with thoracic empyemata were treated at Boston City Hospital. Only three patients died from the pleural infection, while twenty-six succumbed to associated diseases. Principles of management include: (1) thoracentesis; (2) antibiotics; (3) closed-tube thoracostomy; (4) sinogram; (5) open drainage; (6) empyemectomy and decortication in selected patients; and (7) bronchoscopy and barium swallow when the etiology is uncertain.
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that requires no rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy of the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most suitable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
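The sketch below shows cubic-spline filling of empty sinogram cells along each view, assuming empty cells are marked as NaN; the OPED-specific cell layout is abstracted away.

```python
# Fill empty (NaN) sinogram cells row-by-row with cubic splines.
import numpy as np
from scipy.interpolate import CubicSpline

def fill_sinogram_rows(sino):
    """Interpolate NaN cells in each row (one row = one fan view)."""
    out = sino.copy()
    cols = np.arange(sino.shape[1])
    for i, row in enumerate(out):
        known = ~np.isnan(row)
        if known.all() or known.sum() < 4:
            continue  # nothing to fill, or too few samples for a cubic spline
        spline = CubicSpline(cols[known], row[known])
        out[i, ~known] = spline(cols[~known])
    return out
```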
The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pita-Machado, Reinado; Perez-Diaz, Marlen, E-mail: mperez@uclv.edu.cu; Lorenzo-Ginori, Juan V., E-mail: mperez@uclv.edu.cu
Wavelet-transform-based de-noising, such as wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some approaches are built into reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods that work in sinogram space do not have this problem, because at that point the noise distribution is known. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise that are not eliminated during the reconstruction procedure. This can lead to false-positive evaluations. The purpose of the present work is to compare different wavelet shrinkage de-noising filters for reducing noise in sinogram space, particularly in images of the posterior fossa within CT scans. The work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
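A minimal sketch of wavelet-shrinkage de-noising of a sinogram follows, assuming PyWavelets and a universal soft threshold estimated from the finest diagonal subband; the paper compares several wavelet families rather than fixing one, and its noise model is Poisson rather than Gaussian.

```python
# Soft wavelet shrinkage of a sinogram with a universal threshold.
import numpy as np
import pywt

def wavelet_denoise(sinogram, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(sinogram, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(sinogram.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    # Note: the output can be one pixel larger for odd-sized inputs.
    return pywt.waverec2(denoised, wavelet)
```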
Effects of Different Levels of Refractive Blur on Nighttime Pedestrian Visibility.
Wood, Joanne M; Marszalek, Ralph; Carberry, Trent; Lacherez, Philippe; Collins, Michael J
2015-07-01
The aim of this study was to systematically investigate the effect of different levels of refractive blur and driver age on nighttime pedestrian recognition, and to determine whether clothing that has been shown to improve pedestrian conspicuity is robust to the effects of blur. Nighttime pedestrian recognition was measured for 24 visually normal participants (12 younger, mean = 24.9 ± 4.5 years, and 12 older adults, mean = 77.6 ± 5.7 years) for three levels of binocular blur (+0.50 diopter [D], +1.00 D, +2.00 D) compared with baseline (optimal refractive correction). Pedestrians walked in place on a closed road circuit and wore one of three clothing conditions: everyday clothing, a retro-reflective vest, and retro-reflective tape positioned on the extremities in a configuration that conveyed biological motion (known as "biomotion"); the order of conditions was randomized among participants. Pedestrian recognition distances were recorded for each blur and pedestrian clothing combination while participants drove an instrumented vehicle around a closed road course. The recognition distances for pedestrians were significantly reduced (P < 0.05) by all levels of blur compared with baseline. Pedestrians wearing biomotion clothing were recognized at significantly longer distances than the other clothing configurations in all blur conditions. However, these effects were smaller for the older adults, who had much shorter recognition distances for all conditions tested. In summary, even small amounts of blur had a significant detrimental effect on nighttime pedestrian recognition. Biomotion retro-reflective clothing was effective, even under moderately degraded visibility conditions, for both young and older drivers.
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capture. These devices are embedded in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference image, in which case Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not applicable if there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, which decomposes it into the approximate image and three detail sub-images, namely horizontal, vertical, and diagonal images. We then measure noise in the detail images and blur in the approximate image to assess the image quality, computing noise mean and noise ratio from the detail images, and blur mean and blur ratio from the approximate image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can compare against the statistics of known-quality images to assess image quality without needing a reference image. We then test the validity of the obtained weights by R² analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
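The sketch below computes the raw statistics described above from a single wavelet level. The exact definitions of the four statistics and the MBD weighting are regression-derived in the paper, so the formulas used here are illustrative assumptions.

```python
# One DWT level splits the image into an approximate image and three detail
# sub-images; noise statistics come from the details, blur statistics from
# the gradients of the approximation.
import numpy as np
import pywt

def mbd_statistics(gray):
    approx, (horiz, vert, diag) = pywt.dwt2(gray.astype(float), "haar")
    details = np.concatenate([horiz.ravel(), vert.ravel(), diag.ravel()])
    grad = np.hypot(*np.gradient(approx))
    return {
        "noise_mean": float(np.mean(np.abs(details))),
        "noise_ratio": float(np.mean(np.abs(details) > np.std(details))),
        "blur_mean": float(np.mean(grad)),
        "blur_ratio": float(np.mean(grad < np.mean(grad))),
    }
```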
Preliminary Validation of the Work-Family Integration-Blurring Scale
ERIC Educational Resources Information Center
Desrochers, Stephan; Hilton, Jeanne M.; Larwood, Laurie
2005-01-01
Several studies of telecommuting and working at home have alluded to the blurring line between work and family that can result from such highly integrated work-family arrangements. However, little is known about working parents' perceptions of the integration and blurring of their work and family roles. In this study, the authors created and…
Blur adaptation: contrast sensitivity changes and stimulus extent.
Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda
2015-05-01
A prolonged exposure to foveal defocus is well known to affect visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in the contrast sensitivity function from baseline following blur adaptation to small as well as laterally extended stimuli in four subjects. The small-field stimulus (7.5° visual field) was a 30-min video of forest scenery projected on a screen, and the large-field stimulus consisted of seven tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00 D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation depends on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Identification of Piecewise Linear Uniform Motion Blur
NASA Astrophysics Data System (ADS)
Patanukhom, Karn; Nishihara, Akinori
A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models, which consist of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator and a motion combination selector. To identify the motion directions, the scheme relies on trial restorations using directional forward ramp motion blurs along different directions, and on an analysis of directional information in the frequency domain using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed to estimate the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restored results. Experimental examples on simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.
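A minimal sketch of the motion-length step follows: the autocorrelation of the image derivative taken along the motion direction shows a pronounced negative dip at the blur length. Direction handling is simplified to the horizontal case for clarity; the multi-component selection stage is not reproduced.

```python
# Estimate a linear motion blur length from the autocorrelation of the
# image derivative along the (already estimated) motion direction.
import numpy as np

def motion_length_horizontal(gray, max_lag=60):
    d = np.diff(gray.astype(float), axis=1)  # derivative along the motion
    d -= d.mean()
    # Average 1-D autocorrelation across rows for lags 0..max_lag.
    acf = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        acf[lag] = (d[:, : d.shape[1] - lag] * d[:, lag:]).mean()
    acf /= acf[0]
    # The lag of the strongest negative dip estimates the blur length.
    return int(np.argmin(acf[1:]) + 1)
```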
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited, and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
Formula for the rms blur circle radius of Wolter telescope based on aberration theory
NASA Technical Reports Server (NTRS)
Shealy, David L.; Saha, Timo T.
1990-01-01
A formula for the rms blur circle for Wolter telescopes has been derived using the transverse ray aberration expressions of Saha (1985), Saha (1984), and Saha (1986). The resulting formula for the rms blur circle radius over an image plane, and a formula for the surface of best focus based on third-, fifth-, and seventh-order aberration theory, predict results in good agreement with exact ray tracing. It has also been shown that one of the two terms in the empirical formula of VanSpeybroeck and Chase (1972) for the rms blur circle radius of a Wolter I telescope can be justified by the aberration theory results. Numerical results are given comparing the rms blur radius and the surface of best focus versus the half-field angle computed by skew ray tracing and from analytical formulas for grazing incidence Wolter I-II telescopes and a normal incidence Cassegrain telescope.
Blurred Star Image Processing for Star Sensors under Dynamic Conditions
Zhang, Weina; Quan, Wei; Guo, Lei
2012-01-01
The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large angular rate. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the blurred star map due to large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
Multichannel blind deconvolution of spatially misaligned images.
Sroubek, Filip; Flusser, Jan
2005-07-01
Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
Slight Blurring in Newer Image from Mars Orbiter
2018-02-09
These two frames were taken of the same place on Mars by the same orbiting camera before (left) and after some images from the camera began showing unexpected blur. The images are from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. They show a patch of ground about 500 feet (150 meters) wide in Gusev Crater. The one on the left, from HiRISE observation ESP_045173_1645, was taken March 16, 2016. The one on the right was taken Jan. 9, 2018. Gusev Crater, at 15 degrees south latitude and 176 degrees east longitude, is the landing site of NASA's Spirit Mars rover in 2004 and a candidate landing site for a rover to be launched in 2020. HiRISE images provide important information for evaluating potential landing sites. The smallest boulders with measurable diameters in the left image are about 3 feet (90 centimeters) wide. In the blurred image, the smallest measurable boulders are about double that width. As of early 2018, most full-resolution images from HiRISE are not blurred, and the cause of the blur is still under investigation. Even before blurred images were first seen in 2017, observations with HiRISE commonly used a technique that covers more ground area at half the resolution. This still shows features smaller than can be distinguished with any other camera orbiting Mars, and little blurring has appeared in these images. https://photojournal.jpl.nasa.gov/catalog/PIA22215
Blur and the perception of depth at occlusions.
Zannoli, Marina; Love, Gordon D; Narain, Rahul; Banks, Martin S
2016-01-01
The depth ordering of two surfaces, one occluding the other, can in principle be determined from the correlation between the occlusion border's blur and the blur of the two surfaces. If the border is blurred, the blurrier surface is nearer; if the border is sharp, the sharper surface is nearer. Previous research has found that observers do not use this informative cue. We reexamined this finding. Using a multiplane display, we confirmed the previous finding: Our observers did not accurately judge depth order when the blur was rendered and the stimulus presented on one plane. We then presented the same simulated scenes on multiple planes, each at a different focal distance, so the blur was created by the optics of the eye. Performance was now much better, which shows that depth order can be reliably determined from blur information but only when the optical effects are similar to those in natural viewing. We asked what the critical differences were in the single- and multiplane cases. We found that chromatic aberration provides useful information but accommodative microfluctuations do not. In addition, we examined how image formation is affected by occlusions and observed some interesting phenomena that allow the eye to see around and through occluding objects and may allow observers to estimate depth in da Vinci stereopsis, where one eye's view is blocked. Finally, we evaluated how accurately different rendering and displaying techniques reproduce the retinal images that occur in real occlusions. We discuss implications for computer graphics.
Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity
McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.
2011-01-01
Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756
There is more to accommodation of the eye than simply minimizing retinal blur
Marín-Franch, I.; Del Águila-Carrasco, A. J.; Bernal-Molina, P.; Esteve-Taboada, J. J.; López-Gil, N.; Montés-Micó, R.; Kruger, P. B.
2017-01-01
Eyes of children and young adults change their optical power to focus nearby objects at the retina. But does accommodation function by trial and error to minimize blur and maximize contrast as is generally accepted? Three experiments in monocular and monochromatic vision were performed under two conditions while aberrations were being corrected. In the first condition, feedback was available to the eye from both optical vergence and optical blur. In the second, feedback was only available from target blur. Accommodation was less precise for the second condition, suggesting that it is more than a trial-and-error function. Optical vergence itself seems to be an important cue for accommodation. PMID:29082097
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and a detector resolution of 22400) from 12.5 hours to less than 5 minutes per iteration.
Diagnostic features of Alzheimer's disease extracted from PET sinograms
NASA Astrophysics Data System (ADS)
Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.
2002-01-01
Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task due to the poor signal-to-noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD patients and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with a sensitivity of 94% and a specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al. (1992) using regional metabolic activity.
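A minimal sketch of the classification stage follows, assuming the Trace-transform features are already extracted into a matrix X (one row per subject) with labels y. Scikit-learn's linear discriminant analysis with leave-one-out cross-validation stands in for the stepwise discriminant function analysis used in the paper, and the data here are random placeholders.

```python
# Cross-validated discriminant classification of AD vs. control features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 5))        # placeholder: 18 AD + 10 controls, 5 features
y = np.array([1] * 18 + [0] * 10)   # 1 = AD, 0 = normal control

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```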
Mann, David L; Abernethy, Bruce; Farrow, Damian
2010-07-01
Coupled interceptive actions are understood to be the result of neural processing, and of visual information, distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four visual blur conditions (plano, +1.00, +2.00, +3.00 D). Coupled responses were found to be better than uncoupled ones, and the blurring of vision had different effects in the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence suggested that low levels of blur may enhance the uncoupled verbal perception of movement.
Image and Video Quality Assessment Using LCD: Comparisons with CRT Conditions
NASA Astrophysics Data System (ADS)
Tourancheau, Sylvain; Callet, Patrick Le; Barba, Dominique
In this paper, the impact of the display on quality assessment is addressed. Subjective quality assessment experiments were performed on both LCD and CRT displays. Two sets of still images and two sets of moving pictures were assessed using either an ACR or a SAMVIQ protocol; altogether, eight experiments were conducted. Results are presented and discussed, and some differences are pointed out. Concerning moving pictures, these differences seem to be mainly due to LCD motion artefacts such as motion blur. LCD motion blur was measured objectively and with psycho-physical experiments. A motion-blur metric based on the temporal characteristics of the LCD can be defined. A prediction model was then designed to predict the differences in perceived quality between CRT and LCD. This motion-blur-based model enables the estimation of perceived quality on an LCD with respect to the perceived quality on a CRT. Technical solutions to LCD motion blur can thus be evaluated on natural content by this means.
Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays
NASA Astrophysics Data System (ADS)
Baek, Sangwook; Lee, Chulhee
2015-03-01
In this paper, we investigate two error issues in stereo images which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-to-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. To investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences with various vertical misalignments and with blurring in one image of the stereo pair. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using the properties of color images, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution
2009-10-01
Range estimation from a scene can result in errors due to several factors, including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target).
Effects of Scene Modulation Image Blur and Noise Upon Human Target Acquisition Performance.
1997-06-01
Report AFRL-HE-WP-TR-1998-0012, United States Air Force Research Laboratory; interim report covering July 1996 - August 1996. A dilemma in image transmission and display is that we must compromise between the conflicting constraints of dynamic range and noise.
Uddin, Muhammad Shahin; Halder, Kalyan Kumar; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-11-01
Ultrasound (US) imaging is a widely used clinical diagnostic tool among medical imaging techniques. It is a comparatively safe, economical, painless, portable, and noninvasive real-time tool compared to other imaging modalities. However, the image quality of US imaging is severely affected by the presence of speckle noise and blur introduced during the acquisition process. To ensure a high-quality clinical diagnosis, US images must be restored by reducing their speckle noise and blur. In general, speckle noise is modeled as multiplicative noise following a Rayleigh distribution, and blur as a Gaussian function. To this end, we propose an intelligent estimator based on artificial neural networks (ANNs) to estimate the variances of the noise and blur, which, in turn, are used to obtain an image without discernible distortions. A set of statistical features computed from the image and its complex wavelet sub-bands is used as input to the ANN. In the proposed method, we solve the inverse Rayleigh function numerically for speckle reduction and use the Richardson-Lucy algorithm for de-blurring. The performance of this method is compared with that of traditional methods by applying them to synthetic, physical-phantom and clinical data, which confirms the better restoration results of the proposed method.
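A minimal sketch of the restoration stage only: Richardson-Lucy de-blurring with an assumed Gaussian PSF, using scikit-image. The paper's ANN-based estimation of the noise and blur variances, and the numerical inversion of the Rayleigh speckle model, are not reproduced here.

```python
# Richardson-Lucy de-blurring with a Gaussian PSF built from an
# (externally estimated) blur standard deviation.
import numpy as np
from skimage.restoration import richardson_lucy

def deblur_ultrasound(image, psf_sigma=1.5, iterations=30):
    r = int(3 * psf_sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * psf_sigma ** 2))
    psf /= psf.sum()  # normalized PSF
    return richardson_lucy(image, psf, num_iter=iterations)
```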
Co-production in community mental health services: blurred boundaries or a game of pretend?
Kirkegaard, Sine; Andersen, Ditte
2018-06-01
The concept of co-production suggests a collaborative production of public welfare services, across boundaries of participant categories, for example professionals, service users, peer-workers and volunteers. While co-production has been embraced in most European countries, the way in which it is translated into everyday practice remains understudied. Drawing on ethnographic data from Danish community mental health services, we attempt to fill this gap by critically investigating how participants interact in an organisational set-up with blurred boundaries between participant categories. In particular, we clarify under what circumstances the blurred boundaries emerge as believable. Theoretically, we combine Lamont and Molnár's (2002) distinction between symbolic boundaries and social boundaries with Goffman's (1974) microanalysis of "principles of convincingness". The article presents three findings: (1) co-production is employed as a symbolic resource for blurring social boundaries; (2) the believability of blurred boundaries is worked up through participants' access to resources of validation, knowledge and authority; and (3) incongruence between symbolic and social boundaries institutionalises practices where participants merely act 'as if' boundaries are blurred. Clarification of the principles of convincingness contributes to a general discussion of how co-production frames the everyday negotiation of symbolic and social boundaries in public welfare services. © 2018 Foundation for the Sociology of Health & Illness.
Accommodation Responds to Optical Vergence and Not Defocus Blur Alone.
Del Águila-Carrasco, Antonio J; Marín-Franch, Iván; Bernal-Molina, Paula; Esteve-Taboada, José J; Kruger, Philip B; Montés-Micó, Robert; López-Gil, Norberto
2017-03-01
To determine whether changes in wavefront spherical curvature (optical vergence) are a directional cue for accommodation. Nine subjects participated in this experiment. The accommodation response to a monochromatic target was measured continuously with a custom-made adaptive optics system while astigmatism and higher-order aberrations were corrected in real time. There were two experimental open-loop conditions: vergence-driven condition, where the deformable mirror provided sinusoidal changes in defocus at the retina between -1 and +1 diopters (D) at 0.2 Hz; and blur-driven condition, in which the level of defocus at the retina was always 0 D, but a sinusoidal defocus blur between -1 and +1 D at 0.2 Hz was simulated in the target. Right before the beginning of each trial, the target was moved to an accommodative demand of 2 D. Eight out of nine subjects showed sinusoidal responses for the vergence-driven condition but not for the blur-driven condition. Their average (±SD) gain for the vergence-driven condition was 0.50 (±0.28). For the blur-driven condition, average gain was much smaller at 0.07 (±0.03). The ninth subject showed little to no response for both conditions, with average gain <0.08. Vergence-driven condition gain was significantly different from blur-driven condition gain (P = 0.004). Accommodation responds to optical vergence, even without feedback, and not to changes in defocus blur alone. These results suggest the presence of a retinal mechanism that provides a directional cue for accommodation from optical vergence.
Effects of blur and repeated testing on sensitivity estimates with frequency doubling perimetry.
Artes, Paul H; Nicolela, Marcelo T; McCormick, Terry A; LeBlanc, Raymond P; Chauhan, Balwantray C
2003-02-01
To investigate the effect of blur and repeated testing on sensitivity with frequency doubling technology (FDT) perimetry. One eye of 12 patients with glaucoma (mean deviation [MD] mean, -2.5 dB, range +0.5 to -4.3 dB) and 11 normal control subjects underwent six consecutive tests with the FDT N30 threshold program in each of two sessions. In session 1, blur was induced by trial lenses (-6.00, -3.00, 0.00, +3.00, and +6.00 D, in random order). In session 2, only the effects of repeated testing were evaluated. The MD and pattern standard deviation (PSD) indices were evaluated as functions of blur and of test order. By correcting the data of session 1 for the reduction of sensitivity with repeated testing (session 2), the effect of blur on FDT sensitivities was established, and its clinical consequences evaluated on total- and pattern-deviation probability maps. FDT sensitivities decreased with blur (by <0.5 dB/D) and with repeated testing (by approximately 2 dB between the first and sixth tests). Blur and repeated testing independently led to larger numbers of locations with significant total and pattern deviation. Sensitivity reductions were similar in normal control subjects and patients with glaucoma, at central and peripheral test locations and at locations with high and low sensitivities. However, patients with glaucoma showed larger deterioration in the total-deviation-probability maps. To optimize the performance of the device, refractive errors should be corrected and immediate retesting avoided. Further research is needed to establish the cause of sensitivity loss with repeated FDT testing.
Eye growth and myopia development: Unifying theory and Matlab model.
Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal
2016-03-01
The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. Cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in 5 simulations: (1 - unregulated eye growth): blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2 - regulated eye growth): blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3 - repeated near-far viewing): simulation of large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4 - neurochemical bulk flow and diffusion): release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of neurochemical cascade show that a decrease in dopamine results in a reduction of proteoglycan synthesis rate, which leads to myopia; (5 - Simulink model): model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. These model simulation programs (available upon request) can provide a useful tutorial for the general scientist and serve as a quantitative tool for researchers in eye growth and myopia. Copyright © 2016 Elsevier Ltd. All rights reserved.
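The toy model below, in Python rather than the authors' Matlab, illustrates the two presumed growth components: a genetically pre-programmed drive plus a blur feedback term proportional to the refractive error. All constants, the sign convention, and the first-order dynamics are illustrative assumptions, not fitted values from the published model.

```python
# Toy emmetropization loop: refractive error (diopters, positive = hyperopia)
# is reduced by axial growth, which is driven by a genetic term plus,
# optionally, blur feedback.
import numpy as np

def simulate_refraction(years=10.0, dt=0.01, k_genetic=0.05, k_blur=1.0,
                        feedback=True, error0=2.0):
    error, history = error0, []
    for _ in np.arange(0.0, years, dt):
        drive = k_genetic + (k_blur * error if feedback else 0.0)
        growth = max(0.0, drive)  # the eye cannot shrink
        error -= growth * dt      # axial growth reduces hyperopic error
        history.append(error)
    return np.array(history)
```

With feedback on, the error settles near -k_genetic/k_blur (emmetropization); with feedback off (as in form deprivation, simulation 1 above), growth is unregulated and the eye drifts steadily toward myopia.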
Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models
1998-03-01
Dissertation AFIT/DS/ENG/98-06, "Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models," by Stephen D. Ford, Captain. From the text: ...for phase distortions due to noise, which leads to less deblurring as noise increases [41]. In contrast, the vector Wiener filter incorporates some a...
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of a linear motion blur degradation from a captured blurred image. The proposed method analyzes the frequency spectrum of the captured image, first to estimate the degradation parameters and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration given by an objective criterion.
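Once the blur length and angle are estimated, the degradation can be inverted with any linear deconvolution filter. The sketch below builds a linear-motion PSF from those two parameters; restoring with it can then reuse, for example, the Wiener filter sketched earlier in this section. The sampling density and kernel size are illustrative choices.

```python
# Build a normalized linear-motion PSF from estimated length (pixels)
# and angle (degrees).
import numpy as np

def linear_motion_psf(length, angle_deg, size=None):
    size = size or int(length) + 3
    psf = np.zeros((size, size))
    c = (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # Distribute unit mass along the motion segment through the center.
    for t in np.linspace(-(length - 1) / 2.0, (length - 1) / 2.0,
                         4 * max(int(length), 1)):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        psf[y, x] += 1.0
    return psf / psf.sum()
```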
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
Stade, Björn; Seelow, Dominik; Thomsen, Ingo; Krawczak, Michael; Franke, Andre
2014-01-01
Next Generation Sequencing (NGS) of whole exomes or genomes is increasingly being used in human genetic research and diagnostics. Sharing NGS data with third parties can help physicians and researchers to identify causative or predisposing mutations for a specific sample of interest more efficiently. In many cases, however, the exchange of such data may collide with data privacy regulations. GrabBlur is a newly developed tool to aggregate and share NGS-derived single nucleotide variant (SNV) data in a public database, keeping individual samples unidentifiable. In contrast to other currently existing SNV databases, GrabBlur includes phenotypic information and contact details of the submitter of a given database entry. By means of GrabBlur, human geneticists can securely and easily share SNV data from resequencing projects. GrabBlur can ease the interpretation of SNV data by offering basic annotations, genotype frequencies and, in particular, phenotypic information - given that this information was shared - for the SNV of interest. GrabBlur facilitates the combination of phenotypic and NGS data (VCF files) via a local interface or command line operations. Data submissions may include HPO (Human Phenotype Ontology) terms, other trait descriptions, NGS technology information and the identity of the submitter. Most of this information is optional, and its provision is at the discretion of the submitter. Upon initial intake, GrabBlur merges and aggregates all sample-specific data. If a certain SNV is rare, the sample-specific information is replaced with the submitter identity. Generally, all data in GrabBlur are highly aggregated so that they can be shared with others while ensuring maximum privacy. Thus, it is impossible to reconstruct complete exomes or genomes from the database or to re-identify single individuals. After the individual information has been sufficiently "blurred", the data can be uploaded into a publicly accessible domain where aggregated genotypes are provided alongside phenotypic information. A web interface allows querying the database and the extraction of gene-wise SNV information. If an interesting SNV is found, the querying user can get in contact with the submitter to exchange further information on the carrier and clarify, for example, whether the carrier's phenotype matches the phenotype of their own patient.
Hernández, Cristina; Zapata, Miguel A; Losada, Eladio; Villarroel, Marta; García-Ramírez, Marta; García-Arumí, José; Simó, Rafael
2010-07-01
To evaluate whether intensive insulin therapy leads to changes in macular biometrics (volume and thickness) in newly diagnosed diabetic patients with acute hyperglycaemia, and its relationship with serum levels of vascular endothelial growth factor (VEGF) and its soluble receptor (sFlt-1). Twenty-six newly diagnosed diabetic patients admitted to our hospital to initiate intensive insulin treatment were prospectively recruited. Examinations were performed on admission (day 1) and during follow-up (days 3, 10 and 21) and included a questionnaire regarding the presence of blurred vision, standardized refraction measurements and optical coherence tomography. Plasma VEGF and sFlt-1 were assessed by ELISA at baseline and during follow-up. At study entry, seven patients (26.9%) complained of blurred vision and five (19.2%) developed blurred vision during follow-up. Macular volume and thickness increased significantly (p = 0.008 and p = 0.04, respectively) in the group with blurred vision at day 3 and returned to the baseline value at 10 days. This pattern was present in 18 out of the 24 eyes from patients with blurred vision. By contrast, macular biometrics remained unchanged in the group without blurred vision. We did not detect any significant changes in VEGF levels during follow-up. By contrast, a significant reduction of sFlt-1 was observed in those patients with blurred vision at day 3 (p = 0.03), with normalization by day 10. Diabetic patients with blurred vision after starting insulin therapy present a significant transient increase in macular biometrics which is associated with a decrease in circulating sFlt-1. Copyright (c) 2010 John Wiley & Sons, Ltd.
Doyle, Lesley; Saunders, Kathryn J; Little, Julie-Anne
2017-01-10
Individuals with Down syndrome (DS) often exhibit hypoaccommodation alongside accurate vergence. This study investigates the sensitivity of the two systems to retinal disparity and blur cues, establishing the relationship between the two in terms of accommodative-convergence to accommodation (AC/A) and convergence-accommodation to convergence (CA/C) ratios. An objective photorefraction system measured accommodation and vergence under binocular conditions and when retinal disparity and blur cues were removed. Participants were aged 6-16 years (DS n = 41, controls n = 76). Measures were obtained from 65.9% of participants with DS and 100% of controls. Accommodative and vergence responses were reduced with the removal of one or both cues in controls (p < 0.007). For participants with DS, removal of blur was less detrimental to accommodative responses than removal of disparity; accommodative responses being significantly better when all cues were available or when blur was removed in comparison to when proximity was the only available cue. AC/A ratios were larger and CA/C ratios smaller in participants with DS (p < 0.00001). This study demonstrates that retinal disparity is the main driver to both systems in DS and illustrates the diminished influence of retinal blur. High AC/A and low CA/C ratios in combination with disparity-driven responses suggest prioritisation of vergence over accurate accommodation.
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for obtaining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the estimation error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the bright central region in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the Moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
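A minimal sketch of the direction step follows: the log-magnitude spectrum of a motion-blurred image contains parallel dark stripes perpendicular to the motion, and a Radon transform of the binarized spectrum peaks at the stripe orientation. The GrabCut segmentation step from the paper is replaced here by a simple intensity threshold, which is an assumption of ours.

```python
# Estimate the orientation of motion-blur stripes in the frequency spectrum
# via a Radon transform of the thresholded log-spectrum.
import numpy as np
from skimage.transform import radon

def blur_direction(gray):
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    spec = (spec - spec.min()) / (spec.max() - spec.min())
    binary = (spec > 0.7).astype(float)  # crude stand-in for GrabCut
    angles = np.arange(0.0, 180.0)
    sino = radon(binary, theta=angles, circle=False)
    # The projection with the largest peak aligns with the spectral stripes.
    return angles[np.argmax(sino.max(axis=0))]
```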
Quantifying how the combination of blur and disparity affects the perceived depth
NASA Astrophysics Data System (ADS)
Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick
2011-03-01
The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces the conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
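A minimal sketch of extracting a point of subjective equality (PSE) from such 2AFC data: fit a logistic psychometric function to the proportion of "blurred-background image looks deeper" responses versus the disparity of the comparison stimulus. The data values, units, and initial guesses here are invented for illustration.

```python
# Fit a logistic psychometric function to 2AFC proportions and read off
# the point of subjective equality (50% point).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

disparity = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])  # arcmin (assumed)
p_deeper = np.array([0.95, 0.85, 0.60, 0.45, 0.20, 0.10])

# Proportions fall with disparity here, so fit the mirrored response.
(pse, slope), _ = curve_fit(logistic, disparity, 1 - p_deeper, p0=(7.0, 1.0))
print(f"PSE: blur adds depth equivalent to ~{pse:.1f} arcmin of disparity")
```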
Optical research of biomaterials of Sorbulak
NASA Astrophysics Data System (ADS)
Esyrev, O. V.; Kupchishin, A. A.; Kupchishin, A. I.; Voronova, N. A.
2016-02-01
Within the framework of this optical research it was established that the material of unpolluted sedge stem samples is structured, whereas in contaminated and irradiated samples a blurring of the structure takes place. Sedge and rush samples were collected in areas near the first Sorbulak dam; for comparison, samples of the same materials were taken far from populated areas. Irradiation was carried out with high-energy electrons with an energy of 2 MeV and an integral dose of 3·10^5 Gy. Irradiation leads to a more pronounced structuredness of the material. There are significant differences in the structural elements (epidermis, vascular bundles, parenchymal cells, etc.). Dark spots and bands, associated with the presence of large amounts of heavy metals, are traced against the background of a green matrix.
Luminance cues constrain chromatic blur discrimination in natural scene stimuli.
Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W
2013-03-22
Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.
Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.
Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia
2011-04-11
An optical analysis is developed to separate the forward light scatter of the human eye from the conventional wavefront aberrations in a double-pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blurs from macro-aberrations and micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double-pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
Multi-Stage Target Tracking with Drift Correction and Position Prediction
NASA Astrophysics Data System (ADS)
Chen, Xin; Ren, Keyan; Hou, Yibin
2018-04-01
Most existing tracking methods struggle to combine accuracy with speed, and they do not consider the shifts between clarity and blur that often occur in practice. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and corrective measure. We conduct tracking based on a correlation filter, with a corrective-measure module to increase both speed and accuracy. Specifically, a convolutional network is used to handle blur in realistic scenes; it is trained on a dataset augmented with blurred images generated by three blur algorithms. We then propose a position prediction module to reduce the computation cost and make the tracker more capable of handling fast motion. Experimental results show that our tracking method is more robust than others and more accurate on the benchmark sequences.
LCD motion blur reduction: a signal processing approach.
Har-Noy, Shay; Nguyen, Truong Q
2008-02-01
Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.
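For reference, the core Richardson-Lucy iteration that such deblurring schemes build on can be written compactly. This sketch assumes the PSF is already known (in the paper it would follow from the motion vectors) and omits the perceptual-significance weighting; names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    # Classic multiplicative Richardson-Lucy updates, starting from the blurred frame.
    estimate = np.clip(blurred.astype(float), eps, None)
    psf_mirror = psf[::-1, ::-1]  # adjoint of convolution with the PSF
    for _ in range(n_iter):
        denom = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(denom, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```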
Using compressive sensing to recover images from PET scanners with partial detector rings.
Valiollahzadeh, SeyyedMajid; Clark, John W; Mawlawi, Osama
2015-01-01
Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors' aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration ratio of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram and then Poisson noise was added. The resultant sinogram was masked to create the effect of partial detector removal and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres each filled with F-18 at an activity-to-background ratio of 10:1 was used. The phantom was imaged twice on a RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0°, 90°, 180°, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm. For the third test, PET images from six patient studies were investigated using the same strategy as the phantom study. The recovered images using WTV and TV as well as the partially sampled images from all three experiments were then compared with the fully sampled images (the baseline). Comparisons were done by calculating the mean error (%bias), root mean square error (RMSE), contrast recovery (CR), and SNR of activity concentration in regions of interest drawn in the background as well as the disks, spheres, and lesions. For the simulation study, the mean error, RMSE, and CR for the WTV (TV) recovered images were 0.26% (0.48%), 2.6% (2.9%), 97% (96%), respectively, when compared to baseline. For the partially sampled images, these results were 22.5%, 45.9%, and 64%, respectively. For the simulation study, the average SNR for the baseline was 41.7, while for the WTV (TV) recovered image it was 44.2 (44.0). The phantom study showed similar trends with 5.4% (18.2%), 15.6% (18.8%), and 78% (60%), respectively, for the WTV (TV) images and 33%, 34.3%, and 69% for the partially sampled images. For the phantom study, the average SNR for the baseline was 14.7, while for the WTV (TV) recovered image it was 13.7 (11.9). Finally, the average of these values for the six patient studies for the WTV-recovered, TV, and partially sampled images was 1%, 7.2%, 92% and 1.3%, 15.1%, 87%, and 27%, 25.8%, 45%, respectively. CS with WTV is capable of recovering PET images with good quantitative accuracy from partially sampled data.
Such an approach can be used to potentially reduce the cost of scanners while maintaining good image quality.
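One plausible way to write the wavelet-TV recovery model described above is the objective below, with W a wavelet transform, A the system matrix, y the observed counts on the set Ω of working sinogram bins, and λ, β assumed trade-off weights; the paper's exact formulation and constraint handling may differ.

```latex
\hat{x} \;=\; \arg\min_{x \ge 0} \; \mathrm{TV}(x)
\;+\; \lambda \,\lVert W x \rVert_{1}
\;+\; \beta \sum_{i \in \Omega} \left[ (A x)_i - y_i \log (A x)_i \right]
```

The last term is the Poisson negative log-likelihood (up to a constant), matching the abstract's description of penalizing the Poisson log-likelihood while constraining the solution to the partially observed data.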
Blind estimation of blur in hyperspectral images
NASA Astrophysics Data System (ADS)
Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir
2017-10-01
Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean that neither the blur point spread function (PSF), nor the original latent channel, nor the noise level is known. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme: for each degraded channel, the blur PSF is estimated in a first stage, and the degraded channel is deconvolved in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur PSF effectively and accurately. This method follows recent approaches suggesting the detection, selection, and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work: a new selection of salient edges through adequate thresholding of the cumulative distribution of their gradient magnitudes, and quasi-automatic, spatially adaptive tuning of the regularization parameters involved. To demonstrate the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image, built from samples of classified areas of a real-life hyperspectral image in order to benefit from a realistic spatial distribution of the reference spectral signatures to be recovered after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each with a different support size. Conclusions, practical recommendations, and perspectives are drawn from the experimentally obtained results.
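The salient-edge selection step lends itself to a compact sketch: keep only gradient magnitudes above a quantile of their cumulative distribution. The threshold fraction below is an illustrative assumption, not the value used in the paper.

```python
import numpy as np

def salient_edge_mask(channel, keep_fraction=0.02):
    # Gradient magnitude of one hyperspectral channel.
    gy, gx = np.gradient(channel.astype(float))
    mag = np.hypot(gx, gy)
    # Threshold the cumulative distribution: keep only the strongest few percent.
    thresh = np.quantile(mag, 1.0 - keep_fraction)
    return mag >= thresh
```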
MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies; it degrades image quality and offsets the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompt and random coincidences as well as the sensitivity data are processed in line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram-space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high-temporal-resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High-temporal-resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415
Validation of CT dose-reduction simulation
Massoumzadeh, Parinaz; Don, Steven; Hildebolt, Charles F.; Bae, Kyongtae T.; Whiting, Bruce R.
2009-01-01
The objective of this research was to develop and validate a custom computed tomography dose-reduction simulation technique for producing images that have an appearance consistent with the same scan performed at a lower mAs (with fixed kVp, rotation time, and collimation). Synthetic noise is added to projection (sinogram) data, incorporating a stochastic noise model that includes energy-integrating detectors, tube-current modulation, bowtie beam filtering, and electronic system noise. Experimental methods were developed to determine the parameters required for each component of the noise model. As a validation, the outputs of the simulations were compared to measurements with cadavers in the image domain and with phantoms in both the sinogram and image domain, using an unbiased root-mean-square relative error metric to quantify agreement in noise processes. Four-alternative forced-choice (4AFC) observer studies were conducted to confirm the realistic appearance of simulated noise, and the effects of various system model components on visual noise were studied. The “just noticeable difference (JND)” in noise levels was analyzed to determine the sensitivity of observers to changes in noise level. Individual detector measurements were shown to be normally distributed (p>0.54), justifying the use of a Gaussian random noise generator for simulations. Phantom tests showed the ability to match original and simulated noise variance in the sinogram domain to within 5.6%±1.6% (standard deviation), which was then propagated into the image domain with errors less than 4.1%±1.6%. Cadaver measurements indicated that image noise was matched to within 2.6%±2.0%. More importantly, the 4AFC observer studies indicated that the simulated images were realistic, i.e., no detectable difference between simulated and original images (p=0.86) was observed. JND studies indicated that observers’ sensitivity to change in noise levels corresponded to a 25% difference in dose, which is far larger than the noise accuracy achieved by simulation. In summary, the dose-reduction simulation tool demonstrated excellent accuracy in providing realistic images. The methodology promises to be a useful tool for researchers and radiologists to explore dose reduction protocols in an effort to produce diagnostic images with radiation dose “as low as reasonably achievable.” PMID:19235386
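A bare-bones version of this sinogram-domain noise injection is sketched below; it keeps only the Poisson and electronic-noise terms and uses a Gaussian generator (consistent with the normality finding above), while the calibrated bowtie-filter and tube-current-modulation components of the validated model are omitted. All names and the simplified variance bookkeeping are illustrative assumptions.

```python
import numpy as np

def simulate_reduced_dose(counts, alpha, sigma_e=0.0, rng=None):
    # counts: full-dose sinogram photon counts; alpha: target mAs fraction (0 < alpha < 1).
    rng = np.random.default_rng() if rng is None else rng
    scaled = alpha * counts  # expected counts at the reduced dose
    # Scaling the measured data multiplies its Poisson variance by alpha**2;
    # Gaussian noise tops the variance up to the target alpha*counts, plus
    # electronic system noise of variance sigma_e**2.
    extra_var = alpha * (1.0 - alpha) * counts + sigma_e ** 2
    return scaled + rng.normal(0.0, np.sqrt(extra_var))
```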
Normal, nearsightedness, and farsightedness (image)
Nearsightedness results in blurred vision when the visual image is focused in front of the retina rather than directly on it, so distant objects appear blurred. Farsightedness is the result of the visual image being focused behind the retina rather than directly on it, so near objects appear blurred.
Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor
Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki
2015-01-01
This paper presents a fast adaptive image restoration method for removing the spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point spread function (PSF) using derivatives in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods on both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
Carasso, Alfred S
2013-01-01
Identifying sources of ground water pollution, and deblurring nanoscale imagery as well as astronomical galaxy images, are two important applications involving numerical computation of parabolic equations backward in time. Surprisingly, very little is known about backward continuation in nonlinear parabolic equations. In this paper, an iterative procedure originating in spectroscopy in the 1930’s, is adapted into a useful tool for solving a wide class of 2D nonlinear backward parabolic equations. In addition, previously unsuspected difficulties are uncovered that may preclude useful backward continuation in parabolic equations deviating too strongly from the linear, autonomous, self adjoint, canonical model. This paper explores backward continuation in selected 2D nonlinear equations, by creating fictitious blurred images obtained by using several sharp images as initial data in these equations, and capturing the corresponding solutions at some positive time T. Successful backward continuation from t=T to t = 0, would recover the original sharp image. Visual recognition provides meaningful evaluation of the degree of success or failure in the reconstructed solutions. Instructive examples are developed, illustrating the unexpected influence of certain types of nonlinearities. Visually and statistically indistinguishable blurred images are presented, with vastly different deblurring results. These examples indicate that how an image is nonlinearly blurred is critical, in addition to the amount of blur. The equations studied represent nonlinear generalizations of Brownian motion, and the blurred images may be interpreted as visually expressing the results of novel stochastic processes. PMID:26401430
Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.
Yang, Min; Zhang, Wen; Wu, Xiaojun; Wei, Dongtao; Zhao, Yixin; Zhao, Gang; Han, Xu; Zhang, Shunli
2016-01-01
For analyzing the interior geometry and property changes of a large-sized analog model during loading or injection of another medium (water or oil) in a non-destructive way, a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomographic imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, switching alternately between positive and negative directions during translation, until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this method solves the winding problems of high-voltage cables and oil-cooling service pipes during rotation, and also simplifies the installation of the high-voltage generator and cooling system. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram that matches the rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data that do not contribute to the image reconstruction. Finally, on the basis of the geometrical symmetry of fan-beam CT scanning, a whole sinogram filling the full 360° range is produced and a standard filtered back-projection (FBP) algorithm is performed to reconstruct artifact-free images.
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects with this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired, and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of the 3D objects are assembled from the secondary projections, and the ordered-subset expectation maximization (OSEM) algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results demonstrate the feasibility of the authors' 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high-sensitivity, high-resolution SPECT imaging system. PMID:19544769
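For context, the OSEM update applied to each assembled sinogram has the standard form below, where S_b denotes the b-th subset of sinogram bins, a_{ij} the system-matrix weight linking bin i to voxel j, and y_i the measured counts; the notation is ours, not the paper's.

```latex
x_j^{(k,b+1)} \;=\; \frac{x_j^{(k,b)}}{\sum_{i \in S_b} a_{ij}}
\sum_{i \in S_b} a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k,b)}}
```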
NASA Astrophysics Data System (ADS)
Phaterpekar, Siddhesh Nitin
The scope of this article is to cover the synthesis and quality control procedures involved in the production of fludeoxyglucose (18F-FDG). The article also describes the cyclotron production of the 18F radioisotope and gives a brief overview of the operation and working of a fixed-energy medical cyclotron. The quality control procedures for FDG involve radiochemical and radionuclidic purity tests, pH tests, chemical purity tests, sterility tests, and endotoxin tests. Each of these procedures was carried out for multiple batches of FDG, with a passing rate of 95% among 20 batches. The article also covers the quality assurance steps for the Siemens MicroPET Focus 220 scanner using a Jaszczak phantom. We have carried out spatial resolution tests on the scanner, obtaining an average transaxial resolution of 1.775 mm at 2-3 mm offset. Tests involved detector efficiency, blank scan sinograms, and transmission sinograms. A series of radioactivity distribution tests was also carried out on a uniform phantom, characterizing the variations in radioactivity and uniformity using cylindrical ROIs in the transverse region of the final image. The purpose of these quality control tests is to make sure the manufactured FDG is biocompatible with the human body. Quality assurance tests are carried out on PET scanners to ensure efficient performance and to make sure the quality of the acquired images reflects the radioactivity distribution in the subject of interest.
Differences in children and adolescents' ability of reporting two CVS-related visual problems.
Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao
2013-01-01
The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants were 913 children and adolescents aged 6-17. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among the 873 participants who had never worn glasses, as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings are discussed. Little is known about children's ability to report their visual problems, an issue critical to the diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports with clinical examination results and found that children can correctly report blurred distance vision but not dry eyes.
Estimation of stereovision in conditions of blurring simulation
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Lacis, Ivazs; Lyakhovetskii, Vsevolod
2005-08-01
The aim of this study was to evaluate the simulation of eye pathologies such as amblyopia and cataract, to estimate stereovision under artificial conditions, and to compare the stereothresholds obtained under artificial and real pathologic conditions. These real-life forms of reduced vision are characterized by a blurred image in one of the eyes. The blurring was simulated by (i) defocusing, (ii) blurred stimuli on the screen, and (iii) occluding an eye with PLZT or PDLC plates. When comparing the methods, two parameters were used: the subject's visual acuity and the modulation depth of the image. The eye-occluder method systematically produced higher stereothreshold values than the other methods. The PLZT and PDLC plates scattered more in the blue and decreased the contrast of the stimuli as the blurring degree increased. In the eye-occluder method, the stereothreshold increased faster than in the defocusing and monitor-stimuli methods when the visual acuity difference exceeded 0.4. It was shown that the PLZT and PDLC plates are good optical phantoms for the simulation of a cataract, while the defocusing and monitor-stimuli methods are more suitable for amblyopia.
Seeing blur: 'motion sharpening' without motion.
Georgeson, Mark A; Hammett, Stephen T
2002-01-01
It is widely supposed that things tend to look blurred when they are moving fast. Previous work has shown that this is true for sharp edges but, paradoxically, blurred edges look sharper when they are moving than when stationary. This is 'motion sharpening'. We show that blurred edges also look up to 50% sharper when they are presented briefly (8-24 ms) than at longer durations (100-500 ms) without motion. This argues strongly against high-level models of sharpening based specifically on compensation for motion blur. It also argues against a recent, low-level, linear filter model that requires motion to produce sharpening. No linear filter model can explain our finding that sharpening was similar for sinusoidal and non-sinusoidal gratings, since linear filters can never distort sine waves. We also conclude that the idea of a 'default' assumption of sharpness is not supported by experimental evidence. A possible source of sharpening is a nonlinearity in the contrast response of early visual mechanisms to fast or transient temporal changes, perhaps based on the magnocellular (M-cell) pathway. Our finding that sharpening is not diminished at low contrast sets strong constraints on the nature of the nonlinearity. PMID:12137571
Fabrication of digital rainbow holograms and 3-D imaging using SEM based e-beam lithography.
Firsov, An; Firsov, A; Loechel, B; Erko, A; Svintsov, A; Zaitsev, S
2014-11-17
Here we present an approach for creating full-color digital rainbow holograms based on mixing three basic colors. Much like in a color TV with three luminescent points per screen pixel, each color pixel of the initial image is represented by three (R, G, B) distinct diffractive gratings in the hologram structure. A change of either the duty cycle or the area of the gratings is used to provide the proper R, G, B intensities. Special algorithms allow one to design rather complicated 3D images (which might even replace each other as the hologram is rotated). The software developed ("RainBow") provides stable colorization of the rotated image by equalizing the angular blur from the gratings responsible for the R, G, B basic colors. The approach based on R, G, B color synthesis allows one to fabricate gray-tone rainbow holograms containing white color, which is hardly possible in traditional dot-matrix technology. Budgetary electron beam lithography based on an SEM column was used to fabricate practical examples of digital rainbow holograms. The results of fabricating large rainbow holograms, from design to imprinting, are presented. Advantages of EBL in comparison to traditional optical (dot-matrix) technology are considered.
Blur-resistant perimetric stimuli.
Horner, Douglas G; Dul, Mitchell W; Swanson, William H; Liu, Tiffany; Tran, Irene
2013-05-01
To develop perimetric stimuli that are resistant to the effects of peripheral defocus. One eye each was tested on subjects free of eye disease. Experiment 1 assessed spatial frequency, testing 12 subjects at eccentricities from 2 to 7 degrees using blur levels from 0 to 3 diopters (D) for two (Gabor) stimuli (spatial SD, 0.5 degrees; spatial frequencies, 0.5 and 1.0 cycles per degree [cpd]). Experiment 2 assessed stimulus size, testing 12 subjects at eccentricities from 4 to 7 degrees using blur levels 0 to 6 D for two Gaussians with SD of 0.5 and 0.25 degrees and a 0.5-cpd Gabor with SD of 0.5 degrees. Experiment 3 tested 13 subjects at eccentricities from fixation to 27 degrees using blur levels 0 to 6 D for Gabor stimuli at 56 locations; the spatial frequency ranged from 0.14 to 0.50 cpd with location, and SD was scaled accordingly. In experiment 1, blur by 3 D caused a small decline in log contrast sensitivity for the 0.5-cpd stimulus (mean ± SE, 0.09 ± 0.08 log units) and a larger (t = 7.7, p < 0.0001) decline for the 1.0-cpd stimulus (0.37 ± 0.13 log units). In experiment 2, blur by 6 D caused minimal decline for the larger Gaussian, by 0.17 ± 0.16 log units, and larger (t > 4.5, p < 0.001) declines for the smaller Gaussian (0.33 ± 0.16 log units) and the Gabor (0.36 ± 0.18 log units). In experiment 3, blur by 6 D caused declines by 0.27 ± 0.05 log units for eccentricities from 0 to 10 degrees, by 0.20 ± 0.04 log units for eccentricities from 10 to 20 degrees, and 0.13 ± 0.03 log units for eccentricities from 20 to 27 degrees. Experiments 1 and 2 allowed us to design stimuli for experiment 3 that were resistant to effects of peripheral defocus.
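The stimuli in these experiments are standard Gabors and Gaussians; below is a minimal sketch of a Gabor luminance profile (names and parameter handling are ours) that could represent, e.g., the 0.5-cpd, 0.5-degree-SD condition.

```python
import numpy as np

def gabor(size_px, px_per_deg, sf_cpd, sigma_deg, contrast=1.0):
    # Vertical Gabor: a sinusoidal carrier windowed by a Gaussian envelope.
    half = size_px / 2.0
    y, x = np.mgrid[-half:half, -half:half] / px_per_deg  # coordinates in degrees
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_deg ** 2))
    carrier = np.cos(2.0 * np.pi * sf_cpd * x)
    return contrast * envelope * carrier  # add to the display's mean luminance
```

For instance, gabor(256, 32, 0.5, 0.5) would give a 256-pixel patch at an assumed 32 pixels per degree.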
Blurred image restoration using knife-edge function and optimal window Wiener filtering.
Wang, Min; Zhou, Shudao; Yan, Wei
2018-01-01
Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects. PMID:29377950
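The final Wiener filtering step can be summarized in the frequency domain as below. This is the textbook constant noise-to-signal-ratio form with illustrative names, not the paper's optimal-window variant, and it assumes the PSF has been shifted so its center sits at index (0, 0) (e.g., via np.fft.ifftshift).

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    # nsr approximates the noise-to-signal power ratio (regularizes the inverse).
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```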
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts, acting over a wider region of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)
2014-01-01
A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
NASA Astrophysics Data System (ADS)
Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H. M.; Poot, Dirk H. J.; Niessen, Wiro J.; Klein, Stefan
2014-03-01
The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the three following cases: no image processing, Gaussian blurring of the raw DW-MRIs and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In a ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
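After registration, the voxelwise exponential fit that produces the ADC map is typically solved in log space. A minimal sketch (names ours) under the standard monoexponential model S(b) = S0 · exp(-b · ADC):

```python
import numpy as np

def fit_adc(dwis, bvals):
    # dwis: (n_b, ny, nx) stack of motion-corrected DW-MRIs, one per b-value.
    bvals = np.asarray(bvals, dtype=float)
    logs = np.log(np.maximum(dwis, 1e-6)).reshape(len(bvals), -1)
    # ln S(b) = ln S0 - b * ADC, solved voxelwise by linear least squares.
    design = np.stack([np.ones_like(bvals), -bvals], axis=1)
    coef, *_ = np.linalg.lstsq(design, logs, rcond=None)
    return coef[1].reshape(dwis.shape[1:])  # the ADC map
```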
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.
An image of the Columbia Plateau from inversion of high-resolution seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutter, W.J.; Catchings, R.D.; Jarchow, C.M.
1994-08-01
The authors use a method of traveltime inversion of high-resolution seismic data to provide the first reliable images of the internal details of the Columbia River Basalt Group (CRBG), the subsurface basalt/sediment interface, and the deeper sediment/basement interface. Velocity structure within the basalts, delineated on the order of 1 km horizontally and 0.2 km vertically, is constrained to within ±0.1 km/s for most of the seismic profile. Over 5,000 observed traveltimes fit their model with an rms error of 0.018 s. The maximum depth of penetration of the basalt diving waves (truncated by underlying low-velocity sediments) provides a reliable estimate of the depth to the base of the basalt, which agrees with well-log measurements to within 0.05 km (165 ft). The authors use image blurring, calculated from the resolution matrix, to estimate the ratio of imaged velocity anomaly widths to true widths for velocity features within the basalt. From their calculations of image blurring, they interpret low-velocity zones (LVZs) within the basalts at Boylston Mountain and the Whiskey Dick anticline to have widths of 4.5 and 3 km, respectively, within the upper 1.5 km of the model. At greater depth, the widths of these imaged LVZs thin to approximately 2 km or less. They interpret these linear, subparallel, low-velocity zones imaged adjacent to anticlines of the Yakima Fold Belt to be brecciated fault zones. These fault zones dip to the south at angles between 15 and 45 degrees.
Comparison of Motion Blur Measurement Methods
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2008-01-01
Motion blur is a significant display property for which accurate, valid measurement methods are needed. Recent measurements of a set of eight displays by a set of six measurement devices provide an opportunity to evaluate techniques of measurement and of the analysis of those measurements.
Indoor Spatial Updating with Reduced Visual Information
Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.
2016-01-01
Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes an MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of the noise-resolution tradeoff and detectability in low-contrast environments. The KL-PWLS implementation may have a computational advantage for high-resolution dynamic low-dose CT imaging. PMID:17024831
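In compact form, the PWLS objective shared by the three implementations can be written as below, with y the measured sinogram data (or its KL components), A the projection operator, Σ the diagonal covariance built from the first- and second-order noise moments, R the quadratic MRF Gibbs penalty, and β the smoothing weight; the notation is assumed for illustration.

```latex
\Phi(x) \;=\; (y - A x)^{\mathsf{T}}\, \Sigma^{-1} (y - A x) \;+\; \beta\, R(x)
```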
Photographic simulation of off-axis blurring due to chromatic aberration in spectacle lenses.
Doroslovački, Pavle; Guyton, David L
2015-02-01
Spectacle lens materials of high refractive index (nd) tend to have high chromatic dispersion (low Abbé number [V]), which may contribute to visual blurring with oblique viewing. A patient who noted off-axis blurring with new high-refractive-index spectacle lenses prompted us to do a photographic simulation of the off-axis aberrations in 3 readily available spectacle lens materials, CR-39 (nd = 1.50), polyurethane (nd = 1.60), and polycarbonate (nd = 1.59). Both chromatic and monochromatic aberrations were found to cause off-axis image degradation. Chromatic aberration was more prominent in the higher-index materials (especially polycarbonate), whereas the lower-index CR-39 had more astigmatism of oblique incidence. It is important to consider off-axis aberrations when a patient complains of otherwise unexplained blurred vision with a new pair of spectacle lenses, especially given the increasing promotion of high-refractive-index materials with high chromatic dispersion. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Preprocessing of SAR interferometric data using anisotropic diffusion filter
NASA Astrophysics Data System (ADS)
Sartor, Kenneth; Allen, Josef De Vaughn; Ganthier, Emile; Tenali, Gnana Bhaskar
2007-04-01
The most commonly used smoothing algorithms for complex data processing are blurring functions (e.g., Hanning, Taylor weighting, Gaussian, etc.). Unfortunately, the filters so designed blur the edges in a Synthetic Aperture Radar (SAR) scene, reduce the accuracy of features, and blur the fringe lines in an interferogram. For Digital Surface Map (DSM) extraction, the blurring of these fringe lines causes inaccuracies in the height of the unwrapped terrain surface. Our goal here is to perform spatially non-uniform smoothing to overcome the above-mentioned disadvantages. This is achieved by using a Complex Anisotropic Non-Linear Diffuser (CANDI) filter that is spatially varying. In particular, an appropriate choice of the convection function in the CANDI filter is able to accomplish the non-uniform smoothing. This boundary-sharpening, intra-region smoothing filter acts on interferometric SAR (IFSAR) data with noise to produce an interferogram with significantly reduced noise content and desirable local smoothing. Results of CANDI filtering will be discussed and compared with those obtained by using standard filters on simulated data.
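The real-valued Perona-Malik iteration below shows the basic anisotropic-diffusion mechanism that CANDI generalizes to complex IFSAR data; the conduction function and parameters are illustrative, and this is not the CANDI filter itself.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, step=0.2):
    # Classical real-valued anisotropic diffusion (Perona-Malik).
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four nearest neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction: near zero across strong edges, ~1 in flat areas.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        # step <= 0.25 keeps the explicit scheme stable on a 4-neighbour grid.
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The intra-region smoothing and boundary sharpening described in the abstract come from this conduction term: diffusion is suppressed wherever the local difference is large relative to kappa.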
Face imagery is based on featural representations.
Lobmaier, Janek S; Mast, Fred W
2008-01-01
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts, configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In the following matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit rates showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect in the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.
Visual uncertainty influences the extent of an especial skill.
Czyż, S H; Kwon, O-S; Marzec, J; Styrkowiec, P; Breslin, G
2015-12-01
An especial skill in basketball emerges through highly repetitive practice at the 15 ft free throw line. The extent of the role vision plays in the emergence of an especial skill is unknown. We examined the especial skills of ten skilled basketball players in normal and blurred vision conditions, where blur was induced by corrective lenses worn by the participants. As such, we selectively manipulated visual information without affecting the participants' explicit knowledge that they were shooting free throws. We found that shot efficiency was significantly lower in blurred vision conditions, as expected, and that the concave shape of the shot proficiency function in normal vision conditions became approximately linear in blurred vision conditions. By applying a recently proposed generalization model of especial skills, we suggest that the linearity of the shot proficiency function reflects the participants' lesser dependence on the especial skill in blurred vision conditions. The findings further characterize the role of visual context in the emergence of an especial skill.
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring usually appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. Facial features in captured images are therefore distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method enhances age estimation performance compared with systems that do not employ it. PMID:26334282
Real-time deblurring of handshake blurred images on smartphones
NASA Astrophysics Data System (ADS)
Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser
2015-02-01
This paper discusses an Android app for removing blur that is introduced as a result of handshakes when taking images via a smartphone. The algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image, and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low-rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The singular values extracted from the low-rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm previously developed for the same purpose.
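The precise rule for combining the two images' SVD spectra is not given in the abstract; the sketch below shows one plausible reading, in which the leading singular values of the auto-exposure image carry brightness and contrast while the tail is re-weighted toward the short-exposure image's spectrum. The rank and blend weight alpha are hypothetical tuning knobs, not values from the paper.

```python
import numpy as np

def deblur_by_svd_blend(auto_img, short_img, rank=30, alpha=0.7):
    """Blend the SVD spectra of an auto-exposure (blurred, well-exposed)
    image and a short-exposure (sharp, dark/noisy) image of one scene.

    Both images must have the same shape.  The leading singular values
    of the auto-exposure image are kept as carriers of brightness and
    contrast; the tail is re-weighted toward the short-exposure spectrum.
    """
    Ua, sa, Vta = np.linalg.svd(auto_img.astype(float), full_matrices=False)
    _, ss, _ = np.linalg.svd(short_img.astype(float), full_matrices=False)
    s = sa.copy()
    s[rank:] = alpha * ss[rank:] + (1.0 - alpha) * sa[rank:]
    return (Ua * s) @ Vta        # same as Ua @ diag(s) @ Vta, but cheaper
```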
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with a similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade spatial resolution or achieve lower low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
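In sketch form, the formulation is a penalized least-squares problem: keep the decomposed material images close to the direct inversion under the metric of their inverse variance-covariance matrix, plus a smoothness penalty. The minimal gradient-descent loop below, with hypothetical shapes and step size, only illustrates that structure; the paper's solver and covariance estimation are more elaborate.

```python
import numpy as np

def denoise_decomposition(x_direct, cov_inv, beta=0.05, n_iter=200, step=0.2):
    """Sketch of covariance-weighted iterative image-domain decomposition.

    x_direct : (2, H, W) material images from direct matrix inversion.
    cov_inv  : (2, 2, H, W) per-pixel inverse variance-covariance matrix.
    Minimizes sum_p (x-xd)^T C^-1 (x-xd) + beta ||grad x||^2 by plain
    gradient descent (step size is an assumption; no convergence checks).
    """
    x = x_direct.copy()
    for _ in range(n_iter):
        d = x - x_direct
        # Data-term gradient: 2 C^-1 (x - x_direct), applied pixel-wise.
        g = 2.0 * np.einsum('ijhw,jhw->ihw', cov_inv, d)
        # Smoothness-term gradient: -2 beta * Laplacian, per material.
        for m in range(x.shape[0]):
            lap = (np.roll(x[m], 1, 0) + np.roll(x[m], -1, 0) +
                   np.roll(x[m], 1, 1) + np.roll(x[m], -1, 1) - 4.0 * x[m])
            g[m] -= 2.0 * beta * lap
        x -= step * g
    return x
```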
Blur-resistant Perimetric Stimuli
Horner, Douglas G.; Dul, Mitchell W.; Swanson, William H.; Liu, Tiffany; Tran, Irene
2013-01-01
Purpose To develop perimetric stimuli which are resistant to the effects of peripheral defocus. Methods One eye each was tested on subjects free of eye disease. Experiment 1 assessed spatial frequency, testing 12 subjects at eccentricities from 2° to 7°, using blur levels from 0 D to 3 D for two (Gabor) stimuli (spatial standard deviation (SD) = 0.5°, spatial frequencies of 0.5 and 1.0 cpd). Experiment 2 assessed stimulus size, testing 12 subjects at eccentricities from 4° to 7°, using blur levels 0 D to 6 D, for two Gaussians with SDs of 0.5° and 0.25° and a 0.5 cpd Gabor with SD of 0.5°. Experiment 3 tested 13 subjects at eccentricities from fixation to 27°, using blur levels 0 D to 6 D, for Gabor stimuli at 56 locations; the spatial frequency ranged from 0.14 to 0.50 cpd with location, and SD was scaled accordingly. Results In experiment 1, blur by 3 D caused a small decline in log contrast sensitivity (CS) for the 0.5 cpd stimulus (mean ± SE = −0.09 ± 0.08 log unit) and a larger (t = 7.7, p <0.0001) decline for the 1.0 cpd stimulus (0.37 ± 0.13 log unit). In experiment 2, blur by 6 D caused minimal decline for the larger Gaussian, by −0.17 ± 0.16 log unit, and larger (t >4.5, p < 0.001) declines for the smaller Gaussian (−0.33 ± 0.16 log unit) and the Gabor (−0.36 ± 0.18 log unit). In experiment 3, blur by 6 D caused declines by 0.27 ± 0.05 log unit for eccentricities from 0° to 10°, by 0.20 ± 0.04 log unit for eccentricities from 10° to 20° and 0.13 ± 0.03 log unit for eccentricities from 20°–27°. Conclusions Experiments 1 & 2 allowed us to design stimuli for Experiment 3 that were resistant to effects of peripheral defocus. PMID:23584488
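For concreteness, a Gabor stimulus of the kind used in these experiments (a cosine grating windowed by a Gaussian envelope) can be generated as follows; the pixels-per-degree scale is an assumption of the sketch, not a value from the study.

```python
import numpy as np

def gabor_patch(size_deg=4.0, px_per_deg=60, sd_deg=0.5, sf_cpd=0.5,
                contrast=1.0, phase=0.0):
    """Cosine grating (sf_cpd cycles/deg) under a Gaussian envelope of
    spatial standard deviation sd_deg, on a mean luminance of 0.5."""
    n = int(size_deg * px_per_deg)
    ax = (np.arange(n) - n / 2.0) / px_per_deg        # coordinates in degrees
    x, y = np.meshgrid(ax, ax)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sd_deg ** 2))
    grating = np.cos(2.0 * np.pi * sf_cpd * x + phase)
    return 0.5 + 0.5 * contrast * envelope * grating

# e.g. the 0.5 cpd, SD = 0.5 degree stimulus used in the experiments:
# stim = gabor_patch(sd_deg=0.5, sf_cpd=0.5)
```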
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work has concentrated on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also affect the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method achieves comparable performance.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)
2014-01-01
A method of measuring motion blur is disclosed, comprising obtaining a moving-edge temporal profile r_1(k) of an image of a high-contrast moving edge, calculating the masked local contrast m_1(k) for r_1(k) and the masked local contrast m_2(k) for an ideal step-edge waveform r_2(k) with the same amplitude as r_1(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.
Blurring of the public/private divide: the Canadian chapter.
Flood, Colleen M; Thomas, Bryan
2010-06-01
Blurring of the public/private divide is occurring in different ways around the world, with differential effects in terms of access and equity. In Canada, one pathway towards privatization has received particular attention: duplicative private insurance, allowing those with the financial means to bypass queues in the public system. We assess recent legal and policy developments on this front, but also describe other trends towards the blurring of public and private in Canada: the reliance on mandated private insurance for pharmaceutical coverage; provincial governments' reliance on public-private partnerships to finance hospitals; and the incorporation of for-profit clinics within the public health care system.
Dynamic accommodation with simulated targets blurred with high order aberrations
Gambra, Enrique; Wang, Yinan; Yuan, Jing; Kruger, Philip B.; Marcos, Susana
2010-01-01
High order aberrations have been suggested to play a role in determining the direction of accommodation. We have explored the effect of retinal blur induced by high order aberrations on dynamic accommodation by measuring the accommodative response to sinusoidal variations in accommodative demand (1–3 D). The targets were blurred with 0.3 and 1 μm (for a 3-mm pupil) of defocus, coma, trefoil and spherical aberration. Accommodative gain decreased significantly when 1 μm of aberration was induced. We found a strong correlation between the relative accommodative gain (and phase lag) and the contrast degradation imposed on the target at relevant spatial frequencies. PMID:20600230
WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, J; Zhu, L
Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we aim to reduce noise of energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in a matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of different energy channels. Noise suppression is achieved for each energy channel via multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with energies ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan 600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT. Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
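A minimal sketch of the described filtration, with a hypothetical similarity bandwidth h: build the exponential similarity matrix per channel, average the matrices with weights inversely proportional to each channel's estimated noise variance, and multiply each channel image (as a vector) by the row-normalized result. Memory is O(N^2) in the pixel count, so this toy only suits small images.

```python
import numpy as np

def similarity_matrix(img, h):
    """Exponential material similarity between every pair of pixels."""
    v = img.ravel()[:, None]
    return np.exp(-((v - v.T) ** 2) / h ** 2)

def energy_resolved_filter(channels, sino_noise_vars, h=20.0):
    """channels        : list of 2D CT images, one per energy channel.
    sino_noise_vars : estimated total sinogram noise variance per channel.
    Returns the noise-suppressed channel images."""
    w = 1.0 / np.asarray(sino_noise_vars, dtype=float)
    w /= w.sum()                                   # inverse-variance weights
    S = sum(wi * similarity_matrix(c, h) for wi, c in zip(w, channels))
    S /= S.sum(axis=1, keepdims=True)              # rows become averaging weights
    return [(S @ c.ravel()).reshape(c.shape) for c in channels]
```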
Blind image deblurring based on trained dictionary and curvelet using sparse representation
NASA Astrophysics Data System (ADS)
Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao
2015-04-01
Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and it can result from many factors. In the imaging process, if objects move quickly in the scene or the camera moves during the exposure interval (e.g., camera shake, atmospheric turbulence), the image of the scene blurs along the direction of relative motion between the camera and the scene. Recently, the sparse representation model has been widely used in signal and image processing as an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary, learned from training image samples via the K-SVD algorithm, is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise smooth function in the image domain whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system are highly sparse, which improves robustness to noise and better satisfies the observer's visual demands. With these two priors, we construct a restoration model for blurred images and solve the resulting optimization problem with an alternating minimization technique. The experimental results show that the method preserves the texture of the original images and suppresses ringing artifacts effectively.
Reading Motivation and Reading Engagement: Clarifying Commingled Conceptions
ERIC Educational Resources Information Center
Unrau, Norman J.; Quirk, Matthew
2014-01-01
The constructs of motivation for reading and reading engagement have frequently become blurred and ambiguous in both research and discussions of practice. To address this commingling of constructs, the authors provide a concise review of the literature on motivation for reading and reading engagement and illustrate the blurring of those concepts…
The "Blur" of Federal Information and Services: Implications for University Libraries.
ERIC Educational Resources Information Center
Lippincott, Joan K.; Cheverie, Joan F.
1999-01-01
Discusses the interrelation of product content with associated services, or "blurring" (Davis and Meyer) and its relation to federal information and services. Highlights include the federal role in facilitating use of government-collected information; infrastructure and policy issues; and implications for university library reference services,…
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
Ostenson, Jason; Robison, Ryan K; Zwart, Nicholas R; Welch, E Brian
2017-09-01
Magnetic resonance fingerprinting (MRF) pulse sequences often employ spiral trajectories for data readout. Spiral k-space acquisitions are vulnerable to blurring in the spatial domain in the presence of static field off-resonance. This work describes a blurring correction algorithm for use in spiral MRF and demonstrates its effectiveness in phantom and in vivo experiments. Results show that the image quality of T1 and T2 parametric maps is improved by application of this correction. The MRF correction has a negligible effect on the concordance correlation coefficient and improves the coefficient of variation in regions of off-resonance relative to uncorrected measurements.
An iterative algorithm for L1-TV constrained regularization in image restoration
NASA Astrophysics Data System (ADS)
Chen, K.; Loli Piccolomini, E.; Zama, F.
2015-11-01
We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
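As a sketch of the data-fidelity/regularizer pairing, the penalized variant min_x ||h * x - b||_1 + lambda TV(x) can be attacked with a plain gradient loop once both nonsmooth terms are Charbonnier-smoothed. The paper instead solves a sequence of constrained problems with an automatically adapted TV bound, so the single-channel toy below, with hypothetical lambda and step size, only conveys the ingredients.

```python
import numpy as np

def l1_tv_restore(b, otf, lam=0.05, eps=1e-3, n_iter=300, step=0.5):
    """Gradient descent on sum|h*x - b| + lam*TV(x), Charbonnier-smoothed.

    b   : blurred image corrupted by impulsive noise.
    otf : FFT of the circularly embedded blur kernel, same shape as b.
    """
    x = b.copy()
    for _ in range(n_iter):
        r = np.fft.ifft2(otf * np.fft.fft2(x)).real - b
        # Gradient of the smoothed L1 data term: H^T (r / sqrt(r^2 + eps)).
        g = np.fft.ifft2(np.conj(otf) * np.fft.fft2(r / np.sqrt(r ** 2 + eps))).real
        dx = np.roll(x, -1, 1) - x
        dy = np.roll(x, -1, 0) - x
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        # Gradient of smoothed TV = minus divergence of grad(x)/|grad(x)|.
        div = (dx / mag - np.roll(dx / mag, 1, 1)) + (dy / mag - np.roll(dy / mag, 1, 0))
        x -= step * (g - lam * div)
    return x
```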
Motion-blur-compensated structural health monitoring system for tunnels at a speed of 100 km/h
NASA Astrophysics Data System (ADS)
Hayakawa, Tomohiko; Ishikawa, Masatoshi
2017-04-01
High-quality images of tunnel surfaces are necessary for visual judgment of abnormal parts. Hence, we propose a vehicle-mounted monitoring system in which motion blur is compensated by the back-and-forth motion of a galvanometer mirror that offsets the vehicle speed, prolonging the exposure time and capturing sharp images including detailed textures. In experiments with the vehicle-mounted system, we confirmed significant improvements in image quality for few-millimeter-sized ordered black-and-white stripes and cracks, by means of motion blur compensation and prolonged exposure time, under the maximum speed allowed in Japan in a standard highway tunnel.
ERIC Educational Resources Information Center
Cartun, Ashley; Penuel, William R.; West-Puckett, Stephanie
2017-01-01
In participatory cultures, the lines between producers and consumers of text are blurred, and communities emerge that are based on shared interest and peer support. Although literacy scholarship has mostly focused on youth engagement and literacy practices within online participatory cultures, scholars in the learning sciences investigate these…
Blurring the Boundaries of Public and Private Education in Brazil
ERIC Educational Resources Information Center
Akkari, Abdeljalil
2013-01-01
A typical analysis of the privatization of education in Latin America focuses on private sector development at the expense of public education. In this paper, I propose a different view that will highlight the blurring of boundaries between public and private education in Brazil. This confusion perpetuates the historical duality of the education…
Video surveillance with speckle imaging
Carrano, Carmen J [Livermore, CA; Brase, James M [Pleasanton, CA
2007-07-17
A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.
Model-based quantification of image quality
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.
1989-01-01
In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, introduces significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring
Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao
2015-01-01
On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate the particle overlaps commonly found in static images, for instance those acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in the lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. Blurred particles are first separated from the static background using a background subtraction method. Second, the point spread function is estimated using the power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm is adopted to restore the images and improve image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method, and its performance was evaluated. This study provides a new practical approach to acquiring clear images for on-line wear monitoring. PMID:25856328
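The cepstral step works because an L-pixel motion blur multiplies the spectrum by a sinc-like factor whose logarithm has dips at lag L, which appear as negative peaks in the power cepstrum. Assuming the blur direction has already been found and rotated to horizontal, a minimal version of the length estimate and the Wiener restoration reads:

```python
import numpy as np

def blur_length_from_cepstrum(img, max_len=60):
    """First strong negative peak of the power cepstrum along the
    (assumed horizontal) motion direction gives the blur length."""
    power = np.abs(np.fft.fft2(img)) ** 2
    cep = np.fft.ifft2(np.log(power + 1e-12)).real
    profile = cep[0, 1:max_len]               # row 0 holds horizontal lags
    return int(np.argmin(profile)) + 1

def wiener_restore(img, length, nsr=1e-2):
    """Wiener deconvolution with a horizontal length-pixel motion PSF."""
    psf = np.zeros_like(img, dtype=float)
    psf[0, :length] = 1.0 / length
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # nsr = noise-to-signal ratio
    return np.fft.ifft2(W * np.fft.fft2(img)).real

# sharp = wiener_restore(img, blur_length_from_cepstrum(img))
```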
Quantitative assessment of image motion blur in diffraction images of moving biological cells
NASA Astrophysics Data System (ADS)
Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua
2016-02-01
Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with the polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insights into the dependence of diffraction images on MB and allow significant improvement of rapid biological cell assays with the p-DIFC method.
An Aggregated Method for Determining Railway Defects and Obstacle Parameters
NASA Astrophysics Data System (ADS)
Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat
2018-03-01
A method combining image blur analysis and stereo vision algorithms is proposed to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles. To estimate the dependence of distance on blur, a statistical approach and logarithmic, exponential, and linear model functions are used. The statistical approach includes least-squares and least-modulus estimation. The accuracy of determining the distance to the object, its speed, and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. The method is based on the physical dependence of the determined distance to the object in the obtained image on the focal length or aperture of the lens. In the calculation of the blur spot diameter, it is assumed that blur spreads equally in all directions from a point. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained using the video detector with different settings. The article proposes and substantiates new methods, and improves existing ones, for detecting the parameters of static and moving objects of control; it also compares the results of the various methods and reports experiments. It is shown that the aggregated method gives the best approximation to the real distances.
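The physical effect underlying the blur-based distance estimate is the thin-lens relation between blur-spot diameter and object distance. Under standard thin-lens assumptions (aperture diameter A, focal length f, lens focused at distance s_f), a small sketch:

```python
def blur_circle_diameter(s, f, A, s_f):
    """Thin-lens blur-spot diameter for an object at distance s, with a
    lens of focal length f and aperture diameter A focused at s_f (all
    in the same units):  c = A * f * |s - s_f| / (s * (s_f - f))."""
    return A * f * abs(s - s_f) / (s * (s_f - f))

# Example: f = 50 mm, A = 25 mm (f/2), focused at 10 m, object at 14 m
# -> blur spot of about 0.036 mm on the sensor.
c_mm = blur_circle_diameter(14000.0, 50.0, 25.0, 10000.0)
```

Inverting this relation (given calibrated f, A, and s_f) recovers distance from a measured blur diameter, which is the quantity the paper aggregates with the stereo estimate.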
Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning
2015-01-01
Background: The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells and many other small samples, especially calcifications in vessels or in the glomerulus. In general, the scintillator should be several micrometers thick or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator also greatly reduces the efficiency of collecting photons. Methods: In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first derive equation sets relating the high-resolution image generated by the scintillator to the blurred image degraded by the defect of focus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results: By using a 20 μm thick mismatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental results showed that the proposed algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. Conclusions: The proposed method is shown to efficiently recover images degraded by the defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm merits further study. PMID:25602532
NASA Astrophysics Data System (ADS)
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-09-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
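The sinogram interpolation that helps FDK here can be as simple as linear interpolation along the angular axis. A sketch, assuming a (views x detector-channels) sinogram with evenly spaced angles over a full rotation:

```python
import numpy as np

def interpolate_views(sino, n_out):
    """Linearly interpolate a sparse-view sinogram of shape
    (n_views_in, n_detector) up to n_out evenly spaced views over a
    full rotation (axis 0 is the view angle, treated as periodic)."""
    n_in, n_det = sino.shape
    t_in = np.linspace(0.0, 1.0, n_in, endpoint=False)
    t_out = np.linspace(0.0, 1.0, n_out, endpoint=False)
    out = np.empty((n_out, n_det))
    for d in range(n_det):
        out[:, d] = np.interp(t_out, t_in, sino[:, d], period=1.0)
    return out

# e.g. fill a 41-view acquisition back up to 984 views before FDK:
# dense_sino = interpolate_views(sparse_sino, 984)
```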
NASA Astrophysics Data System (ADS)
Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan
2016-03-01
In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to the possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm in height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels, and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2 backgrounds = 120 total conditions). Based on the observer model results, the dose reduction potential of SAFIRE was computed and compared between the uniform and textured phantoms. The dose reduction potential of SAFIRE was found to be 23% based on the uniform phantom and 17% based on the textured phantom. This discrepancy demonstrates the need to consider background texture when assessing non-linear reconstruction algorithms.
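A channelized Hotelling observer itself is compact to sketch: project each ROI onto a handful of channel profiles, form the Hotelling template from the channelized class means and pooled covariance, and report a detectability index. The difference-of-Gaussians channels below stand in for whatever channel set the study used, which the abstract does not specify.

```python
import numpy as np

def dog_channels(n_px, n_ch=5, sigma0=2.0, ratio=1.67):
    """Difference-of-Gaussians channel profiles, shape (n_ch, n_px*n_px)."""
    ax = np.arange(n_px) - n_px / 2.0
    r2 = ax[:, None] ** 2 + ax[None, :] ** 2
    chans = []
    for j in range(n_ch):
        s = sigma0 * ratio ** j
        ch = np.exp(-r2 / (2.0 * (ratio * s) ** 2)) - np.exp(-r2 / (2.0 * s ** 2))
        chans.append(ch.ravel())
    return np.array(chans)

def cho_dprime(present, absent, channels):
    """Channelized Hotelling detectability d' from two stacks of ROIs,
    each of shape (n_images, n_px, n_px)."""
    vp = channels @ present.reshape(len(present), -1).T    # (n_ch, n)
    va = channels @ absent.reshape(len(absent), -1).T
    ds = vp.mean(axis=1) - va.mean(axis=1)                 # channel signal
    S = 0.5 * (np.cov(vp) + np.cov(va))                    # pooled covariance
    w = np.linalg.solve(S, ds)                             # Hotelling template
    return float(ds @ w / np.sqrt(w @ S @ w))              # = sqrt(ds' S^-1 ds)
```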
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit sparsity priors only in the spatial domain. When the CT projections suffer from severe data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, exploits sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than the dual-domain algorithms without a learned regularization model.
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
The Role of Clarity and Blur in Guiding Visual Attention in Photographs
ERIC Educational Resources Information Center
Enns, James T.; MacDonald, Sarah C.
2013-01-01
Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…
Patent Donations: Making Use of the Gift of Technology
ERIC Educational Resources Information Center
Talnack, G. Marie
2010-01-01
The lines between basic and applied research and the sectors of the U.S. economy responsible for each type have begun to blur. No better case for the blurring of these lines and the benefits of technology transfer among research institutions can be provided than the recent phenomenon of corporate patent donations to non-profit research…
The effect of monocular target blur on simulated telerobotic manipulation
NASA Technical Reports Server (NTRS)
Liu, Andrew; Stark, Lawrence
1991-01-01
A simulation involving three types of telerobotic tasks that require information about the spatial position of objects is reported. The results are similar to those of psychophysical experiments examining the effect of blur on stereoacuity, suggesting that other psychophysical results could be used to predict operator performance in other telerobotic tasks. It is demonstrated that refractive errors in the helmet-mounted stereo display system can affect performance in the three types of telerobotic tasks. The results of two sets of experiments indicate that monocular target blur of two diopters or more degrades stereo display performance to the level of monocular displays. This indicates that moderate levels of visual degradation that affect the operator's stereoacuity may eliminate the performance advantage of stereo displays.
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Combined invariants to similarity transformation and to blur using orthogonal Zernike moments
Beijing, Chen; Shu, Huazhong; Zhang, Hui; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis
2011-01-01
The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image, and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. A comparison of the proposed method with existing ones is also provided in terms of pattern recognition accuracy, template matching, and robustness to noise. Experimental results show that the proposed descriptors perform better overall. PMID:20679028
Enacting Work Space in the Flow: Sensemaking about Mobile Practices and Blurring Boundaries
ERIC Educational Resources Information Center
Davis, Loni
2013-01-01
An increasing portion of the contemporary workforce is using mobile devices to create new kinds of work-space flows characterized by emergence, liquidity, and the blurring of all kinds of boundaries. This changes the traditional notion of the term "workplace." The present study focuses on how people enact and make sense of new work space…
1. "X15 RUN UP AREA 230." A somewhat blurred, very ...
1. "X-15 RUN UP AREA 230." A somewhat blurred, very low altitude low oblique view to the northwest. This view predates construction of observation bunkers. Photo no. "14,696 58 A-AFFTC 17 NOV 58." - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA
A noncoherent optical analog image processor.
Swindell, W
1970-11-01
The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.
ERIC Educational Resources Information Center
Dann, Ruth
2014-01-01
This paper explores assessment and learning in a way that blurs their boundaries. The notion of assessment "as" learning (AaL) is offered as an aspect of formative assessment (assessment for learning). It considers how pupils self-regulate their own learning, and in so doing make complex decisions about how they use feedback and engage…
Contour sensitive saliency and depth application in image retargeting
NASA Astrophysics Data System (ADS)
Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia
2018-04-01
Image retargeting requires preserving important information with little edge distortion while increasing or decreasing the image size. The major existing content-aware methods perform well, but two problems remain: slight distortion appears at object edges, and structure distortion occurs in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, covering both image content and image structure. This paper proposes a new criterion: structure preservation in non-salient areas. From observation and image analysis, (slight) blur generally exists at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in saliency computation for balanced image retargeting results. In order to keep the structure information in non-salient areas, a salient edge map is introduced into the Seam Carving process instead of field-based saliency computation. The derivative saliency from the x- and y-directions can prevent redundant energy seams around salient objects from causing structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of the proposed algorithm.
Photoresist and stochastic modeling
NASA Astrophysics Data System (ADS)
Hansen, Steven G.
2018-01-01
Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on the blurred image log-slope (ILS) for lines/spaces and the normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected breakdown of the blurred-ILS/dose-sensitivity relationship in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.
Deblurring for spatial and temporal varying motion with optical computing
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Xue, Dongfeng; Hui, Zhao
2016-05-01
A way to estimate and remove spatially and temporally varying motion blur is proposed, based on an optical computing system. The translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The JTC system is designed to keep switching between the Cartesian coordinate system and the polar coordinate system in different time intervals with a ping-pong handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain a translation motion vector at optical computing speed. In the pong interval, the JTC system works in the polar coordinate system. Rotation motion is transformed into translation motion through the coordinate transformation, so the rotation motion vector can also be obtained from the JTC instantaneously. To deal with continuous spatially variant motion blur, submotion vectors based on the projective motion path blur model are proposed. The submotion vector model is more effective and accurate at modeling spatially variant motion blur than conventional methods. Simulation and real experiment results demonstrate its overall effectiveness.
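The coordinate-switching idea can be mimicked in software: resampled to polar coordinates, a rotation about the image center becomes a circular shift along the angle axis, which any correlator finds in one shot. The sketch below substitutes FFT phase correlation for the optical JTC and makes the usual centered-rotation assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=128, n_t=256):
    """Resample img to (radius, angle), so a rotation about the image
    center becomes a circular shift along the angle axis."""
    cy, cx = (np.asarray(img.shape, dtype=float) - 1.0) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing='ij')
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)

def rotation_angle_deg(img_a, img_b, n_t=256):
    """Rotation of img_b relative to img_a via phase correlation along
    the angular axis of the polar transforms."""
    pa, pb = to_polar(img_a, n_t=n_t), to_polar(img_b, n_t=n_t)
    X = np.fft.fft(pa, axis=1) * np.conj(np.fft.fft(pb, axis=1))
    corr = np.fft.ifft(X / (np.abs(X) + 1e-9), axis=1).real.sum(axis=0)
    return 360.0 * int(np.argmax(corr)) / n_t    # degrees, modulo 360
```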
NASA Astrophysics Data System (ADS)
Chang, Wen-Li
2010-01-01
We investigate the influence of different blurring modes on pattern recognition in a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small number of errors. Pattern recognition is an important function of information processing in the brain. Due to the heterogeneous degree distribution of a scale-free network, different blurring modes have different influences on pattern recognition at the same error level. Simulations show that, for partial recognition, the larger the loading ratio (the number of patterns to the average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large-degree) mode is largest, that of the directed (small-degree) mode is smallest, and the random mode lies between them. When the ratio of the number of stored patterns to the network size, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random, and directed (large) blurring modes, and these curves do not depend on the size of the network or the number of patterns. This phenomenon occurs only in the SFHN. These conclusions are beneficial for understanding the relation between neural network structure and brain function.
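The setup is straightforward to reproduce in miniature. The sketch below (parameters illustrative; the "directed" blurring modes targeting high- or low-degree nodes are only indicated in a comment) builds a Hopfield network on a Barabási-Albert graph, blurs a stored pattern at random, and measures the recall overlap m = (1/N) Σ_i ξ_i s_i.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, P = 1000, 3                                  # network size, stored patterns
G = nx.barabasi_albert_graph(N, 5, seed=0)      # scale-free topology, <k> ~ 10
A = nx.to_numpy_array(G)                        # adjacency mask

xi = rng.choice([-1, 1], size=(P, N))           # random stored patterns
W = A * (xi.T @ xi) / N                         # Hebbian weights on existing edges only

def blur(pattern, frac, rng):
    """'Random' blurring: flip a fraction of units. The 'directed' modes
    would instead pick the highest- or lowest-degree nodes."""
    s = pattern.copy()
    idx = rng.choice(len(s), size=int(frac * len(s)), replace=False)
    s[idx] *= -1
    return s

s = blur(xi[0], 0.2, rng)
for _ in range(20):                             # synchronous recall dynamics
    s = np.sign(W @ s + 1e-12)                  # tiny offset breaks zero ties
overlap = (xi[0] @ s) / N                       # m = (1/N) * sum_i xi_i * s_i
print(f"overlap after recall: {overlap:.3f}")
```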
Focus information is used to interpret binocular images
Hoffman, David M.; Banks, Martin S.
2011-01-01
Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The forward imaging technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for a radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the constrained conjugate gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.
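A toy version of such a forward model helps fix ideas: attenuate along rays, then blur. The sketch below is a 1D, shift-invariant simplification; the paper's source blur is space-variant (it changes with position and magnification), and the kernel widths here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def forward_radiograph(areal_density, x, source_fwhm, detector_fwhm, mu):
    """Toy 1D forward model: Beer-Lambert attenuation along rays, then source
    blur and detector blur as Gaussians (FWHM -> sigma via 2.355)."""
    dx = x[1] - x[0]
    transmission = np.exp(-mu * areal_density)            # line-integral attenuation
    s = gaussian_filter1d(transmission, source_fwhm / 2.355 / dx)
    return gaussian_filter1d(s, detector_fwhm / 2.355 / dx)
```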
Comparisons of NIF convergent ablation simulations with radiograph data.
Olson, R E; Hicks, D G; Meezan, N B; Koch, J A; Landen, O L
2012-10-01
A technique for comparing simulation results directly with radiograph data from backlit capsule implosion experiments will be discussed. Forward Abel transforms are applied to the κρ profiles of the simulation. These provide the transmission ratio (optical depth) profiles of the simulation. Gaussian and top hat blurs are applied to the simulated transmission ratio profiles in order to account for the motion blurring and imaging slit resolution of the experimental measurement. Comparisons between the simulated transmission ratios and the radiograph data lineouts are iterated until a reasonable backlighter profile is obtained. This backlighter profile is combined with the blurred, simulated transmission ratios to obtain simulated intensity profiles that can be directly compared with the radiograph data. Examples will be shown from recent convergent ablation (backlit implosion) experiments at the NIF.
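The forward step described here is easy to prototype. A numerical sketch of the forward Abel transform and the two blur types follows; grid, kernel widths, and profile names are illustrative, and the quadrature simply skips the integrable endpoint singularity.

```python
import numpy as np

def forward_abel(f, r):
    """F(y) = 2 * integral_y^R f(r) r / sqrt(r^2 - y^2) dr on the grid r,
    projecting a radial kappa*rho profile to line-of-sight optical depth."""
    F = np.zeros_like(f)
    for i, y in enumerate(r[:-1]):
        rr = r[i + 1:]
        F[i] = 2.0 * np.trapz(f[i + 1:] * rr / np.sqrt(rr**2 - y**2), rr)
    return F

def top_hat_blur(profile, width_pts):
    """Imaging-slit resolution as a moving average; the motion blur would use
    a Gaussian kernel (scipy.ndimage.gaussian_filter1d) the same way."""
    return np.convolve(profile, np.ones(width_pts) / width_pts, mode="same")

# tau = forward_abel(kappa_rho, r); blurred_T = top_hat_blur(np.exp(-tau), 7)
```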
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen
Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil-kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can be used directly for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans, ranging from simple static fields to real patient treatment plans, were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high-dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20%, were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
Registration of Large Motion Blurred CMOS Images
2017-08-28
Final Report AFRL-AFOSR-JP-TR-2017-0066. Ambasamudram Rajagopalan (raju@ee.iitm.ac.in), Indian Institute of Technology (IIT) Madras, Sardar Patel Road, Chennai 600036, India.
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
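The frame-averaging degradation studied here is simple to emulate offline. A sketch follows; the array layout and the toy correlation tracker are assumptions (the study used a previously validated algorithm, not this one).

```python
import numpy as np
from scipy.signal import fftconvolve

def average_frames(frames, n):
    """Emulate a lower acquisition rate by block-averaging n consecutive
    frames of a (T, H, W) cine sequence."""
    m = len(frames) // n
    return frames[:m * n].reshape(m, n, *frames.shape[1:]).mean(axis=1)

def track_by_correlation(frame, template):
    """Toy tracker: cross-correlation peak gives the target position."""
    t = template - template.mean()
    corr = fftconvolve(frame - frame.mean(), t[::-1, ::-1], mode="same")
    return np.unravel_index(np.argmax(corr), corr.shape)
```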
Postural stability changes in the elderly with cataract simulation and refractive blur.
Anand, Vijay; Buckley, John G; Scally, Andy; Elliott, David B
2003-11-01
To determine the influence of cataractous and refractive blur on postural stability and limb-load asymmetry (LLA) and to establish how postural stability changes with the spatial frequency and contrast of the visual stimulus. Thirteen elderly subjects (mean age, 70.76 +/- 4.14 [SD] years) with no history of falls and normal vision were recruited. Postural stability was determined as the root mean square (RMS) of the center of pressure (COP) signal in the anterior-posterior (A-P) and medial-lateral directions, and LLA was determined as the ratio of the average body weight placed on the more-loaded limb to the less-loaded limb, recorded during a 30-second period. Data were collected under normal standing conditions and with somatosensory system input disrupted. Measurements were repeated with four visual targets with high (8 cyc/deg) or low (2 cyc/deg) spatial frequency and high (Weber contrast, approximately 95%) or low (Weber contrast, approximately 25%) contrast. Postural stability was measured under conditions of binocular refractive blur of 0, 1, 2, 4, and 8 D and with cataract simulation. The data were analyzed in a population-averaged linear model. The cataract simulation caused significant increases in postural instability equivalent to that caused by 8-D blur conditions, and its effect was greater when the input from the somatosensory system was disrupted. High spatial frequency targets increased postural instability. Refractive blur, cataract simulation, or eye closure had no effect on LLA. Findings indicate that cataractous and refractive blur increase postural instability, and show why the elderly, many of whom have poor vision along with musculoskeletal and central nervous system degeneration, are at greater risk of falling. Findings also highlight that changes in contrast sensitivity rather than resolution changes are responsible for increasing postural instability. Providing low spatial frequency information in certain environments may be useful in maintaining postural stability. Correcting visual impairment caused by uncorrected refractive error and cataracts could be a useful intervention strategy to help prevent falls and fall-related injuries in the elderly.
Accommodation and vergence response gains to different near cues characterize specific esotropias.
Horwood, Anna M; Riddell, Patricia M
2013-09-01
To describe preliminary findings of how the profile of the use of blur, disparity, and proximal cues varies between non-strabismic groups and those with different types of esotropia. This was a case control study. A remote haploscopic photorefractor measured simultaneous convergence and accommodation to a range of targets containing all combinations of binocular disparity, blur, and proximal (looming) cues. Thirteen constant esotropes, 16 fully accommodative esotropes, and 8 convergence excess esotropes were compared with age- and refractive error-matched controls and 27 young adult emmetropic controls. All wore full refractive correction if not emmetropic. Response AC/A and CA/C ratios were also assessed. Cue use differed between the groups. Even esotropes with constant suppression and no binocular vision (BV) responded to disparity cues. The constant esotropes with weak BV showed trends for more stable responses and better vergence and accommodation than those without any BV. The accommodative esotropes made less use of disparity cues to drive accommodation (p = 0.04) and more use of blur to drive vergence (p = 0.008) than controls. All esotropic groups failed to show the strong bias for better responses to disparity cues found in the controls, with convergence excess esotropes favoring blur cues. AC/A and CA/C ratios existed in an inverse relationship in the different groups. Accommodative lag of > 1.0 D at 33 cm was common (46%) in the pooled esotropia groups compared with 11% in typical children (p = 0.05). Esotropic children use near cues differently from matched non-esotropic children in ways characteristic of their deviations. Relatively higher weighting for blur cues was found in accommodative esotropia compared to matched controls.
Dynamic accommodation responses following adaptation to defocus.
Cufflin, Matthew P; Mallen, Edward A H
2008-10-01
Adaptation to defocus is known to influence subjective sensitivity to blur in both emmetropes and myopes. Blur is a major contributing factor in the closed-loop dynamic accommodation response. Previous investigations have examined the magnitude of the accommodation response following blur adaptation. We have investigated whether a period of blur adaptation influences the dynamic accommodation response to step and sinusoidal changes in target vergence. Eighteen subjects (six emmetropes, six early-onset myopes, and six late-onset myopes) underwent 30 min of adaptation to 0.00 D (control), +1.00 D, or +3.00 D myopic defocus. Following this adaptation period, accommodation responses to a 2.00 D step change and a 2.00 D sinusoidal change (0.2 Hz) in target vergence were recorded continuously using an autorefractor. Adaptation to defocus failed to influence accommodation latency times, but did influence response times to a step change in target vergence. Adaptation to both +1.00 and +3.00 D induced significant increases in response times (p = 0.002 and p = 0.012, respectively), and adaptation to +3.00 D increased the change in accommodation response magnitude (p = 0.014) for a 2.00 D step change in demand. Blur adaptation also significantly increased the peak-to-peak phase lag for accommodation responses to a sinusoidally oscillating target, although it failed to influence accommodation gain. These changes in accommodative response were equivalent across all refractive groups. Adaptation to a degraded stimulus causes an increased level of accommodation for dynamic targets moving towards an observer and increases response times and phase lags. It is suggested that the contrast constancy theory may explain these changes in dynamic behavior.
Aust, Ulrike; Braunöder, Elisabeth
2015-02-01
The present experiment investigated pigeons' and humans' processing styles, local or global, in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the two untrained presentation modes. Humans outperformed pigeons in learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Inoue, Makoto; Noda, Toru; Mihashi, Toshifumi; Ohnuma, Kazuhiko; Bissen-Miyajima, Hiroko; Hirakata, Akito
2011-04-01
To evaluate the quality of the image of a grating target placed in a model eye viewed through multifocal intraocular lenses (IOLs). Laboratory investigation. Refractive (NXG1 or PY60MV) or diffractive (ZM900 or SA60D3) multifocal IOLs were placed in a fluid-filled model eye with human corneal aberrations. A United States Air Force resolution target was placed on the posterior surface of the model eye. A flat contact lens or a wide-field contact lens was placed on the cornea. The contrasts of the gratings were evaluated under endoillumination and compared with those obtained through a monofocal IOL. The grating images were clear when viewed through the flat contact lens and through the central far-vision zone of the NXG1 and PY60MV, although those through the near-vision zone were blurred and doubled. The images observed through the central area of the ZM900 with the flat contact lens were slightly defocused, but the images in the periphery were very blurred. The contrast decreased significantly at low spatial frequencies (P<.001). The images observed through the central diffractive zone of the SA60D3 were slightly blurred, although the images in the periphery were clearer than those of the ZM900. The images were less blurred in all of the refractive and diffractive IOLs with the wide-field contact lens. Refractive and diffractive multifocal IOLs blur the grating target, but less so with the wide-angle viewing system. The peripheral multifocal optical zone may have more influence on image quality with the contact lens system. Copyright © 2011 Elsevier Inc. All rights reserved.
Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita
2018-06-01
Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the 3D real-time evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly for applications on nanocatalysts. In order to accomplish such rapid acquisition of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm on blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e. gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
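The algorithm name spells out its ingredients, which can be sketched directly: a SIRT gradient step, a TV proximal step, and FISTA momentum. The miniature below is assumption-laden (dense projection matrix A, Chambolle TV denoising as the proximal operator, uniform weights), a sketch of the idea rather than the authors' implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def sirt_fista_tv(A, b, shape, n_iter=50, tv_weight=0.01):
    """SIRT-style iterations with FISTA momentum and a TV proximal step.
    A is a (rays x voxels) projection matrix, b the measured projections."""
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # column sums (voxel weights)
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # row sums (ray weights)
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = C * (A.T @ (R * (A @ z - b)))     # SIRT update direction
        x_new = denoise_tv_chambolle((z - grad).reshape(shape),
                                     weight=tv_weight).ravel()  # TV prox
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)             # FISTA momentum
        x, t = x_new, t_new
    return x.reshape(shape)
```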
Anand, Vijay; Buckley, John G; Scally, Andy; Elliott, David B
2003-07-01
To determine the influence of refractive blur on postural stability during somatosensory and vestibular system perturbation and dual tasking. Fifteen healthy, elderly subjects (mean age, 71 +/- 5 years), who had no history of falls and had normal vision, were recruited. Postural stability during standing was assessed using a force platform, and was determined as the root mean square (RMS) of the center of pressure (COP) signal in the anterior-posterior (A-P) and medial-lateral directions collected over a 30-second period. Data were collected under normal standing conditions and with somatosensory and vestibular system perturbations. Measurements were repeated with an additional physical and/or cognitive task. Postural stability was measured under conditions of binocular refractive blur of 0, 1, 2, 4, and 8 D and with eyes closed. The data were analyzed with a population-averaged linear model. The greatest increases in postural instability were due to disruptions of the somatosensory and vestibular systems. Increasing refractive blur caused increasing postural instability, and its effect was greater when the input from the other sensory systems was disrupted. Performing an additional cognitive and physical task increased A-P RMS COP further. All these detrimental effects on postural stability were cumulative. The findings highlight the multifactorial nature of postural stability and indicate why the elderly, many of whom have poor vision and musculoskeletal and central nervous system degeneration, are at greater risk of falling. The findings also highlight that standing instability in both normal and perturbed conditions was significantly increased with refractive blur. Correcting visual impairment caused by uncorrected refractive error could be a useful intervention strategy to help prevent falls and fall-related injuries in the elderly.
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the images obtained suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and dark channel, and the original clear image is recovered by Wiener filtering using the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
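The final recovery step named here, Wiener filtering via the FFT, is standard and easy to sketch. The APSF kernel itself would come from the paper's multiple-scattering estimation, which is not reproduced; the constant k stands in for the noise-to-signal power ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener filter: X = conj(H) * Y / (|H|^2 + k)."""
    pad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at origin
    H = np.fft.fft2(pad)
    X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(X))
```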
The interactive processes of accommodation and vergence.
Semmlow, J L; Bérard, P V; Vercher, J L; Putteman, A; Gauthier, G M
1994-01-01
A near target generates two different, though related stimuli: image disparity and image blur. Fixation of that near target evokes three motor responses: the so-called oculomotor "near triad". It has long been known that both disparity and blur stimuli are each capable of independently generating all three responses, and a recent theory of near triad control (the Dual Interactive Theory) describes how these stimulus components normally work together in the aid of near vision. However, this theory also indicates that when the system becomes unbalanced, as in high AC/A ratios of some accommodative esotropes, the two components will become antagonistic. In this situation, the interaction between the blur and disparity driven components exaggerates the imbalance created in the vergence motor output. Conversely, there is enhanced restoration when the AC/A ratio is effectively reduced surgically.
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
NASA Astrophysics Data System (ADS)
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
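The core of the uncertainty-estimation idea, bootstrap realisations of list-mode data histogrammed into sinograms, fits in a few lines of NumPy. The data layout assumed below (a flattened sinogram bin index per detected event) and the sinogram dimensions are illustrative, and the GPU streaming that makes this fast in practice is omitted.

```python
import numpy as np

def bootstrap_sinograms(event_bins, sino_shape, n_boot=20, rng=None):
    """Resample list-mode events with replacement and histogram each bootstrap
    realisation into a sinogram; per-bin variance across realisations
    estimates the uncertainty propagated into reconstruction."""
    rng = rng or np.random.default_rng()
    n, nbins = len(event_bins), int(np.prod(sino_shape))
    sinos = np.empty((n_boot, nbins))
    for i in range(n_boot):
        sample = event_bins[rng.integers(0, n, size=n)]   # resample events
        sinos[i] = np.bincount(sample, minlength=nbins)
    return sinos.reshape(n_boot, *sino_shape)

# var_map = bootstrap_sinograms(bins, (344, 252)).var(axis=0)  # illustrative dims
```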
NASA Astrophysics Data System (ADS)
Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong
2008-10-01
Computed tomography colonography (CTC), or CT-based virtual colonoscopy (VC), is an emerging tool for detection of colonic polyps. Compared to conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition, and this radiation is a major concern for screening applications of CTC. In this work, we performed a simulation study to demonstrate a possible ultra-low-dose CT technique for VC. The ultra-low-dose abdominal CT images were simulated by adding noise to the sinograms of patient CTC images acquired with normal-dose scans at the 100 mAs level. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm for the ultra-low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally tagged residual stool and fluid. With KL-PWLS noise reduction, the colon lumen can be constructed successfully and colonic polyps can be detected at an ultra-low-dose level below 50 mAs. Polyps can be detected more easily with KL-PWLS noise reduction than with conventional noise filters, such as the Hanning filter. These promising results indicate the feasibility of an ultra-low-dose CTC pipeline for colon screening with less stressful bowel preparation by fecal tagging with oral contrast.
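The dose-reduction simulation described, injecting noise into normal-dose sinograms, can be sketched as follows, assuming a simple monoenergetic Poisson counting model with an illustrative incident photon count and no electronic noise.

```python
import numpy as np

def simulate_low_dose(sinogram, I0_normal, mAs_ratio, rng=None):
    """Inject Poisson noise into a normal-dose sinogram of line integrals to
    mimic a lower tube-current scan (e.g. mAs_ratio = 50/100)."""
    rng = rng or np.random.default_rng()
    I0 = I0_normal * mAs_ratio                   # fewer incident photons
    counts = rng.poisson(I0 * np.exp(-sinogram))
    return -np.log(np.maximum(counts, 1) / I0)   # back to noisy line integrals
```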
Nam, S B; Jeong, D W; Choo, K S; Nam, K J; Hwang, J-Y; Lee, J W; Kim, J Y; Lim, S J
2017-12-01
To compare the image quality of computed tomography angiography (CTA) reconstructed by sinogram-affirmed iterative reconstruction (SAFIRE) with that of advanced modelled iterative reconstruction (ADMIRE) in children with congenital heart disease (CHD). Thirty-one children (8.23±13.92 months) with CHD who underwent CTA were enrolled. Images were reconstructed using SAFIRE (strength 5) and ADMIRE (strength 5). Objective image qualities (attenuation, noise) were measured in the great vessels and heart chambers. Two radiologists independently calculated the contrast-to-noise ratio (CNR) by measuring the intensity and noise of the myocardial walls. Subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery were also graded by the two radiologists independently. The objective image noise of ADMIRE was significantly lower than that of SAFIRE in the right atrium, right ventricle, and myocardial wall (p<0.05); however, there were no significant differences observed in the attenuations among the four chambers and great vessels, except in the pulmonary arteries (p>0.05). The mean CNR values were 21.56±10.80 for ADMIRE and 18.21±6.98 for SAFIRE, which were significantly different (p<0.05). In addition, the diagnostic confidence of ADMIRE was significantly lower than that of SAFIRE (p<0.05), while the subjective image noise and sharpness of ADMIRE were not significantly different (p>0.05). CTA using ADMIRE was superior to SAFIRE when comparing the objective and subjective image quality in children with CHD. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
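For reference, the contrast-to-noise ratio used in such comparisons is typically computed from ROI statistics; a minimal version, assuming the generic definition rather than the authors' exact ROI protocol:

```python
import numpy as np

def cnr(roi, background):
    """Generic CNR from two image ROIs (2D arrays): absolute mean
    difference over the background noise."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)
```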
A simulation of orientation dependent, global changes in camera sensitivity in ECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bieszk, J.A.; Hawman, E.G.; Malmin, R.E.
1984-01-01
ECT promises the ability to: 1) observe radioisotope distributions in a patient without the summation of overlying activity that reduces contrast, and 2) measure these distributions quantitatively to assess organ function further and more accurately. Ideally, camera-based ECT systems should have a performance that is independent of camera orientation or gantry angle. This study is concerned with ECT quantitation errors that can arise from angle-dependent variations of camera sensitivity. Using simulated phantoms representative of heart and liver sections, the effects of sensitivity changes on reconstructed images were assessed both visually and quantitatively based on ROI sums. The sinogram for each test image was simulated with 128 linear samples and 180 angular views. The global orientation-dependent sensitivity was modelled by applying an angular sensitivity dependence to the sinograms of the test images. Four sensitivity variations were studied: amplitudes of 0% (as a reference), 5%, 10%, and 25% with a cos θ dependence, as well as a cos 2θ dependence with a 5% amplitude. Simulations were done with and without Poisson noise to: 1) determine trends in the quantitative effects as a function of the magnitude of the variation, and 2) see how these effects are manifested in studies having statistics comparable to clinical cases. For the most realistic sensitivity variation (cos θ, 5% amplitude), the ROIs chosen in the present work indicated changes of <0.5% in the noiseless case and <5% for the case with Poisson noise. The effects of statistics appear to dominate any effects due to global, sinusoidal, orientation-dependent sensitivity changes in the cases studied.
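This kind of study is easy to reproduce with a parallel-beam stand-in: modulate the simulated sinogram view-by-view and reconstruct. The sketch uses scikit-image's radon/iradon and a Shepp-Logan phantom in place of the heart/liver phantoms; attenuation and collimator effects are ignored.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                      # stand-in test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta)                   # (detector, view) sinogram

for ampl in (0.0, 0.05, 0.10, 0.25):
    gain = 1.0 + ampl * np.cos(np.deg2rad(theta))    # per-view sensitivity
    recon = iradon(sino * gain[None, :], theta=theta, filter_name="ramp")
    # compare ROI sums of recon against the ampl = 0 reference here
```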
Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y
2018-02-01
To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source CT (SSCT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with FBP (100 kVp-F) and SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load, and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21% lower, respectively, than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p<0.01). Contrast enhancement, objective image noise, contrast-to-noise ratio, and visual score of 100 kVp-S were similar to or better than those of the 120 kVp protocol. Compared with the 120 kVp protocol, the combined use of 100 kVp and SAFIRE in WBCT for oncology assessment with an SSCT facilitated substantial reduction in CM and radiation dose while maintaining image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Myelin Associated Inhibitors: A Link Between Injury-Induced and Experience-Dependent Plasticity
Akbik, Feras; Cafferty, William B. J.; Strittmatter, Stephen M.
2011-01-01
In the adult, both neurologic recovery and anatomical growth after a CNS injury are limited. Two classes of growth inhibitors, myelin associated inhibitors (MAIs) and extracellular matrix associated inhibitors, limit both functional recovery and anatomical rearrangements in animal models of spinal cord injury. Here we focus on how MAIs limit a wide spectrum of growth that includes regeneration, sprouting, and plasticity in both the intact and lesioned CNS. Three classic myelin associated inhibitors, Nogo-A, MAG, and OMgp, signal through their common receptors, Nogo-66 Receptor-1 (NgR1) and Paired-Immunoglobulin-like-Receptor-1 (PirB), to regulate cytoskeletal dynamics and inhibit growth. Initially described as inhibitors of axonal regeneration, subsequent work has demonstrated that MAIs also limit activity and experience-dependent plasticity in the intact, adult CNS. MAIs therefore represent a point of convergence for plasticity that limits anatomical rearrangements regardless of the inciting stimulus, blurring the distinction between injury studies and more "basic" plasticity studies. PMID:21699896
ERIC Educational Resources Information Center
Williams, Sandra; Willis, Rachel
2017-01-01
This article considers children's engagement with the "Ologies", a series of postmodern texts that blur the boundaries between fact and fiction. It follows on from a text-based analysis of the series published in this journal (22(3) 2015). Data collected from 9-12 year olds demonstrate how actual readers took up the invitation offered by…
Camera Geolocation From Mountain Images
2015-09-17
Skylines can often be reliably extracted from query images; however, in real-life scenarios the skyline in a query image may be blurred or invisible due to occlusions. Information extracted from multiple mountain ridges is therefore critical to reliably geolocating challenging real-world query images with blurred or invisible mountain skylines.
ERIC Educational Resources Information Center
Plath, Hans-Eberhard
In Germany and elsewhere, the literature on current and future work requirements rarely discusses the effects of globalization, internationalization, computerization, and other factors from the point of view of workers. Some have suggested that a blurring of limits will be one of the main changes in work in the future. This blurring will involve…
Registration of Large Motion Blurred Images
2016-05-09
Registration of images captured by moving cameras must handle the dynamics of the capturing system, for example, a drone. CMOS sensors employed in these cameras produce two types of distortions: blur in the captured image when there is camera motion during exposure, and rolling-shutter effects, since contemporary CMOS sensors employ an electronic rolling shutter (RS).
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2008-04-01
Reconstruction methods were assessed by physical measurements: impulse response analysis, modulation transfer function (MTF), and noise power spectrum (NPS). Impulse-added projection images, containing a simulated impulse and the 1/r² shading difference but no other system blur or noise, were used for evaluation. Point-by-point backprojection (BP) reduced blur and suppressed high-frequency noise; point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring.
Filtering, Coding, and Compression with Malvar Wavelets
1993-12-01
Malvar wavelets apply to the speech coding techniques being investigated by the military (38). Space imagery often requires adaptive restoration to deblur out-of-focus and blurred images: "given a blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). Seismic recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event; the term deconvolution indicates the removal of this convolution.
MER Surface Phase; Blurring the Line Between Fault Protection and What is Supposed to Happen
NASA Technical Reports Server (NTRS)
Reeves, Glenn E.
2008-01-01
An assessment of the limitations of communication with the MER rovers and how such constraints drove the system design, flight software, and fault protection architecture, blurring the line between traditional fault protection and expected nominal behavior, and requiring the most novel autonomous and semi-autonomous elements of the vehicle software, including communication, surface mobility, attitude knowledge acquisition, fault protection, and the activity arbitration service.
Horwood, Anna M; Riddell, Patricia M
2015-01-01
Accurate co-ordination of accommodation and convergence is necessary to view near objects and develop fine motor co-ordination. We used a remote haploscopic videorefraction paradigm to measure longitudinal changes in simultaneous ocular accommodation and vergence to targets at different depths, and to all combinations of blur, binocular disparity, and change-in-size (“proximity”) cues. Infants were followed longitudinally and compared to older children and young adults, with the prediction that sensitivity to different cues would change during development. Mean infant responses to the most naturalistic condition were similar to those of adults from 6-7 weeks (accommodation) and 8-9 weeks (vergence). Proximity cues influenced responses most in infants less than 14 weeks of age, but sensitivity declined thereafter. Between 12-28 weeks of age infants were equally responsive to all three cues, while in older children and adults manipulation of disparity resulted in the greatest changes in response. Despite rapid development of visual acuity (thus increasing availability of blur cues), responses to blur were stable throughout development. Our results suggest that during much of infancy, vergence and accommodation responses are not dependent on the development of specific depth cues, but make use of any cues available to drive appropriate changes in response. PMID:24344547
Horwood, Anna M; Riddell, Patricia M
2009-01-01
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
Digital Image Quality And Interpretability: Database And Hardcopy Studies
NASA Astrophysics Data System (ADS)
Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.
1982-02-01
Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096 x 4096 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.
Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance
NASA Astrophysics Data System (ADS)
Burke, James J.; Snyder, Harry L.
1981-12-01
Two hundred-fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096 x 4096 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEI's) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.
Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing
2015-01-01
Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350
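Once the motion direction and scale have been inferred from the border widths, restoration reduces to deconvolution with a linear motion kernel. A sketch of building such a kernel follows (the border-measurement step is not shown, and the discrete rasterisation is an assumption); the result can be fed to a Wiener or Lucy-Richardson deconvolver such as the one sketched after the UAV abstract above.

```python
import numpy as np

def linear_motion_psf(length, angle_deg, size):
    """Linear motion-blur kernel for an estimated direction (degrees) and
    scale (pixels), rasterised onto a size x size grid and normalised."""
    psf = np.zeros((size, size))
    c = (size - 1) / 2
    a = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        y = int(round(c + t * np.sin(a)))
        x = int(round(c + t * np.cos(a)))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()
```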
Blurring of emotional and non-emotional memories by taxing working memory during recall.
van den Hout, Marcel A; Eidhof, Marloes B; Verboom, Jesse; Littel, Marianne; Engelhard, Iris M
2014-01-01
Memories that are recalled while working memory (WM) is taxed, e.g., by making eye movements (EM), become blurred during the recall + EM and at later recall, without EM. This may help to explain the effects of Eye Movement Desensitisation and Reprocessing (EMDR) in the treatment of post-traumatic stress disorder (PTSD), in which patients make EM during trauma recall. Earlier experimental studies on recall + EM have focused on emotional memories. WM theory suggests that recall + EM is superior to recall only, but is silent about effects of memory emotionality. Based on the emotion and memory literature, we examined whether recall + EM has superior effects in blurring emotional memories relative to neutral memories. Healthy volunteers recalled negative or neutral memories, matched for vividness, while visually tracking a dot that moved horizontally ("recall + EM") or remained stationary ("recall only"). Compared to a pre-test, a post-test (without concentrating on the dot) replicated earlier findings: negative memories are rated as less vivid after "recall + EM" but not after "recall only". This was not found for neutral memories. Emotional memories are more taxing than neutral memories, which may explain the findings. Alternatively, transient arousal induced by recall of aversive memories may promote reconsolidation of the blurred memory image that is provoked by EM.
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
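The splitting idea can be sketched compactly if the paper's edge-preserving regularizers are swapped for a quadratic gradient penalty, so every subproblem stays closed-form: the blur stays circulant (FFT-diagonal), the mask enters only an elementwise update, and no CG solver is needed. This is a simplified variant for illustration, not the authors' algorithm.

```python
import numpy as np

def deblur_masked(y, mask, psf, lam=1e-2, rho=1.0, n_iter=50):
    """ADMM-style sketch: data term ||M(h*x) - y||^2 with circulant blur h and
    masking M (valid pixels) to suppress wraparound boundary artifacts."""
    shape = y.shape
    pad = np.zeros(shape); kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)                              # circulant blur, FFT-diagonal
    dx = np.zeros(shape); dx[0, 0], dx[0, -1] = 1.0, -1.0
    dy = np.zeros(shape); dy[0, 0], dy[-1, 0] = 1.0, -1.0
    D2 = np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2
    m = mask.astype(float)
    x, eta = y.copy(), np.zeros(shape)
    for _ in range(n_iter):
        Bx = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
        u = (m * y + rho * (Bx - eta)) / (m + rho)    # masked LS, elementwise
        x = np.real(np.fft.ifft2(rho * np.conj(H) * np.fft.fft2(u + eta)
                                 / (rho * np.abs(H)**2 + 2.0 * lam * D2)))
        Bx = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
        eta += u - Bx                                 # scaled dual update
    return x
```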
Framework for Processing Videos in the Presence of Spatially Varying Motion Blur
2014-04-18
The report addresses the related problems of image restoration, registration, dehazing, and super-resolution, all in the presence of spatially varying motion blur; real-time operation would be especially valuable for applications involving aerial surveillance. Related work includes a unified approach to super-resolution and multichannel blind deconvolution (Trans. Img. Proc., vol. 16, no. 9, pp. 2322-2332, Sept. 2007).
Laser Illuminated Imaging: Multiframe Beam Tilt Tracking and Deconvolution Algorithm
2013-03-01
Light returned from the target interacts with atmospheric turbulence, producing tilt, blur and other higher-order distortions in the returned image. Using a Fourier-shift-based multiframe processing strategy, distortions of the target image such as speckle, blurring and defocus are mitigated. The scenario considered is propagation of a beam through a turbulent atmosphere with a beam width at the target smaller than the field of view (FOV) of the receiver optics.
Image Restoration by Spline Functions
1976-08-31
The report treats the restoration of images degraded by imperfect imaging circumstances such as defocus, motion blur, optical aberrations, and noise, including over-determined models of motion degradation, singular-value analysis for motion blur, models for film-grain noise, and filtering of signal-dependent noisy images and of image lines degraded by film-grain noise.
Post-Processing of Low Dose Mammography Images
2002-05-01
Wiener filtering provides "a method of restoring images in the presence of blur as well as noise" (12:276); its combined deblurring and denoising characteristics make it suitable here. The signal-dependent scatter noise in the mammography image can be modeled as blur, while the remaining noise is treated as signal-independent, so a Wiener filter with deblurring characteristics can be applied. A median filter is additionally used to eradicate noise impulses with high pixel values (2:7).
Are nurses blurring their identity by extending or delegating roles?
Harmer, Victoria
Nursing may be going through an identity crisis. The Department of Health commissioned research identifying where nurses stand within society (Maben and Griffiths, 2008), 'with the stimulus for the report being the sense that nursing had lost its way' (Maben and Griffiths, 2008). The professional identity of nursing appears to be unclear, an area of confusion and conflicting opinions. This, combined with the extension of roles that many nurses have accepted in recent years, may have allowed a blurring of boundaries between healthcare professions, resulting in a blurring of the professional identity of the nurse. Perhaps, while nursing was busy extending, expanding or delegating more traditional nursing duties, it lost its way. To this end, this article concentrates on identifying what professional identity means, then investigates the changing roles and role extensions nurses are undertaking, referring to relevant literature.
A Comparative Study of Different Deblurring Methods Using Filters
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Kavitha, S.
2011-12-01
This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, namely the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given knowledge of the point spread function (PSF) that corrupted the blurred image. The techniques are applied to a scanned image of a seven-month-old fetus in the womb and compared with one another in order to choose the best technique for image restoration. The paper also studies restoration of the blurred image with a regularized filter (RF) and no prior information about the PSF, applying the same four techniques after an initial guess of the PSF. The number of iterations and the weight threshold used to choose the best PSF guesses for restoration are determined.
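All four restorations are available off the shelf. The sketch below reproduces a known-PSF comparison with scikit-image on a toy image; the Gaussian PSF width is an assumption, and `unsupervised_wiener` stands in for a blind method, since scikit-image ships no blind-deconvolution routine.

```python
# Minimal sketch of the deblurring comparison, assuming a known Gaussian PSF.
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, img_as_float, restoration

image = img_as_float(data.camera())
x = np.arange(-7, 8)
g = np.exp(-x**2 / (2 * 2.0**2))
psf = np.outer(g, g)
psf /= psf.sum()
blurred = fftconvolve(image, psf, mode='same')

wiener = restoration.wiener(blurred, psf, balance=0.05)      # Wiener / regularized filter
lucy = restoration.richardson_lucy(blurred, psf, 30)         # Lucy-Richardson deconvolution
blind, _ = restoration.unsupervised_wiener(blurred, psf)     # stand-in for blind deconvolution
```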
Deblurring in digital tomosynthesis by iterative self-layer subtraction
NASA Astrophysics Data System (ADS)
Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung
2010-04-01
Recent developments in large-area flat-panel detectors have renewed interest in tomosynthesis for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction methods suffer from a lack of sharpness in the reconstructed images because of blur artifacts caused by the superposition of out-of-plane objects. In this study, we have devised an intuitive, simple method to reduce the blur artifact based on an iterative approach. The method repeats a forward- and backward-projection procedure to estimate the blur artifact affecting the plane of interest (POI), and then subtracts it from the POI. The proposed method involves no Fourier-domain operations and hence avoids Fourier-domain artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulations and experiments. A comparative analysis with conventional methods, such as SAA and filtered backprojection, is also presented.
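The self-layer subtraction loop can be illustrated in a few lines. The sketch below is a strongly simplified single-axis-shift geometry, not the authors' implementation: per-view plane shifts are assumed known, shift-and-add builds the initial slices, and each iteration reprojects the other planes to estimate and subtract the out-of-plane blur (the 1/nz damping is an ad hoc choice).

```python
import numpy as np
from scipy.ndimage import shift

def saa(views, shifts_z):
    """Shift-and-add onto one plane: average the views after undoing that plane's shifts."""
    return np.mean([shift(v, (-s, 0)) for v, s in zip(views, shifts_z)], axis=0)

def self_layer_subtract(views, shifts, n_iter=5):
    # views: list of 2D projections; shifts[z, v]: known shift of plane z in view v
    nz, nv = shifts.shape
    slices = np.stack([saa(views, shifts[z]) for z in range(nz)])
    for _ in range(n_iter):
        for z in range(nz):
            # reproject every other plane into each view ...
            reproj = [np.sum([shift(slices[k], (shifts[k, v], 0))
                              for k in range(nz) if k != z], axis=0)
                      for v in range(nv)]
            # ... shift-and-add that out-of-plane contribution onto plane z, subtract
            blur = saa(reproj, shifts[z])
            slices[z] = np.clip(slices[z] - blur / nz, 0, None)  # ad hoc damping
    return slices
```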
NASA Astrophysics Data System (ADS)
Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry
2018-04-01
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result depends strongly on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task, using the algebraic reconstruction technique (ART) as the starting algorithm and introducing regularization, with the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested on a dataset of 9 kernels extracted from real photographs by Adobe, for which the point spread function is known. We also investigate the influence of noise on the quality of the reconstruction and how the number of projections affects the magnitude of the reconstruction error.
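A regularized ART iteration is easy to sketch. Assuming a precomputed projection matrix `A` and measured projections `b` (both hypothetical stand-ins for the paper's setup), a Kaczmarz sweep with a simple Tikhonov-style shrinkage might look like this:

```python
import numpy as np

def art_regularized(A, b, lam=0.1, n_sweeps=50, relax=0.5):
    """Kaczmarz/ART sweeps with Tikhonov-style shrinkage and PSF constraints."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):                  # one update per measured ray
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a + 1e-12) * a
        x /= (1.0 + lam)                             # shrinkage toward zero (regularization)
        x = np.clip(x, 0, None)                      # a PSF is non-negative
    return x / max(x.sum(), 1e-12)                   # and integrates to one
```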
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
Fresnel Lenses for Wide-Aperture Optical Receivers
NASA Technical Reports Server (NTRS)
Hemmati, Hamid
2004-01-01
Wide-aperture receivers for free-space optical communication systems would utilize Fresnel lenses instead of conventional telescope lenses, according to a proposal. Fresnel lenses weigh and cost much less than conventional lenses of equal aperture width. Plastic Fresnel lenses are commercially available in diameters up to 5 m, large enough to satisfy aperture requirements of the order of meters for collecting sufficient light in typical long-distance free-space optical communication systems. Fresnel lenses are not yet suitable for high-quality diffraction-limited imaging, especially in polychromatic light. However, optical communication systems utilize monochromatic light, and there is no requirement for high-quality imaging; instead, the basic requirement for an optical receiver is to collect the incoming monochromatic light over a wide aperture and concentrate it onto a photodetector. Because of lens aberrations and diffraction, the light passing through any lens is focused to a blur circle rather than to a point. Calculations for some representative cases of wide-aperture non-diffraction-limited Fresnel lenses have shown that it should be possible to attain blur-circle diameters of less than 2 mm. Preferably, the blur-circle diameter should match the width of the photodetector; for most high-bandwidth communication applications, the required photodetector diameter would be about 1 mm. In a less preferable case in which the blur circle was wider than a single photodetector, the blur circle could be occupied by an array of photodetectors. As an alternative to using a single large Fresnel lens, one could use an array of somewhat smaller lenses to synthesize the equivalent aperture area. Such a configuration might be preferable in a case in which a single Fresnel lens of the requisite large size would be impractical to manufacture and the blur circle could not be made small enough. For example, one could construct a square array of four 5-m-diameter Fresnel lenses to obtain the same light-collecting area as that of a single 10-m-diameter lens. In that case, the light collected by each Fresnel lens could be collimated, the collimated beams from the four Fresnel lenses could be reflected onto a common off-axis paraboloidal reflector, and the paraboloidal reflector would focus the four beams onto a single photodetector. Alternatively, the detected signal from a detector behind each lens could be digitized before the signals are summed.
2007-05-01
In general, off-axis imaging can cause distortion and astigmatism in the image if proper precautions are not taken. In this case, the lens selection introduced astigmatism into the optical system, which takes the form of a blurring in each image directed away from the optical axis. This blurring is non-trivial and makes particle identification nearly impossible in images of particles from two of the off-axis cameras.
United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2
1983-12-01
Several restoration filters are described, including an inverse filter based on the degradation model of Eq. (2) and the criterion of minimizing the norm (i.e., power) of the error. The filters are analyzed and compared on the basis of their performance in machine classification under a variety of blur and noise conditions, using criteria based on various assumptions about the image models; in practice, filter performance varies with the type of image and with the blur and noise conditions.
2015-05-19
The condition was reported by U.S. Army aviators using night vision goggles (NVG) for night flights (Glick and Moser, 1974) and was initially, and incorrectly, called "brown eye syndrome". A questionnaire recorded the frequency (never, rarely, occasionally, often) of eye irritation, eye pain, blurred vision, dry eye, and light sensitivity, and asked whether any of these symptoms had been experienced since the last contact lens review.
A-law/Mu-law Dynamic Range Compression Deconvolution (Preprint)
2008-02-04
The process consists of two steps: first, noise filtering via the spectrum proportionality filter, and second, signal deblurring via the inverse filter. Results are illustrated on the joint image of a motion impulse response and a noisy blurred image with signal-to-noise ratio 5, together with the gray-level image recovered using the A-law.
2011-03-04
Unprecedented levels of global travel, tourism and trade, and blurred lines of demarcation between zoonotic VBI reservoirs and human populations, increase vector exposure. Efforts were made in 2009 to enhance or establish hospital-based febrile illness surveillance platforms in Azerbaijan, Bolivia, Cambodia, Ecuador, and Georgia.
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using image pairs can help produce a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair comprises a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform was set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and is applicable to many image deblurring tasks.
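The gain-controlled residual step admits a compact approximation. In the sketch below (not the authors' code), the preliminary result is reprojected through the kernel, the residual is deconvolved with Richardson-Lucy, and a saliency-derived gain map scales it before the final sum; the positive offset is an assumed convention to keep the deconvolution input non-negative.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

def residual_deconv(base, blurred, psf, gain_map, n_iter=20, offset=1.0):
    # residual between the observed blur and the reprojected preliminary result
    resid_blurred = blurred - fftconvolve(base, psf, mode='same') + offset
    resid = restoration.richardson_lucy(np.clip(resid_blurred, 0, None), psf,
                                        n_iter, clip=False) - offset
    return base + gain_map * resid  # gain map damps ringing near edges
```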
A pilot trial of tele-ophthalmology for diagnosis of chronic blurred vision.
Tan, Johnson Choon Hwai; Poh, Eugenie Wei Ting; Srinivasan, Sanjay; Lim, Tock Han
2013-02-01
We evaluated the accuracy of tele-ophthalmology in diagnosing the major causes of chronic blurring of vision. Thirty consecutive patients attending a primary eye-care facility in Singapore (the Ang Mo Kio Polyclinic, AMKP) with the symptom of chronic blurred vision were recruited. An ophthalmic technician was trained to perform Snellen acuity; auto-refraction; intraocular pressure measurement; red-colour perimetry; video recordings of extraocular movement, cover tests and pupillary reactions; and anterior segment and fundus photography. Digital information was transmitted to a tertiary hospital in Singapore (the Tan Tock Seng Hospital) via a tele-ophthalmology system for teleconsultation with an ophthalmologist. The diagnoses were compared with face-to-face consultation by another ophthalmologist at the AMKP. A user experience questionnaire was administered at the end of the consultation. Using face-to-face consultation as the gold standard, tele-ophthalmology achieved 100% sensitivity and specificity in diagnosing media opacity (n = 29), maculopathy (n = 23) and keratopathy (n = 30) of any type; and 100% sensitivity and 92% specificity in diagnosing optic neuropathy of any type (n = 24). The majority of the patients (97%) were satisfied with the tele-ophthalmology workflow and consultation. The tele-ophthalmology system was able to detect causes of chronic blurred vision accurately. It has the potential to deliver high-accuracy diagnostic eye support to remote areas if suitably trained ophthalmic technicians are available.
Fielden, Samuel W; Meyer, Craig H
2015-02-01
The major hurdle to widespread adoption of spiral trajectories has been their poor off-resonance performance. Here we present a self-correcting spiral k-space trajectory that avoids much of the well-known spiral blurring during data acquisition. In comparison with a traditional spiral-out trajectory, the spiral-in/out trajectory has improved off-resonance performance. By combining two spiral-in/out acquisitions, one rotated 180° in k-space compared with the other, multishot spiral-in/out artifacts are eliminated. A phantom was scanned with the center frequency manually tuned 20, 40, 80, and 160 Hz off-resonance with both a spiral-out gradient echo sequence and the redundant spiral-in/out sequence. The phantom was also imaged in an oblique orientation in order to demonstrate improved concomitant gradient field performance of the sequence. Additionally, the trajectory was incorporated into a spiral turbo spin echo sequence for brain imaging. Phantom studies with manually tuned off-resonance agree well with theoretical calculations, showing that moderate off-resonance is well-corrected by this acquisition scheme. Blur due to concomitant fields is reduced, and good results are obtained in vivo. The redundant spiral-in/out trajectory results in less image blur for a given readout length than a traditional spiral-out scan, reducing the need for complex off-resonance correction algorithms. © 2014 Wiley Periodicals, Inc.
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K
2017-01-01
The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
Sterile Fluid Collections in Acute Pancreatitis: Catheter Drainage Versus Simple Aspiration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walser, Eric M.; Nealon, William H.; Marroquin, Santiago
2006-02-15
Purpose. To compare the clinical outcome of needle aspiration versus percutaneous catheter drainage of sterile fluid collections in patients with acute pancreatitis. Methods. We reviewed the clinical and imaging data of patients with acute pancreatic fluid collections from 1998 to 2003. Referral for fluid sampling was based on elevated white blood cell count and fevers. Those patients with culture-negative drainages or needle aspirations were included in the study. Fifteen patients had aspiration of 10-20 ml fluid only (group A) and 22 patients had catheter placement for chronic evacuation of fluid (group C). We excluded patients with grossly purulent collections and chronic pseudocysts. We also recorded the number of sinograms and catheter changes and the duration of catheter drainage. The CT severity index, Ranson scores, and maximum diameter of abdominal fluid collections were calculated for all patients at presentation. The total length of hospital stay (LOS), length of hospital stay after the drainage or aspiration procedure (LOS-P), and conversions to percutaneous and/or surgical drainage were recorded, as well as survival. Results. The CT severity index and acute Ranson scores were not different between the two groups (p = 0.15 and p = 0.6, respectively). When 3 crossover patients from group A to group C were accounted for, the duration of hospitalization did not differ significantly, with a mean LOS and LOS-P of 33.8 days and 27.9 days in group A and 41.5 days and 27.6 days in group C, respectively (p = 0.57 and 0.98, respectively). The 60-day mortality was 2 of 15 (13%) in group A and 2 of 22 (9.1%) in group C. Kaplan-Meier survival curves for the two groups were not significantly different (p = 0.3). Surgical or percutaneous conversions occurred significantly more often in group A (7/15, 47%) than surgical conversions in group C (4/22, 18%) (p = 0.03). Patients undergoing catheter drainage required an average of 2.2 sinograms/tube changes and kept catheters in for an average of 52 days. Aspirates turned culture-positive in 13 of 22 patients (59%) who had chronic catheterization. In group A, 3 of the 7 patients converted to percutaneous or surgical drainage had infected fluid at the time of conversion (total positive culture rate in group A: 3/15, or 20%). Conclusions. There is no apparent clinical benefit to catheter drainage of sterile fluid collections arising in acute pancreatitis, as the length of hospital stay and mortality were similar between patients undergoing aspiration versus catheter drainage. However, almost half of the patients treated with simple aspiration will require surgical or percutaneous drainage at some point. Disadvantages of chronic catheter drainage include a greater than 50% rate of bacterial colonization and the need for multiple sinograms and tube changes over an average duration of about 2 months.
MAP Reconstruction for Fourier Rebinned TOF-PET Data
Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M.
2014-01-01
Time-of-flight (TOF) information improves the signal-to-noise ratio in Positron Emission Tomography (PET). The computation cost of processing TOF-PET sinograms is substantially higher than for non-TOF data because the data in each line of response are divided among multiple time-of-flight bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either 3D non-TOF or 2D non-TOF formats, referred to respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal, allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare the performance of FORET-2D and -3D with TOF and non-TOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but with a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 using FORET-3D+MAP and a factor of 30 using FORET-2D+MAP compared to 3D TOF MAP, which makes these methods attractive for clinical applications. PMID:24504374
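The variance-matching rescale is a one-liner once per-bin mean and variance estimates are available. A toy sketch, with `mean_est` and `var_est` standing in for the paper's approximate variance expressions:

```python
import numpy as np

def rescale_for_poisson(y, mean_est, var_est, eps=1e-8):
    # choose s = mean/var so that Var(s*y) = s^2*var = s*mean = E[s*y],
    # i.e. the scaled data have variance approximately equal to their mean
    s = mean_est / np.maximum(var_est, eps)
    return s * y, s
```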
Design and performance evaluation of a high resolution IRI-microPET preclinical scanner
NASA Astrophysics Data System (ADS)
Islami rad, S. Z.; Peyvandi, R. Gholipour; lehdarboni, M. Askari; Ghafari, A. A.
2015-05-01
A PET scanner for small animals, IRI-microPET, was designed and built at the NSTRI. The scanner comprises four detectors positioned on a rotating gantry at a distance of 50 mm from the center. Each detector consists of a 10×10 crystal matrix of 2×2×10 mm3 elements directly coupled to a PS-PMT. A position-encoding circuit for the specific PS-PMT was designed, built and tested with a PD-MFS-2MS/s-8/14 data acquisition board. After implementing reconstruction algorithms (FBP, MLEM and SART) on the sinograms, image quality and system performance were evaluated in terms of energy resolution, timing resolution, spatial resolution, scatter fraction, sensitivity, RMS contrast and SNR. Energy spectra were obtained for the crystals with an energy window of 300-700 keV. The energy resolution at 511 keV, averaged over all modules, detectors, and crystals, was 23.5%. A timing resolution of 2.4 ns FWHM was measured from the coincidence timing spectrum with the LYSO crystal. The radial and tangential resolutions for 18F (1.15-mm inner diameter) at the center of the field of view were 1.81 mm and 1.90 mm, respectively. At a radial offset of 5 mm, the FWHM values were 1.96 and 2.06 mm. The system scatter fraction was 7.1% for the mouse phantom. The sensitivity was measured for different energy windows, reaching 1.74% at the center of the FOV. Image quality was also evaluated using RMS contrast and SNR, and the results show that images reconstructed with the MLEM algorithm have the best RMS contrast and SNR. The IRI-microPET offers high image resolution, a low scatter fraction and improved SNR for animal studies.
Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F
2016-08-01
To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods, for a total of 24 combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as part of their clinical practice; low-dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels, for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at the other conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram features (eight) and gray level co-occurrence matrix (GLCM) based texture features (34) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from the other conditions were compared to this reference condition. A measure the authors refer to as Q was introduced to assess the stability of features across conditions, defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of the histogram mean. Features calculated from lung nodules showed similar results, with the histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation of Q of 0.37 and 0.22, respectively. Surprisingly, the histogram standard deviation and variance features were also quite robust. Some GLCM features were likewise robust across conditions, namely diff. variance, sum variance, sum average, variance, and mean. Except for the histogram mean, all features had a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean was the most robust feature in this study. The effects of acquisition and reconstruction conditions on GLCM features varied widely, though features involving sums of products between intensities and probabilities tended to be more robust, with a few exceptions. Overall, variation in density and texture features should be taken into account when a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may reflect changes in acquisition and reconstruction conditions rather than in the nodule itself.
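The GLCM features and the stability measure Q are straightforward to compute. A hedged sketch with scikit-image (≥ 0.19 for the `graycomatrix` spelling), using a simplified range-based definition of reproducibility and repeatability:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """Classic GLCM texture features for a uint8 ROI, averaged over 4 directions."""
    glcm = graycomatrix(roi_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ('contrast', 'homogeneity', 'energy', 'correlation')}

def q_measure(across_conditions, across_recontours):
    # reproducibility (spread over conditions) / repeatability (spread over contours)
    return np.ptp(across_conditions) / max(np.ptp(across_recontours), 1e-12)
```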
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, C; Jin, M; Ouyang, L
2015-06-15
Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods in cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect of the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with the width of the PSF and with the noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (~20 RMSE) achieve a 4-fold improvement over the direct method (~80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better with wide PSFs. Conclusion: We investigated several deconvolution methods for recovering the scatter signal in the blocked region for blocker-based scatter correction in CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
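The comparison is easy to reproduce in miniature. The sketch below is a toy stand-in for the paper's simulation: a long-tail PSF (a two-Gaussian mixture, an assumption), a synthetic blocked-region signal, added noise, and direct, Wiener and Richardson-Lucy estimates scored by RMSE.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

rng = np.random.default_rng(0)

def long_tail_psf(size=31, core=1.5, tail=8.0, w=0.2):
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    psf = np.exp(-r2 / (2 * core**2)) + w * np.exp(-r2 / (2 * tail**2))
    return psf / psf.sum()

psf = long_tail_psf()
ideal = np.zeros((128, 128)); ideal[48:80, 48:80] = 1.0   # toy scatter signal
measured = fftconvolve(ideal, psf, mode='same') + rng.normal(0, 0.01, (128, 128))

for name, est in {
    'direct': measured,
    'wiener': restoration.wiener(measured, psf, balance=0.1),
    'richardson_lucy': restoration.richardson_lucy(np.clip(measured, 0, None), psf, 30),
}.items():
    print(name, np.sqrt(np.mean((est - ideal) ** 2)))     # RMSE per method
```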
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among other factors, the partial volume effect (PVE) is recognized as one of the most important factors degrading image quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we propose a variational method to solve both problems simultaneously. The proposed method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employs TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of metabolic uptake in the PET image. The blur kernel is modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in clinical PET scanners. The energy functional is rephrased using the Γ-convergence approximation and iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method performs well for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. In particular, the recovery coefficients (RC) of the restored images in the phantom study were close to 1, indicating efficient recovery of the original blurred images; for segmentation, the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiollahzadeh, S; Clark, J; Mawlawi, O
2014-06-15
Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described by the linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image contains many repeated patterns, edges, textures and smooth regions. The recovery is based on the optimization of a cost function with four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4 mm; uniform 5×5, σ=2.9 mm) and additive Poisson noise (blurred). Image recovery was performed once with the kernel type included in the model optimization and once with the model blinded to the kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating the PSNR (peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (uniform) case was 28.73 (25.1) for DL and 25.18 (23.4) for PIDSPLIT+. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+, respectively. The NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning appears to be a good approach to restoring image resolution in the presence of Poisson noise. GE Health Care.
Fast-response LCDs for virtual reality applications
NASA Astrophysics Data System (ADS)
Chen, Haiwei; Peng, Fenglin; Gou, Fangwang; Wand, Michael; Wu, Shin-Tson
2017-02-01
We demonstrate a fast-response liquid crystal display (LCD) with an ultra-low-viscosity nematic LC mixture. The measured average motion picture response time is only 6.88 ms, comparable to the 6.66 ms of an OLED at a 120 Hz frame rate. If we slightly increase the TFT frame rate and/or reduce the backlight duty ratio, image blur can be further suppressed to an unnoticeable level. Potential applications of such an image-blur-free LCD include virtual reality, gaming monitors, and TVs.
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity of the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
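The sparse measurement matrix at the heart of such methods follows the very-sparse random projection scheme; a brief sketch (the sparsity parameter s = 3 is a common choice, assumed here, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_measurement_matrix(n_features, n_pixels, s=3.0):
    # entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s)
    p = rng.random((n_features, n_pixels))
    R = np.zeros((n_features, n_pixels))
    R[p < 1 / (2 * s)] = np.sqrt(s)
    R[p > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

def compress(patches, R):
    # patches: (n_samples, n_pixels) flattened samples; the same R is used
    # for foreground and background so their features are comparable
    return patches @ R.T
```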
External radioactive markers for PET data-driven respiratory gating in positron emission tomography.
Büther, Florian; Ernst, Iris; Hamill, James; Eich, Hans T; Schober, Otmar; Schäfers, Michael; Schäfers, Klaus P
2013-04-01
Respiratory gating is an established approach to overcoming respiration-induced image artefacts in PET. Of special interest in this respect are raw PET data-driven gating methods which do not require additional hardware to acquire respiratory signals during the scan. However, these methods rely heavily on the quality of the acquired PET data (statistical properties, data contrast, etc.). We therefore combined external radioactive markers with data-driven respiratory gating in PET/CT. The feasibility and accuracy of this approach was studied for [(18)F]FDG PET/CT imaging in patients with malignant liver and lung lesions. PET data from 30 patients with abdominal or thoracic [(18)F]FDG-positive lesions (primary tumours or metastases) were included in this prospective study. The patients underwent a 10-min list-mode PET scan with a single bed position following a standard clinical whole-body [(18)F]FDG PET/CT scan. During this scan, one to three radioactive point sources (either (22)Na or (18)F, 50-100 kBq) in a dedicated holder were attached to the patient's abdomen. The list-mode data acquired were retrospectively analysed for respiratory signals using established data-driven gating approaches and additionally by tracking the motion of the point sources in sinogram space. Gated reconstructions were examined qualitatively, in terms of the amount of respiratory displacement and in respect of changes in local image intensity in the gated images. The presence of the external markers did not affect whole-body PET/CT image quality. Tracking of the markers led to characteristic respiratory curves in all patients. Applying these curves for gated reconstructions resulted in images in which motion was well resolved. Quantitatively, the performance of the external marker-based approach was similar to that of the best intrinsic data-driven methods. Overall, the gain in measured tumour uptake from the nongated to the gated images, indicating successful removal of respiratory motion, was correlated with the magnitude of the respiratory displacement of the respective tumour lesion, but not with lesion size. Respiratory information can be assessed from list-mode PET/CT through PET data-derived tracking of external radioactive markers. This information can be successfully applied to respiratory gating to reduce motion-related image blurring. In contrast to other previously described PET data-driven approaches, the external marker approach is independent of tumour uptake and thereby applicable even in patients with poor uptake and small tumours.
Eye-lens accommodation load and static trapezius muscle activity.
Richter, H O; Bänziger, T; Forsman, M
2011-01-01
The purpose of this experimental study was to investigate whether sustained periods of oculomotor load affect muscle activity in the neck/scapular area. Static trapezius muscle activity was assessed from bipolar surface electromyography, normalized to a submaximal contraction. Twenty-eight subjects with a mean age of 29 (range 19-42, SD 8) viewed a high-contrast fixation target for two 5-min periods through: (1) -3.5 dioptre (D) lenses; and (2) 0 D lenses. The target was placed 5 D away from the individual's near point of accommodation. Each subject's ability to compensate for the added blur was extracted via infrared photorefraction measurements. Subjects whose accommodative response was higher in the -D blur condition (1) showed a relatively higher level of static bilateral trapezius muscle activity. In the no-blur condition (2), no such relationship was observed. The results indicate that sustained eye-lens accommodation at near, under ergonomically unfavourable viewing conditions, could represent a risk factor for trapezius muscle myalgia.
The algorithm of motion blur image restoration based on PSF half-blind estimation
NASA Astrophysics Data System (ADS)
Chen, Da-Ke; Lin, Zhe
2011-08-01
A novel algorithm for motion-blurred image restoration based on half-blind PSF estimation with the Hough transform is introduced, building on an analysis of the TDICCD camera principle and addressing the image restoration distortion that arises when the vertical uniform linear motion estimate used by the IBD algorithm is taken as the initial PSF. First, a mathematical model of image degradation is established using prior information from multi-frame images, and the two parameters that critically influence PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations of the initial PSF estimate in the Fourier domain, with the initial value given by the above method. Experimental results show that the proposed algorithm not only effectively resolves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed features of the original image.
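The direction-estimation step can be approximated in the frequency domain: the log power spectrum of a motion-blurred image shows stripes whose orientation encodes the blur direction. The sketch below uses a Radon transform as a stand-in for the paper's Hough transform and picks the angle maximizing projection variance, a common heuristic rather than the authors' exact procedure.

```python
import numpy as np
from skimage.transform import radon

def estimate_blur_angle(blurred):
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    spec = (spec - spec.mean()) / spec.std()
    ny, nx = spec.shape
    yy, xx = np.ogrid[:ny, :nx]
    r = min(ny, nx) / 2
    spec = spec * (((yy - ny / 2) ** 2 + (xx - nx / 2) ** 2) < r ** 2)  # zero outside circle
    angles = np.arange(180.0)
    sino = radon(spec, theta=angles, circle=True)
    # projections along the spectral stripes have the largest variance
    return angles[int(np.argmax(sino.var(axis=0)))]
```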
Comparison of morphological and conventional edge detectors in medical imaging applications
NASA Astrophysics Data System (ADS)
Kaabi, Lotfi; Loloyan, Mansur; Huang, H. K.
1991-06-01
Recently, mathematical morphology has been used to develop efficient image analysis tools. This paper compares the performance of morphological and conventional edge detectors applied to radiological images. Two morphological edge detectors, the dilation residue (found by subtracting the original signal from its dilation by a small structuring element) and the blur-minimization edge detector (defined as the minimum of the erosion and dilation residues of a blurred version of the image), are compared with the linear Laplacian and Sobel detectors and the non-linear Roberts edge detector. Various structuring elements were used in this study: regular 2-dimensional and 3-dimensional. We used two criteria to classify edge detector performance: edge-point connectivity and sensitivity to noise. CT/MR and chest radiograph images were used as test data. The comparison shows that the blur-minimization edge detector with a rolling-ball-like structuring element outperforms the other standard linear and nonlinear edge detectors: it is less sensitive to noise and produces the most closed contours.
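Both morphological detectors reduce to a few scipy.ndimage calls; a minimal sketch, assuming a flat 3×3 structuring element and Gaussian pre-blurring (both assumed parameter choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation, grey_erosion

def dilation_residue(img, size=3):
    """Edge strength as dilation minus the original image."""
    return grey_dilation(img, size=(size, size)) - img

def blur_min_edge(img, size=3, sigma=1.0):
    """Minimum of the erosion and dilation residues of a blurred image version."""
    b = gaussian_filter(img, sigma)
    return np.minimum(grey_dilation(b, size=(size, size)) - b,
                      b - grey_erosion(b, size=(size, size)))
```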
Ma, Liheng; Bernelli-Zazzera, Franco; Jiang, Guangwen; Wang, Xingshu; Huang, Zongsheng; Qin, Shiqiao
2016-06-10
Under dynamic conditions, the centroiding accuracy of motion-blurred star images decreases and the number of identified stars is reduced, which degrades the attitude accuracy of the star sensor. To improve the attitude accuracy, a region-confined restoration method, which concentrates on noise removal and signal-to-noise ratio (SNR) improvement of motion-blurred star images, is proposed for star sensors under dynamic conditions. A multi-seed-region growing technique with a kinematic recursive model of star image motion is given to find the star image regions and remove the noise. Subsequently, a restoration strategy is employed in the extracted regions, taking time consumption and SNR improvement into consideration simultaneously. Simulation results indicate that the region-confined restoration method is effective in removing noise and improving the centroiding accuracy. The identification rate and the average number of identified stars in the experiments verify the advantages of the region-confined restoration method.
Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework
NASA Astrophysics Data System (ADS)
Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy
2014-09-01
We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining boundary disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundary disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to defocus the user's non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on the Java Thread Pool and APARAPI, with a speed-up of 5.8× for 250 stereo video frames (4,096 × 2,304).
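The final selective-blurring stage is simple once a dense depth map exists. A sketch assuming a grayscale image and an arbitrary depth tolerance (both assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def selective_blur(img, depth, focus_depth, tol=0.1, sigma=5.0):
    """Keep pixels near focus_depth sharp; Gaussian-blur the rest, softly blended."""
    blurred = gaussian_filter(img, sigma)
    mask = (np.abs(depth - focus_depth) < tol).astype(float)
    mask = gaussian_filter(mask, 2.0)          # feather the mask edges
    return mask * img + (1.0 - mask) * blurred
```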
Non-Parametric Blur Map Regression for Depth of Field Extension.
D'Andres, Laurent; Salvador, Jordi; Kochale, Axel; Susstrunk, Sabine
2016-04-01
Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded due to visible misfocus or too shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-09-07
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance under motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Shirai, Tomohiro; Barnes, Thomas H
2002-02-01
A liquid-crystal adaptive optics system using all-optical feedback interferometry is applied to partially coherent imaging through a phase disturbance. A theoretical analysis based on the propagation of the cross-spectral density shows that the blurred image due to the phase disturbance can be restored, in principle, irrespective of the state of coherence of the light illuminating the object. Experimental verification of the theory has been performed for two cases when the object to be imaged is illuminated by spatially coherent light originating from a He-Ne laser and by spatially incoherent white light from a halogen lamp. We observed in both cases that images blurred by the phase disturbance were successfully restored, in agreement with the theory, immediately after the adaptive optics system was activated. The origin of the deviation of the experimental results from the theory, together with the effect of the feedback misalignment inherent in our optical arrangement, is also discussed.
Numerical correction of distorted images in full-field optical coherence tomography
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha
2012-03-01
We propose a numerical method to correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that FF-OCT images of the deep regions of a biological sample are easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. We show that the focal plane of the imaging system becomes separated from the imaging plane of the coherence-gated system because of the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, permitting distinct identification of the melanin granules inside the cortex layer of the hair shaft.
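The numerical correction amounts to propagating the complex en face field back to the displaced focal plane. Below is a sketch of such a refocusing step using an angular-spectrum kernel, a standard discretization consistent with Fresnel-Kirchhoff diffraction; the wavelength and pixel pitch are assumed values, not the paper's.

```python
import numpy as np

def refocus(field, dz, wavelength=0.8e-6, pitch=1.0e-6):
    """Propagate a complex en face field by a distance dz (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz)                   # angular-spectrum transfer function
    H[arg <= 0] = 0.0                          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```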
A novel rotational invariants target recognition method for rotating motion blurred images
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen
2017-11-01
Images captured by a sensor on a rotating carrier are blurred by the rotational motion, which greatly reduces the target recognition rate. Although the traditional approach of first restoring the image and then recognizing the target improves the recognition rate, it is time consuming. To solve this problem, a rotational-blur-invariant feature extraction model that recognizes targets directly was constructed. The model comprises three metric layers, containing a gray-value statistical algorithm, an improved circular projection transformation algorithm, and rotation-convolution moment invariants; the object description capability of these layers ranges from low to high, and the layer with the lowest descriptive ability serves as the input, gradually eliminating non-target pixels from the degraded image. Experimental results show that the proposed model improves the correct target recognition rate for blurred images and achieves a good trade-off between computational complexity and recognition performance.
Neurophysiology underlying influence of stimulus reliability on audiovisual integration.
Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J
2018-01-24
We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.
2017-01-01
Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838
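A hedged sketch of the kind of serial image perturbations the study applied, using OpenCV; the parameter values are arbitrary, the scoring software (Visiopharm) is not reproduced, and plain JPEG stands in for the study's JPEG2000 compression.

```python
import cv2

img = cv2.imread("her2_roi.png")  # hypothetical ROI export

# Brightness / contrast: out = alpha * img + beta
brighter = cv2.convertScaleAbs(img, alpha=1.0, beta=40)   # raise illumination
contrasty = cv2.convertScaleAbs(img, alpha=1.5, beta=0)   # raise contrast

# Out-of-focus blur (Gaussian kernel as a simple defocus proxy)
blurred = cv2.GaussianBlur(img, ksize=(0, 0), sigmaX=3.0)

# Lossy compression at a chosen quality (the study used JPEG2000)
cv2.imwrite("roi_q30.jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 30])
```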
Enhancing facial features by using clear facial features
NASA Astrophysics Data System (ADS)
Rofoo, Fanar Fareed Hanna
2017-09-01
The similarity of facial features among individuals of the same ethnicity motivated this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin as an approach to enhancing the blurred image. A database of clear images was assembled containing 30 individuals divided equally among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images so that the features of the clear and blurred images were aligned. Features were extracted from a clear facial image, or from a template built from several clear facial images, using the wavelet transform, and were imposed on the blurred image via the inverse wavelet transform. This first approach did not perform well because the features did not all align together: in most cases the eyes were aligned but the nose or mouth were not. A second approach treated the features separately, but in some cases a blocky effect appeared on features because no sufficiently close match was available. In general, the small available database did not allow the goal results to be achieved, owing to the limited number of individuals. Color information and feature similarity could be investigated further to achieve better results with a larger database, which would also improve the enhancement process by providing closer matches within each ethnicity.
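A minimal sketch, using PyWavelets, of the wavelet-based feature transfer the project describes: keep the blurred face's approximation band, impose the clear template's detail bands, then invert the transform. It assumes pre-aligned, equally sized grayscale images; the wavelet choice is arbitrary.

```python
import pywt

def impose_clear_features(blurred, clear, wavelet="db2"):
    """Replace the detail sub-bands of a blurred (aligned) face with
    those of a clear same-ethnicity template, then invert the DWT.
    Both inputs must be pre-aligned arrays of the same shape."""
    aB, _ = pywt.dwt2(blurred, wavelet)          # keep blurred approximation
    _, (hC, vC, dC) = pywt.dwt2(clear, wavelet)  # take clear details
    return pywt.idwt2((aB, (hC, vC, dC)), wavelet)
```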
Analytical properties of time-of-flight PET data.
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M
2008-06-07
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
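A minimal numpy sketch of the TOF data model stated above (line integrals weighted by a spatially invariant TOF kernel), discretized for one projection angle; the Gaussian kernel width and binning are assumptions, and the TOF-FOREX rebinning itself is not shown.

```python
import numpy as np
from scipy.ndimage import rotate

def tof_projection(img, theta_deg, tof_centers, sigma):
    """Discretized TOF sinogram slice: line integrals along one angle,
    weighted by a spatially invariant Gaussian TOF kernel."""
    rot = rotate(img, -theta_deg, reshape=False, order=1)
    # position along each line of response, centered at zero
    y = np.arange(rot.shape[0]) - (rot.shape[0] - 1) / 2.0
    # p[t, s] = sum_y f(y, s) * k(y - t)
    k = np.exp(-(y[None, :] - tof_centers[:, None])**2 / (2 * sigma**2))
    return k @ rot  # shape: (number of TOF bins, number of radial bins)

# Example: 9 TOF bins with an assumed 10-pixel kernel width
sino_slice = tof_projection(np.ones((128, 128)), 30.0,
                            np.linspace(-60, 60, 9), sigma=10.0)
```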
Magnifying Lenses with Weak Achromatic Bends for High-Energy Electron Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walstrom, Peter Lowell
2015-02-27
This memo briefly describes bremsstrahlung background effects in GeV-range electron radiography systems and the use of weak bending magnets to deflect the image to the side of the forward bremsstrahlung spot to reduce background. The image deflection introduces first-order chromatic image blur due to dispersion. Two approaches to eliminating the dispersion effect to first order by use of magnifying lenses with achromatic bends are described. Higher-order image-blur terms caused by the weak bends are also discussed and shown to be negligibly small in most cases of interest.
Computational Imaging in Demanding Conditions
2015-11-18
Detailed accomplishments include removing atmospheric turbulence via space-invariant deconvolution: given an image sequence distorted by atmospheric turbulence, the approach reduces the space- and time-varying deblurring problem to a shift-invariant deconvolution in a spatiotemporal domain where such blur is not present. Subject terms: image processing, computational imaging, turbulence, blur, enhancement.
On the precision of aero-thermal simulations for TMT
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos; Thompson, Hugh
2016-08-01
Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among other effects, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict observatory performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus the computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations, and finally the approach taken to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.
Effects of excimer laser illumination on microdrilling into an oblique polymer surface
NASA Astrophysics Data System (ADS)
Wu, Chih-Yang; Shu, Chun-Wei; Yeh, Zhi-Chang
2006-08-01
In this work, we present experimental results of micromachining into polymethyl methacrylate (PMMA) exposed to oblique KrF excimer laser beams. The results of low-aspect-ratio ablations show that the ablation rate decreases monotonically with increasing incident angle for various fluences. The ablation rate of high-aspect-ratio drilling with the opening centered on the focal plane is almost independent of incident angle and is less than that of low-aspect-ratio ablation. The results of high-aspect-ratio ablations show that the openings of holes located at a distance from the focal plane are enlarged and their edges are blurred. In addition, the depth of a hole in samples oblique to the laser beam decreases with increasing distance from the focal plane. The number of deep holes generated by oblique laser beams through a matrix of apertures decreases with increasing incident angle. These phenomena reveal the influence of the local light intensity on microdrilling into an oblique surface.
Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob
2010-02-01
Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
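A minimal sketch of the interpolation-and-reinsertion step described above (the MRF segmentation is assumed already given as a mask); the scale factor for the reinserted metal signal is an assumed parameter.

```python
import numpy as np

def suppress_metal(sinogram, metal_mask, scale=0.1):
    """Replace the segmented metal trace by linear interpolation along
    each projection, then add back a scaled metal signal intensity
    (the MRF segmentation itself is not shown)."""
    out = sinogram.copy()
    s = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):          # loop over projections
        m = metal_mask[i]
        if m.any():
            out[i, m] = np.interp(s[m], s[~m], sinogram[i, ~m])
    return out + scale * np.where(metal_mask, sinogram, 0.0)
```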
Uematsu, Masahiro; Ito, Makiko; Hama, Yukihiro; Inomata, Takayuki; Fujii, Masahiro; Nishio, Teiji; Nakamura, Naoki; Nakagawa, Keiichi
2012-01-01
In this paper, we suggest a new method for verifying the motion of a binary multileaf collimator (MLC) in helical tomotherapy, using a combination of a cylindrical scintillator and a general-purpose camcorder. The camcorder records the light emitted from the scintillator under photon irradiation, which we use to track the motion of the binary MLC. The purpose of this study is to demonstrate the feasibility of this method as a binary MLC quality assurance (QA) tool. Verification was performed first with a simple binary MLC pattern with a constant leaf open time, and then with a binary MLC pattern used in a clinical setting. For the simple patterns, the sensitivity, defined as the fraction of open leaves detected as "open" from the measured light, was 1.000, while the specificity, the fraction of closed leaves detected as "closed", was 0.919. The leaf open error identified by our method was −1.3±7.5%, and 68.6% of observed leaves performed within ±3% relative error. The leaf open error was expressed by the relative errors calculated on the sinogram. For the clinical binary MLC pattern, the sensitivity and specificity were 0.994 and 0.997, respectively; the measurement could be performed with a −3.4±8.0% leaf open error, and 77.5% of observed leaves performed within ±3% relative error. With this method, we can easily verify the motion of the binary MLC, and the measurement unit developed was found to be an effective QA tool. PACS numbers: 87.56.Fc, 87.56.nk PMID:22231222
[High resolution reconstruction of PET images using the iterative OSEM algorithm].
Doll, J; Henze, M; Bublitz, O; Werling, A; Adam, L E; Haberkorn, U; Semmler, W; Brix, G
2004-06-01
The aim was to improve the spatial resolution in positron emission tomography (PET) by incorporating the image-forming characteristics of the scanner into the process of iterative image reconstruction. All measurements were performed on the whole-body PET system ECAT EXACT HR(+) in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the use of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), determined from activated copper-64 line sources. This information was used to model the physical degradation processes of PET measurements during 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements of a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom, as well as different patient examinations, were analyzed. Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to better contrast resolution in the reconstructed activity distributions but also to improved accuracy in the quantification of activity concentrations in small structures, without amplifying image noise or introducing image artifacts. The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
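A hedged 2D sketch of resolution-modeling OSEM in the spirit of the method above, using scikit-image's radon/iradon as projector and approximate (unfiltered) backprojector, and a Gaussian filter along the detector axis as a spatially invariant stand-in for the paper's spatially variant LSF; subset count, iteration count and sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.transform import radon, iradon

def osem(sino, angles, n_iter=4, n_subsets=8, lsf_sigma=2.0):
    """Minimal 2D OSEM with a Gaussian LSF folded into the projector.
    sino: (radial bins, angles); angles in degrees; lsf_sigma in bins."""
    f = np.ones((sino.shape[0], sino.shape[0]))
    for _ in range(n_iter):
        for s in range(n_subsets):
            idx = np.arange(s, len(angles), n_subsets)
            # forward project and blur along the detector (radial) axis
            fwd = gaussian_filter1d(radon(f, angles[idx]), lsf_sigma, axis=0)
            ratio = sino[:, idx] / np.maximum(fwd, 1e-8)
            # adjoint: blur the ratio, then unfiltered backprojection
            bp = iradon(gaussian_filter1d(ratio, lsf_sigma, axis=0),
                        angles[idx], filter_name=None)
            norm = iradon(np.ones_like(ratio), angles[idx], filter_name=None)
            f *= bp / np.maximum(norm, 1e-8)
    return f
```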
SU-F-T-489: 4-Years Experience of QA in TomoTherapy MVCT: What Do We Look Out For?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, F; Chan, K
2016-06-15
Purpose: To evaluate the QA results of TomoTherapy MVCT from March 2012 to February 2016, and to identify issues that may affect consistency in HU numbers and reconstructed treatment dose in MVCT. Methods: Monthly QA was performed on our TomoHD system. A phantom with rod inserts of various mass densities was imaged in MVCT and compared to baseline to evaluate HU number consistency. To evaluate treatment dose reconstructed from the delivered sinogram and MVCT, a treatment plan was designed on a humanoid skull phantom. The phantom was imaged with MVCT and the treatment plan was delivered to obtain the sinogram. The dose reconstructed with the Planned Adaptive software was compared to the dose in the original plan. The QA tolerance was ±30 HU for HU numbers, and ±2% for the discrepancy between the original plan dose and the reconstructed dose. Tolerances were referenced to AAPM TG-148. Results: Several technical modifications or maintenance activities to the system were identified which affected QA results: 1) an upgrade of the console system software, which added a weekly HU calibration procedure; 2) linac or MLC replacement, leading to changes in Accelerator Output Machine (AOM) parameters; 3) an upgrade of the planning system algorithm, affecting MVCT dose reconstruction. These events caused abrupt changes in QA results, especially for the reconstructed dose. In the past 9 months, when no such modifications were made to the system, the reconstructed dose was consistent, with a maximum deviation from baseline of less than 0.6%, and HU numbers deviated by less than 5 HU. Conclusion: Routine QA is essential for MVCT, especially if the MVCT is used for daily dose reconstruction to monitor delivered dose to patients. Technical events which may affect its consistency include software changes and linac or MLC replacement. QA results reflected changes which justify re-calibration or system adjustment. In normal circumstances, the system should be relatively stable and quarterly QA may be sufficient.
Shin, Hyun Joo; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang
2013-01-01
Objective To evaluate the feasibility of sinogram-affirmed iterative reconstruction (SAFIRE) and automated kV modulation (CARE kV) in reducing radiation dose without increasing image noise for abdominal CT examination. Materials and Methods This retrospective study included 77 patients who received CT imaging with an application of CARE kV with or without SAFIRE and who had comparable previous CT images obtained without CARE kV or SAFIRE, using the standard dose (i.e., reference mAs of 240) on an identical CT scanner and reconstructed with filtered back projection (FBP) within 1 year. Patients were divided into two groups: group A (33 patients, CT scanned with CARE kV); and group B (44 patients, scanned after reducing the reference mAs from 240 to 170 and applying both CARE kV and SAFIRE). CT number, image noise for four organs and radiation dose were compared among the two groups. Results Image noise increased after CARE kV application (p < 0.001) and significantly decreased as SAFIRE strength increased (p < 0.001). Image noise with reduced-mAs scan (170 mAs) in group B became similar to that of standard-dose FBP images after applying CARE kV and SAFIRE strengths of 3 or 4 when measured in the aorta, liver or muscle (p ≥ 0.108). Effective doses decreased by 19.4% and 41.3% for groups A and B, respectively (all, p < 0.001) after application of CARE kV with or without SAFIRE. Conclusion Combining CARE kV, reduction of mAs from 240 to 170 mAs and noise reduction by applying SAFIRE strength 3 or 4 reduced the radiation dose by 41.3% without increasing image noise compared with the standard-dose FBP images. PMID:24265563
NASA Astrophysics Data System (ADS)
Rana, Narender; Chien, Chester
2018-03-01
A key sensor element in a hard disk drive (HDD) is the read-write head device. The device has a complex 3D shape, and its fabrication requires over a thousand process steps, many of which are image inspection and critical dimension (CD) metrology steps. To achieve high device yield across a wafer, very tight inspection and metrology specifications are enforced. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image affects the CD measurements. Metrology noise needs to be minimized in CD metrology to obtain a better estimate of process-related variations and to implement robust process controls. Specialized tools are available for defect inspection and review that provide classification and statistics; however, when such advanced tools are unavailable, images must often be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools, differing in both software and purpose. There have been cases where a significant number of CD-SEM images were blurred or contained artefacts, so image inspection is needed alongside the CD measurement, yet the tool may not report a practical metric of image quality. If CDs from these blurred images are not filtered out, they add metrology noise to the CD measurement; an image classifier can help filter such data. This paper presents the use of artificial intelligence in classifying SEM images. Deep machine learning is used to train a neural network, which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and the contingency table of classification results from the trained deep neural network. A prediction accuracy of 94.9% was achieved with the first model. The paper also covers other applications of the deep neural network to image classification for inspection, review and metrology.
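A minimal sketch of a binary blurred/not-blurred image classifier of the kind described, in Keras; the architecture, input size, and training call are assumptions, not the paper's network.

```python
import tensorflow as tf

# Small binary classifier: P(blurred) for a grayscale SEM crop.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```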
NASA Technical Reports Server (NTRS)
Gille, Jennifer; Martin, Russel; Lubin, Jeffrey; Larimer, James
1995-01-01
In a series of papers presented in 1994, we examined the grayscale/resolution trade-off for natural images displayed on devices with discrete pixellation, such as AMLCDs. In the present paper we extend this study by examining the grayscale/resolution trade-off for text images on discrete-pixel displays. Halftoning in printing is an example of the grayscale/resolution trade-off. In printing, spatial resolution is sacrificed to produce grayscale. Another example of this trade-off is the inherent low-pass spatial filter of a CRT, caused by the point-spread function of the electron beam in the phosphor layer. On a CRT, sharp image edges are blurred by this inherent low-pass filtering, and the block noise created by spatial quantization is greatly reduced. A third example of this trade-off is text anti-aliasing, where grayscale is used to improve letter shape, size and location when rendered at a low spatial resolution. There are additional implications for display system design from the grayscale/resolution trade-off. For example, reduced grayscale can reduce system costs by requiring less complexity in the framestore, allowing the use of lower cost drivers, potentially increasing data transfer rates in the image subsystem, and simplifying the manufacturing processes that are used to construct the active matrix for AMLCD (active-matrix liquid-crystal display) or AMTFEL (active-matrix thin-film electroluminescent) devices. Therefore, the study of these trade-offs is important for display designers and manufacturing and systems engineers who wish to create the highest performance, lowest cost device possible. Our strategy for investigating this trade-off is to generate a set of simple test images, manipulate grayscale and resolution, predict discrimination performance using the ViDEOS (Sarnoff) Human Vision Model, conduct an empirical study of discrimination using psychophysical procedures, and verify the computational results using the psychophysical results.
Sun, Sol Z; Fidalgo, Celia; Barense, Morgan D; Lee, Andy C H; Cant, Jonathan S; Ferber, Susanne
2017-11-01
Interference disrupts information processing across many timescales, from immediate perception to memory over short and long durations. The widely held similarity assumption states that as similarity between interfering information and memory contents increases, so too does the degree of impairment. However, information is lost from memory in different ways. For instance, studied content might be erased in an all-or-nothing manner. Alternatively, information may be retained but the precision might be degraded or blurred. Here, we asked whether the similarity of interfering information to memory contents might differentially impact these 2 aspects of forgetting. Observers studied colored images of real-world objects, each followed by a stream of interfering objects. Across 4 experiments, we manipulated the similarity between the studied object and the interfering objects in circular color space. After interference, memory for object color was tested continuously on a color wheel, which in combination with mixture modeling, allowed for estimation of how erasing and blurring differentially contribute to forgetting. In contrast to the similarity assumption, we show that highly dissimilar interfering items caused the greatest increase in random guess responses, suggesting a greater frequency of memory erasure (Experiments 1-3). Moreover, we found that observers were generally able to resist interference from highly similar items, perhaps through surround suppression (Experiments 1 and 4). Finally, we report that interference from items of intermediate similarity tended to blur or decrease memory precision (Experiments 3 and 4). These results reveal that the nature of visual similarity can differentially alter how information is lost from memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
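A minimal sketch of the standard two-component mixture model used for such continuous-report data: a uniform guessing component (capturing erasure) plus a von Mises component whose concentration reflects precision (its loss corresponding to blurring); starting values and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors):
    """MLE fit over circular color errors (radians): guess rate g
    captures erasure, concentration kappa captures precision."""
    def nll(params):
        g, kappa = params
        p = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
        return -np.sum(np.log(np.maximum(p, 1e-12)))
    res = minimize(nll, x0=[0.1, 5.0],
                   bounds=[(1e-4, 1 - 1e-4), (1e-2, 100.0)])
    return res.x  # (guess rate, concentration)
```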
Influence of different types of astigmatism on visual acuity.
Remón, Laura; Monsoriu, Juan A; Furlan, Walter D
To investigate the change in visual acuity (VA) produced by different types of astigmatism (on the basis of the refractive power and position of the principal meridians) in normally accommodating eyes. A lens-induced method was employed to simulate a set of 28 astigmatic blur conditions on different healthy emmetropic eyes. Additionally, 24 values of spherical defocus were simulated on the same eyes for comparison. VA was measured in each case and the results, expressed in logMAR units, were plotted against the modulus of the dioptric power vector (blur strength). LogMAR VA varies linearly with increasing astigmatic blur, with the slope of the line depending on the accommodative demand for each type of astigmatism. However, in each case, we found no statistically significant differences between the three axes investigated (0°, 45°, 90°). No statistically significant differences were found, either, between the VA achieved with spherical myopic defocus (MD) and with mixed astigmatism (MA). VA with simple hyperopic astigmatism (SHA) was higher than with simple myopic astigmatism (SMA); however, in this case the results were inconclusive in terms of statistical significance. The VA achieved with imposed compound hyperopic astigmatism (CHA) was highly influenced by the eye's accommodative response. VA is correlated with blur strength in a different way for each type of astigmatism, depending on the accommodative demand. VA is better when one of the focal lines lies on the retina, irrespective of the axis orientation; accommodation favors this situation. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
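For reference, the blur strength used as the abscissa is the modulus of the dioptric power vector; a short sketch computing it from a spherocylindrical prescription in standard (Thibos) power-vector notation:

```python
import numpy as np

def blur_strength(sphere, cyl, axis_deg):
    """Blur strength B = |(M, J0, J45)|, the modulus of the dioptric
    power vector, in diopters."""
    a = np.deg2rad(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * np.cos(2 * a)
    J45 = -(cyl / 2.0) * np.sin(2 * a)
    return np.sqrt(M**2 + J0**2 + J45**2)

# e.g. 2 D of simple myopic astigmatism (plano / -2.00 x 90):
# blur_strength(0.0, -2.0, 90.0) -> ~1.41 D
```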
Accommodative and vergence responses to conflicting blur and disparity stimuli during development
Bharadwaj, Shrikant R.; Candy, T. Rowan
2014-01-01
Accommodative and vergence responses of the typically developing visual system are generated using a combination of cues, including retinal blur and disparity. The developmental importance of blur and disparity cues in generating these motor responses was assessed by placing the two cues in conflict with each other. Cue-conflicts were induced by placing either −2 D lenses or 2 MA base-out prisms before both eyes of 140 subjects (2.0 months to 40.8 years) while they watched a cartoon movie binocularly at 80 cm. The frequency and amplitude of accommodation to lenses and vergence to prisms increased with age (both p < 0.001), with the vergence response (mean ± 1 SEM = 1.38 ± 0.05 MA) being slightly larger than the accommodative response (1.18 ± 0.04 D) at all ages (p = 0.007). The amplitude of these responses decreased with an increase in conflict stimuli (1 to 3 D or MA) (both p < 0.01). The coupled vergence response to −2 D lenses (0.31 ± 0.06 MA) and coupled accommodative response to 2 MA base-out prisms (0.21 ± 0.02 D) were significantly smaller than (both p < 0.001) and poorly correlated with the open-loop vergence (r = 0.12; p = 0.44) and open-loop accommodation (r = −0.08; p = 0.69), respectively. The typically developing visual system compensates for transiently induced conflicts between blur and disparity stimuli, without exhibiting a strong preference for either cue. The accuracy of this compensation decreases with an increase in amplitude of cue-conflict. PMID:20053067
Vergence driven accommodation with simulated disparity in myopia and emmetropia.
Maiello, Guido; Kerber, Kristen L; Thorn, Frank; Bex, Peter J; Vera-Diaz, Fuensanta A
2018-01-01
The formation of focused and corresponding foveal images requires a close synergy between the accommodation and vergence systems. This linkage is usually decoupled in virtual reality systems and may be dysfunctional in people who are at risk of developing myopia. We study how refractive error affects vergence-accommodation interactions in stereoscopic displays. Vergence and accommodative responses were measured in 21 young healthy adults (n=9 myopes, 22-31 years) while subjects viewed naturalistic stimuli on a 3D display. In Step 1, vergence was driven behind the monitor using a blurred, non-accommodative, uncrossed disparity target. In Step 2, vergence and accommodation were driven back to the monitor plane using naturalistic images that contained structured depth and focus information from size, blur and/or disparity. In Step 1, both refractive groups converged towards the stereoscopic target depth plane, but the vergence-driven accommodative change was smaller in emmetropes than in myopes (F 1,19 =5.13, p=0.036). In Step 2, there was little effect of peripheral depth cues on accommodation or vergence in either refractive group. However, vergence responses were significantly slower (F 1,19 =4.55, p=0.046) and accommodation variability was higher (F 1,19 =12.9, p=0.0019) in myopes. Vergence and accommodation responses are disrupted in virtual reality displays in both refractive groups. Accommodation responses are less stable in myopes, perhaps due to a lower sensitivity to dioptric blur. Such inaccuracies of accommodation may cause long-term blur on the retina, which has been associated with a failure of emmetropization. Copyright © 2017 Elsevier Ltd. All rights reserved.
Associations of Eye Diseases and Symptoms with Self-Reported Physical and Mental Health
Lee, Paul P.; Cunningham, William E.; Nakazono, Terry T.; Hays, Ron D.
2009-01-01
Purpose: To study the associations of eye diseases and visual symptoms with the most widely used health-related quality of life (HRQOL) generic profile measure. Design: HRQOL was assessed using the SF-36® version 1 survey administered to a sample of patients receiving care provided by a physician group practice association. Methods: Eye diseases, ocular symptoms, and general health were assessed in a sample of patients from 48 physician groups. A total of 18,480 surveys were mailed out and 7,093 returned; 5,021 of these had complete data. Multiple linear regression models were used to examine the decrements in self-reported physical and mental health associated with eye diseases and symptoms, including trouble seeing and blurred vision. Results: Nine percent of the respondents had cataracts, 2% had age-related macular degeneration, 2% glaucoma, 8% blurred vision, and 13% trouble seeing. Trouble seeing and blurred vision both had statistically unique associations with worse scores on the SF-36 mental health summary score. Only trouble seeing had a significant association with the SF-36 physical health summary score. While these ocular symptoms were significantly associated with SF-36® scores, having an eye disease (cataracts, glaucoma, macular degeneration) was not, after adjusting for other variables in the model. Conclusions: Our results suggest an important link between visual symptoms and general HRQOL. The study extends the findings of prior research to show that both trouble seeing and blurred vision have independent, measurable associations with HRQOL, while the presence of specific eye diseases may not. PMID:19712923
Adaptive recovery of motion blur point spread function from differently exposed images
NASA Astrophysics Data System (ADS)
Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian
2010-01-01
Motion due to digital camera movement during the image capture process is a major factor degrading image quality, and many methods for camera motion removal have been developed. Central to all such techniques is the correct recovery of what is known as the point spread function (PSF). A popular technique for estimating the PSF relies on a pair of gyroscopic sensors to measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of gyro-sensor measurements impede the achievement of a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signals of the two gyro sensors. The PSF coefficients are then updated using 2D least-mean-square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image with known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation. The quality of the restored image is also improved compared with the gyro-only approach and with blind image deconvolution results.
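A minimal sketch of the core LMS refinement step (one pass over selected points, without the coarse-to-fine pyramid); the step size and the choice of points are assumptions.

```python
import numpy as np

def lms_refine_psf(psf, sharp, blurred, points, mu=1e-3):
    """One LMS pass refining a gyro-initialized PSF: at each selected
    pixel, predict the blurred value from the (luminance-equalized)
    under-exposed image and nudge the PSF coefficients by the error.
    psf must be a float array of odd size; points must keep the patch
    inside the image."""
    k = psf.shape[0] // 2
    for (i, j) in points:
        patch = sharp[i - k:i + k + 1, j - k:j + k + 1]
        e = blurred[i, j] - np.sum(psf * patch)   # prediction error
        psf += mu * e * patch                     # LMS coefficient update
    psf = np.clip(psf, 0, None)
    return psf / psf.sum()                        # keep PSF normalized
```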
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: An imperfectly aligned mechanical system causes the projection data of X-ray phase contrast imaging (XPCI) to be misplaced, so the reconstructed computed tomography (CT) slice images are blurred or show edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove the unexpected blur and edge artifacts, a mathematical model for in-line XPCI is built in this paper that accounts for the primary geometric parameters, a rotation angle and a shift variant. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme: performing a composite geometric transformation and then following a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection-data correction scheme and significantly improves image quality by removing both blurring and edge artifacts for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
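A hedged sketch of the final correction-and-reconstruction step, assuming the optimal rotation angle and shift have already been found by the paper's iterative scheme; scipy and scikit-image stand in for the actual implementation.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from skimage.transform import iradon

def correct_and_reconstruct(projections, angles, tilt_deg, dx):
    """Apply a rigid correction (rotation + horizontal shift) to each
    projection, then reconstruct one slice with standard FBP.
    projections: (angles, rows, cols); angles in degrees; tilt_deg and
    dx are the parameters the paper's maximization would supply."""
    fixed = np.stack([shift(rotate(p, tilt_deg, reshape=False, order=1),
                            (0.0, dx), order=1) for p in projections])
    mid = fixed.shape[1] // 2
    sino = fixed[:, mid, :].T          # (detector bins, angles)
    return iradon(sino, theta=angles)  # filtered back-projection
```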
The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.
Godfrey, Devon J; McAdams, H Page; Dobbins, James T
2013-02-01
Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency "edge" information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles∕mm. The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.
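A minimal sketch of the slab-averaging operation itself (MITSa7 for n=7), assuming the single-plane MITS stack has already been reconstructed; edge planes simply average over fewer neighbors.

```python
import numpy as np

def mits_slabs(planes, n=7):
    """Average each group of n adjacent MITS planes (e.g. MITSa7) to
    suppress partial-pixel artifacts; planes is a (z, y, x) stack."""
    half = n // 2
    return np.stack([planes[max(0, z - half):z + half + 1].mean(axis=0)
                     for z in range(planes.shape[0])])
```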
Accommodation to wavefront vergence and chromatic aberration.
Wang, Yinan; Kruger, Philip B; Li, James S; Lin, Peter L; Stark, Lawrence R
2011-05-01
Longitudinal chromatic aberration (LCA) provides a cue to accommodation with small pupils. However, large pupils increase monochromatic aberrations, which may obscure chromatic blur. In this study, we examined the effect of pupil size and LCA on accommodation. Accommodation was recorded by infrared optometer while observers (nine normal trichromats) viewed a sinusoidally moving Maltese cross target in a Badal stimulus system. There were two illumination conditions: white (3000 K; 20 cd/m²) and monochromatic (550 nm with 10 nm bandwidth; 20 cd/m²) and two artificial pupil conditions (3 and 5.7 mm). Separately, static measurements of wavefront aberration were made with the eye accommodating to targets between 0 and 4 D (COAS, Wavefront Sciences). Large individual differences in accommodation to wavefront vergence and to LCA are a hallmark of accommodation. LCA continues to provide a signal at large pupil sizes despite higher levels of monochromatic aberrations. Monochromatic aberrations may defend against chromatic blur at high spatial frequencies, but accommodation responds best to optical vergence and to LCA at 3 c/deg where blur from higher order aberrations is less.
Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.
Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita
2012-06-01
A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset of 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator and of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and illumination gradient. Importantly, the method strongly reduces the required user interaction time compared with traditional segmentation techniques. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including assessment of in vivo joint kinematics in a variety of cases.
Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.
2014-01-01
Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE with a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3×3×5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-01-01
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers. PMID:27618046
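The paper's point sharpness function is not specified in the abstract; a common sharpness proxy that could play the same role (variance of the Laplacian) is sketched below with OpenCV.

```python
import cv2

def point_sharpness(patch):
    """A common sharpness proxy for judging whether the tracked patch
    is motion-blurred; the paper's own point sharpness function may
    differ."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Low values on the tracked patch would trigger the blur/fast-motion
# handling branch of such a tracker.
```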
PSF estimation for defocus blurred image based on quantum back-propagation neural network
NASA Astrophysics Data System (ADS)
Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang
2010-11-01
Images obtained even by an aberration-free system are defocus-blurred due to motion in depth and/or zooming. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic model of the PSF precisely because of the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method achieves high precision and strong generalization ability.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Automatic attention-based prioritization of unconstrained video for compression
NASA Astrophysics Data System (ADS)
Itti, Laurent
2004-06-01
We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial compressed file size reductions by a factor 0.5 on average are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
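A minimal sketch of the continuously variable foveation filter variant: blur grows with distance from the high-priority point, approximated by blending a small stack of pre-blurred frames; the sigma levels and the blur-versus-distance slope are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, cy, cx, sigma_per_px=0.02):
    """Continuously variable foveation of a grayscale frame: blend a
    stack of pre-blurred copies so blur increases with distance from
    the high-priority point (cy, cx)."""
    f = frame.astype(float)
    y, x = np.indices(f.shape)
    dist = np.hypot(y - cy, x - cx)
    sigmas = [0.0, 1.0, 2.0, 4.0, 8.0]
    stack = [f if s == 0 else gaussian_filter(f, s) for s in sigmas]
    level = np.clip(dist * sigma_per_px, 0, len(sigmas) - 1)
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    w = level - lo
    # linear blend between the two nearest blur levels per pixel
    return np.choose(lo, stack) * (1 - w) + np.choose(hi, stack) * w
```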
Circular blurred shape model for multiclass symbol recognition.
Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia
2011-04-01
In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.
Lee, Cholho; Han, Kyung-Hoon; Kim, Kwon-Hyeon; Kim, Jang-Joo
2016-03-21
We have demonstrated a simple and efficient method to fabricate OLEDs with enhanced out-coupling efficiencies and with low pixel blurring by inserting nano-pillar arrays prepared through the lateral phase separation of two immiscible polymers in a blend film. By selecting a proper solvent for the polymer and controlling the composition of the polymer blend, the nano-pillar arrays were formed directly after spin-coating of the polymer blend and selective removal of one phase, needing no complicated processes such as nano-imprint lithography. Pattern size and distribution were easily controlled by changing the composition and thickness of the polymer blend film. Phosphorescent OLEDs using the internal light extraction layer containing the nano-pillar arrays showed a 30% enhancement of the power efficiency, no spectral variation with the viewing angle, and only a small increment in pixel blurring. With these advantages, this newly developed method can be adopted for the commercial fabrication process of OLEDs for lighting and display applications.
Evaluation of Deblur Methods for Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, William M.
2014-03-31
Radiography is used as a primary diagnostic for dynamic experiments, providing time-resolved radiographic measurements of areal mass density along a line of sight through the experiment. It is well known that the finite spot extent of the radiographic source, as well as scattering, are sources of blurring in the radiographic images. This blurring interferes with quantitative measurement of the areal mass density. In order to improve the quantitative utility of this diagnostic, it is necessary to deblur or "restore" the radiographs to recover the "true" areal mass density from a radiographic transmission measurement. Toward this end, I am evaluating three separate methods currently in use for deblurring radiographs. I begin by briefly describing the problems associated with image restoration and outlining the three methods. Next, I illustrate how blurring affects quantitative measurements using radiographs. I then present the results of the various deblur methods, evaluating each according to several criteria. After summarizing the results of the evaluation, I give a detailed account of how the restoration process is actually implemented.
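The memo does not name the three methods in the abstract; as a stand-in illustration of a deblurring step for transmission radiographs, here is Richardson-Lucy deconvolution with a source-spot PSF (scikit-image), under the assumption of convolutional blur.

```python
from skimage import restoration

def deblur_radiograph(transmission, spot_psf, n_iter=30):
    """Illustrative restoration only: Richardson-Lucy deconvolution,
    one widely used method; the memo's three evaluated methods are not
    identified in the abstract. Assumes the radiograph is blurred by
    convolution with the finite source spot (spot_psf, normalized),
    and that transmission is a non-negative float image."""
    return restoration.richardson_lucy(transmission, spot_psf, n_iter)
```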
A holographic technique for recording a hypervelocity projectile with front surface resolution.
Kurtz, R L; Loh, H Y
1970-05-01
Any motion of the scene during the exposure of a hologram results in a spatial modulation of the recorded fringe contrast. On reconstruction, this produces a spatial amplitude modulation of the reconstructed wavefront, which results in a blurring of the image, not unlike that of a conventional photograph. For motion of the scene sufficient to change the path length of the signal arm by a half wavelength, this blurring is generally prohibitive. This paper describes a proposed holographic technique which offers promise for front-surface resolution of targets moving at high speeds, heretofore unobtainable by conventional methods.
Image deblurring in smartphone devices using built-in inertial measurement sensors
NASA Astrophysics Data System (ADS)
Šindelář, Ondřej; Šroubek, Filip
2013-01-01
Long-exposure handheld photography is degraded by blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize the inertial sensors (accelerometers and gyroscopes) in modern smartphones to record the exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photograph based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.
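A minimal sketch of this pipeline, under simplifying assumptions the paper does not necessarily make (pure rotational shake, small-angle pixel shifts, a known focal length in pixels, and a Wiener filter standing in for the authors' deconvolution), might look as follows; all parameter values are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def kernel_from_gyro(omega, dt, focal_px, size=31):
    """Rasterize the image-plane trajectory implied by angular rates."""
    angles = np.cumsum(omega * dt, axis=0)     # integrate rates to angles (rad)
    traj = angles * focal_px                   # small-angle pixel shifts
    k = np.zeros((size, size))
    c = size // 2
    for dx, dy in traj:
        ix, iy = int(round(c + dx)), int(round(c + dy))
        if 0 <= ix < size and 0 <= iy < size:
            k[iy, ix] += 1.0
    return k / k.sum()

def wiener_deblur(img, k, nsr=1e-2):
    """Frequency-domain Wiener filter; kernel zero-padded to image size."""
    K = fft2(k, s=img.shape)
    return np.real(ifft2(fft2(img) * np.conj(K) / (np.abs(K) ** 2 + nsr)))

rng = np.random.default_rng(0)
omega = rng.normal(0.0, 0.5, (200, 2))         # simulated shake (rad/s)
kern = kernel_from_gyro(omega, dt=1e-3, focal_px=1500)
sharp = rng.random((128, 128))
blurry = np.real(ifft2(fft2(sharp) * fft2(kern, s=sharp.shape)))
restored = wiener_deblur(blurry, kern)
```

Because the same kernel is used to blur and deblur, the Wiener filter's phase cancels and no global shift is introduced.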
Analytical Properties of Time-of-Flight PET Data
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.
2015-01-01
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the “bow-tie” property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
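For intuition, the data model in the first sentence can be written down directly. This small sketch, with an assumed Gaussian TOF kernel width and a toy disk image, evaluates one TOF-weighted line integral numerically; it is an illustration of the model, not the paper's code.

```python
import numpy as np

def tof_line_integral(f, p0, p1, tof_center, tof_sigma, n=512):
    """Integrate image f(x, y) along the segment p0->p1, weighted by a
    Gaussian TOF kernel centered at distance tof_center along the LOR."""
    t = np.linspace(0.0, 1.0, n)
    xy = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
    s = t * np.linalg.norm(p1 - p0)            # arc length along the LOR (mm)
    w = np.exp(-0.5 * ((s - tof_center) / tof_sigma) ** 2)
    return np.sum(f(xy[:, 0], xy[:, 1]) * w) * (s[1] - s[0])

disk = lambda x, y: ((x - 10.0) ** 2 + y ** 2 < 50.0 ** 2).astype(float)
p0, p1 = np.array([-300.0, 0.0]), np.array([300.0, 0.0])
# TOF kernel centered 310 mm from p0, i.e. on the disk; sigma = 45 mm is an
# assumed value roughly corresponding to few-hundred-ps timing resolution.
val = tof_line_integral(disk, p0, p1, tof_center=310.0, tof_sigma=45.0)
```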
Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.
Jun, Kyungtaek; Yoon, Seokhwan
2017-01-25
Since X-ray tomography is now widely adopted in many different areas, robust routines for handling tomographic data are increasingly crucial to obtaining high-quality reconstructions. Although several techniques exist, a more automated method for removing the errors that hinder clear image reconstruction would be helpful. Here, we propose an alternative method and a new algorithm using the sinogram and a fixed point. We also introduce the physical concept of the center of attenuation (CA) to show how this fixed point applies to the reconstruction of images affected by the errors categorized in this article. Our technique showed promising performance in restoring images with translation and vertical-tilt errors.
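The authors' fixed-point algorithm is not reproduced here, but the underlying consistency idea for a 360-degree parallel-beam scan can be illustrated with a simpler, generic estimator: a projection at angle theta should mirror the one at theta + 180 degrees, so cross-correlating the two yields the rotation-axis offset. The synthetic numbers below are invented for the check.

```python
import numpy as np

def estimate_center(p0, p180):
    """Rotation-axis offset (pixels from detector center) from a projection
    pair 180 degrees apart, via cross-correlation with the flipped view."""
    flipped = p180[::-1]
    corr = np.correlate(p0 - p0.mean(), flipped - flipped.mean(), mode="full")
    shift = np.argmax(corr) - (len(p0) - 1)
    return shift / 2.0

# Synthetic check: rotation axis 7 px off detector center, point object
# 20 px from the axis (both values invented for the demonstration).
x = np.arange(256)
p0 = np.exp(-0.5 * ((x - (127.5 + 7.0 + 20.0)) / 5.0) ** 2)
p180 = np.exp(-0.5 * ((x - (127.5 + 7.0 - 20.0)) / 5.0) ** 2)
print(estimate_center(p0, p180))   # -> 7.0
```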
Adaptive optical imaging through complex living plant cells
NASA Astrophysics Data System (ADS)
Tamada, Yosuke; Hayano, Yutaka; Murata, Takashi; Oya, Shin; Honma, Yusuke; Kanazawa, Minoru; Miura, Noriaki; Hasebe, Mitsuyasu; Kamei, Yasuhiro; Hattori, Masayuki
2017-04-01
Live-cell imaging using fluorescent molecules is now essential for biological research. However, images of living cells are accompanied by blur, which becomes stronger with depth inside cells and tissues. This image blur is caused by the disturbance of light passing through optically inhomogeneous living cells and tissues. Here, we show adaptive optics (AO) imaging of living plant cells. AO was developed in astronomy to correct the disturbance of light caused by atmospheric turbulence. We developed an AO microscope effective for observing living plant cells with strong disturbance from chloroplasts, and successfully obtained clear images inside plant cells.
Drag queens' use of language and the performance of blurred gendered and racial identities.
Mann, Stephen L
2011-01-01
Building on Barrett (1998), this study provides a sociolinguistic analysis of the language used by Suzanne, a European-American drag queen, during her on-stage performance in the southeastern United States. Suzanne uses wigs and costumes to portray a female character on stage, but never hides the fact that she is biologically male. She is also a member of a predominantly African-American cast. Through her creative use of linguistic features such as style-mixing (i.e., the use of linguistic features shared across multiple language varieties) and expletives, Suzanne is able to perform an identity that frequently blurs gender and racial lines.
Blurred lines: the General Medical Council guidance on doctors and social media.
Cork, Nick; Grant, Paul
2016-06-01
Digital technology in the early 21st century has introduced significant changes to everyday life and the ways in which we practise medicine. It is important that the ease and practicality of accessing and disseminating information does not intrude on the high standards expected of doctors, and that the boundaries between professional and public life do not become blurred through the increasing adoption of social media. This said, as with any such profound disruption, the social media age could be responsible for driving a new understanding of what it means to be a medical professional. © 2016 Royal College of Physicians.
Feasibility of infrared Earth tracking for deep-space optical communications.
Chen, Yijiang; Hemmati, Hamid; Ortiz, Gerry G
2012-01-01
Infrared (IR) Earth thermal tracking is a viable option for optical communications to distant planet and outer-planetary missions. However, blurring due to finite receiver aperture size distorts IR Earth images in the presence of Earth's nonuniform thermal emission and limits its applicability. We demonstrate a deconvolution algorithm that can overcome this limitation and reduce the error from blurring to a negligible level. The algorithm is applied successfully to Earth thermal images taken by the Mars Odyssey spacecraft. With the solution to this critical issue, IR Earth tracking is established as a viable means for distant planet and outer-planetary optical communications. © 2012 Optical Society of America
Image restoration techniques as applied to Landsat MSS and TM data
Meyer, David
1987-01-01
Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data: the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands a tradeoff between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
Acute air pollution-related symptoms among residents in Chiang Mai, Thailand.
Wiwatanadate, Phongtape
2014-01-01
Open burnings (forest fires, agricultural, and garbage burnings) are the major sources of air pollution in Chiang Mai, Thailand. A time series prospective study was conducted in which 3025 participants were interviewed for 19 acute symptoms with the daily records of ambient air pollutants: particulate matter less than 10 μm in size (PM10), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3). PM10 was positively associated with blurred vision with an adjusted odds ratio (OR) of 1.009. CO was positively associated with lower lung and heart symptoms with adjusted ORs of 1.137 and 1.117. NO2 was positively associated with nosebleed, larynx symptoms, dry cough, lower lung symptoms, heart symptoms, and eye irritation with the range of adjusted ORs (ROAORs) of 1.024 to 1.229. SO2 was positively associated with swelling feet, skin symptoms, eye irritation, red eyes, and blurred vision with ROAORs of 1.205 to 2.948. Conversely, O3 was negatively related to running nose, burning nose, dry cough, body rash, red eyes, and blurred vision with ROAORs of 0.891 to 0.979.
Maskless EUV lithography: an already difficult technology made even more complicated?
NASA Astrophysics Data System (ADS)
Chen, Yijian
2012-03-01
In this paper, we present the research progress made in maskless EUV lithography and discuss the emerging opportunities for this disruptive technology. We show that a nanomirror-based maskless approach is one path to cost-effective and defect-free EUV lithography, rather than a further complication of an already difficult technology. The focus of our work is to optimize the existing vertical comb process and scale the mirror size down from several microns to the sub-micron regime. Nanomirror device scaling, system configuration, and design issues are addressed. We also report our theoretical and simulation study of reflective EUV nanomirror-based imaging behavior. Dense line/space patterns are formed with an EUV nanomirror array by assigning a phase shift of π to neighboring nanomirrors. Our simulation results show that phase/intensity imbalance is an inherent characteristic of maskless EUV lithography but poses only a manageable challenge to CD control and process window. The image blur induced by wafer scan and EUV laser jitter is discussed and a blurred-imaging theory is constructed. This blur effect is found to degrade the image contrast at a level that mainly depends on the wafer scan speed.
Examination of an Electronic Patient Record Display Method to Protect Patient Information Privacy.
Niimi, Yukari; Ota, Katsumasa
2017-02-01
Electronic patient records facilitate the provision of safe, high-quality medical care. However, because personnel can view almost all stored information, this study designed a display method using a mosaic blur (pixelation) to temporarily conceal information patients do not want shared. This study developed an electronic patient records display method for patient information that balanced the patient's desire for personal information protection against the need for information sharing among medical personnel. First, medical personnel were interviewed about the degree of information required for both individual duties and team-based care. Subsequently, they tested a mock display method that partially concealed information using a mosaic blur, and they were interviewed about the effectiveness of the display method that ensures patient privacy. Participants better understood patients' demand for confidentiality, suggesting increased awareness of patients' privacy protection. However, participants also indicated that temporary concealment of certain information was problematic. Other issues included the inconvenience of removing the mosaic blur to obtain required information and risk of insufficient information for medical care. Despite several issues with using a display method that temporarily conceals information according to patient privacy needs, medical personnel could accept this display method if information essential to medical safety remains accessible.
Elad, M; Feuer, A
1997-01-01
The three main tools in single-image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set-theoretic approach using projection onto convex sets (POCS). This paper utilizes these known tools to propose a unified methodology for the more complicated problem of superresolution restoration, in which an improved-resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of ML with the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
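A toy version of the ML branch of this methodology, under assumptions the paper does not make (integer translations only, a fixed Gaussian blur, 2x decimation, and plain gradient descent with an illustrative step size), can be written as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

r = 2                                    # decimation factor (assumed)

def forward(x, dx, dy):
    """One low-res measurement: warp -> blur -> downsample."""
    return gaussian_filter(nd_shift(x, (dy, dx), order=1), 1.0)[::r, ::r]

def adjoint(e, dx, dy):
    """Transpose of forward: zero-fill upsample -> blur -> inverse warp."""
    up = np.zeros((e.shape[0] * r, e.shape[1] * r))
    up[::r, ::r] = e
    return nd_shift(gaussian_filter(up, 1.0), (-dy, -dx), order=1)

rng = np.random.default_rng(1)
truth = rng.random((64, 64))
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
frames = [forward(truth, dx, dy) + 0.01 * rng.normal(size=(32, 32))
          for dx, dy in shifts]

x = np.zeros_like(truth)                 # ML estimate by gradient descent
for _ in range(200):
    grad = sum(adjoint(forward(x, dx, dy) - y, dx, dy)
               for (dx, dy), y in zip(shifts, frames))
    x -= 0.2 * grad                      # minimize sum ||A_k x - y_k||^2
```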
Blind motion image deblurring using nonconvex higher-order total variation model
NASA Astrophysics Data System (ADS)
Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo
2016-09-01
We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which effectively eliminates the staircase effect in the deblurred image; we also employ an image sparsity prior to improve the quality of edge recovery. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization terms, reflecting the sparsity and smoothness of the motion blur kernel. Third, because the intrinsic nonconvexity of the proposed model makes it computationally difficult to solve, we propose a two-stage iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration, and we discuss its convergence. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both visual quality and quantitative measures.
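As a much-reduced companion to this model, the sketch below performs non-blind deblurring with a smoothed first-order (convex) TV penalty and plain gradient descent; it omits the nonconvex higher-order terms, the sparsity prior, the kernel estimation and the split Bregman solver, and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def tv_grad(u, eps=1e-2):
    """Gradient of sum(sqrt(|grad u|^2 + eps^2)) via forward differences."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def deblur_tv(g, k, lam=2e-3, step=0.2, iters=200):
    kf = k[::-1, ::-1]                   # flipped kernel acts as the adjoint
    u = g.copy()
    for _ in range(iters):
        resid = convolve(u, k, mode="reflect") - g
        u -= step * (convolve(resid, kf, mode="reflect") + lam * tv_grad(u))
    return u

rng = np.random.default_rng(6)
k = np.ones((1, 9)) / 9.0                # assumed horizontal motion kernel
sharp = rng.random((64, 64))
blurry = convolve(sharp, k, mode="reflect") + 0.01 * rng.normal(size=(64, 64))
restored = deblur_tv(blurry, k)
```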
Quantitative fluorescence microscopy and image deconvolution.
Swedlow, Jason R
2013-01-01
Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards; any image-processing algorithm used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Proper use of these methods demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. Copyright © 1998 Elsevier Inc. All rights reserved.
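Richardson-Lucy is one widely used restoration algorithm of the kind described here (the chapter itself does not prescribe a specific one); a compact generic implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Estimate the object whose convolution with the PSF reproduces the
    measured image (multiplicative, Poisson-motivated update)."""
    est = np.full(image.shape, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```

The multiplicative update keeps the estimate non-negative, one reason this family of algorithms suits photon-counting data.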
Resolving ability and image discretization in the visual system.
Shelepin, Yu E; Bondarko, V M
2004-02-01
Psychophysiological studies were performed to measure the spatial threshold for resolving two "points" and the thresholds for discriminating their orientation as a function of the distance between the points. The data were compared with the scattering of a "point" by the eye's optics, the packing density of cones in the fovea, and the characteristics of the receptive fields of ganglion cells in the foveal area of the retina and of neurons in the corresponding projection zones of the primary visual cortex. The point-scattering function was shown to need to cover several receptors: preliminary blurring of the image by the eye's optics decreases the subsequent discretization noise created by the receptor matrix. The concordance of these parameters supports optimal operation of the spatial elements of the neural network that determine the resolving ability of the visual system at different levels of visual information processing. It is suggested that the special geometry of the receptive fields of neurons in the striate cortex, which is concordant with the statistics of natural scenes, results in a further increase in the signal-to-noise ratio.
Tear dysfunction and the cornea: LXVIII Edward Jackson Memorial Lecture.
Pflugfelder, Stephen C
2011-12-01
To describe the cause and consequence of tear dysfunction-related corneal disease. Perspective on effects of tear dysfunction on the cornea. Evidence is presented on the effects of tear dysfunction on corneal morphology, function, and health, as well as efficacy of therapies for tear dysfunction-related corneal disease. Tear dysfunction is a prevalent eye disease and the most frequent cause for superficial corneal epithelial disease that results in corneal barrier disruption, an irregular optical surface, light scattering, optical aberrations, and exposure and sensitization of pain-sensing nerve endings (nociceptors). Tear dysfunction-related corneal disease causes irritation and visual symptoms such as photophobia and blurred and fluctuating vision that may decrease quality of life. Dysfunction of 1 or more components of the lacrimal functional unit results in changes in tear composition, including elevated osmolarity and increased concentrations of matrix metalloproteinases, inflammatory cytokines, and chemokines. These tear compositional changes promote disruption of tight junctions, alter differentiation, and accelerate death of corneal epithelial cells. Corneal epithelial disease resulting from tear dysfunction causes eye irritation and decreases visual function. Clinical and basic research has improved understanding of the pathogenesis of tear dysfunction-related corneal epithelial disease, as well as treatment outcomes. Copyright © 2011 Elsevier Inc. All rights reserved.
Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang
2014-08-01
We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution of the liquid crystal by using the Ericksen-Leslie equations of motion; we then calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionality of an ELC lens.
NASA Astrophysics Data System (ADS)
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
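An iterative shrinkage/thresholding skeleton of the stated optimization problem (ℓ2 k-space fidelity plus ℓ1 wavelet sparsity) is sketched below. It uses a fixed soft threshold rather than the paper's noise-adaptive one and no edge-correlation prior; the Haar wavelet, the sampling mask and the threshold value are illustrative assumptions.

```python
import numpy as np
import pywt

def ista_mri(kspace, mask, wavelet="haar", thresh=0.02, iters=100):
    x = np.zeros(mask.shape)
    for _ in range(iters):
        # Gradient step on ||M F x - y||^2 (F is the unitary 2D FFT).
        resid = mask * np.fft.fft2(x, norm="ortho") - kspace
        x = x - np.fft.ifft2(resid, norm="ortho").real
        # Soft-threshold the wavelet detail coefficients.
        coeffs = pywt.wavedec2(x, wavelet, level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl)
            for lvl in coeffs[1:]]
        x = pywt.waverec2(coeffs, wavelet)
    return x

rng = np.random.default_rng(7)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0; img[24:40, 24:40] = 0.5
mask = rng.random((64, 64)) < 0.35        # 35% random k-space sampling
kspace = mask * np.fft.fft2(img, norm="ortho")
recon = ista_mri(kspace, mask)
```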
Computed tomography in the evaluation of Crohn disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, H.I.; Gore, R.M.; Margulis, A.R.
1983-02-01
The abdominal and pelvic computed tomographic examinations in 28 patients with Crohn disease were analyzed and correlated with conventional barium studies, sinograms, and surgical findings. Mucosal abnormalities such as aphthous lesions, pseudopolyps, and ulcerations were only imaged by conventional techniques. Computed tomography proved superior in demonstrating the mural, serosal, and mesenteric abnormalities such as bowel wall thickening (82%), fibrofatty proliferation of mesenteric fat (39%), mesenteric abscess (25%), inflammatory reaction of the mesentery (14%), and mesenteric lymphadenopathy (18%). Computed tomography was most useful clinically in defining the nature of mass effects, separation, or displacement of small bowel segments seen on small bowel series. Although conventional barium studies remain the initial diagnostic procedure in evaluating Crohn disease, computed tomography can be a useful adjunct in resolving difficult clinical and radiologic diagnostic problems.
Deshpande, Shrikant; Xing, Aitang; Metcalfe, Peter; Holloway, Lois; Vial, Philip; Geurts, Mark
2017-10-01
The aim of this study was to validate the accuracy of an exit detector-based dose reconstruction tool for helical tomotherapy (HT) delivery quality assurance (DQA). The exit detector-based DQA tool was developed for patient-specific HT treatment verification. The tool performs a dose reconstruction on the planning image using the sinogram measured by the HT exit detector with no objects in the beam (i.e., static couch), and compares the reconstructed dose to the planned dose. Three vendor-supplied ("TomoPhant") plans with a cylindrical solid-water ("cheese") phantom were used for validation. Each "TomoPhant" plan was modified with intentional multileaf collimator leaf open time (MLC LOT) errors to assess the sensitivity and robustness of the tool. Four scenarios were tested: leaf 32 "stuck open," leaf 42 "stuck open," and random leaf LOTs shortened by mean values of 2% and 4%. A static-couch DQA procedure was then run five times (once with the unmodified sinogram and four times with modified sinograms) for each of the three "TomoPhant" treatment plans. First, the original optimized delivery plan was compared with the original machine-agnostic delivery plan; then the original optimized plans with a known modification applied (intentional MLC LOT error) were compared to the corresponding error-plan exit detector measurements. An absolute dose comparison between calculated and ion chamber (A1SL, Standard Imaging, Inc., WI, USA) measured dose was performed for the unmodified "TomoPhant" plans. A 3D gamma evaluation (2%/2 mm global) was performed by comparing the planned dose ("original planned dose" for unmodified plans and "adjusted planned dose" for each intentional error) to the exit detector-reconstructed dose for all three "TomoPhant" plans. Finally, DQA for 119 clinical (treatment length <25 cm) and three cranio-spinal irradiation (CSI) plans was measured with both the ArcCHECK phantom (Sun Nuclear Corp., Melbourne, FL, USA) and the exit detector DQA tool to assess the time required for DQA and the similarity between the two methods. The measured ion chamber dose agreed to within 1.5% of the reconstructed dose computed by the exit detector DQA tool on the cheese phantom for all unmodified "TomoPhant" plans. Excellent agreement in gamma pass rate (>95%) was observed between the planned and reconstructed dose for all "TomoPhant" plans considered using the tool. The gamma pass rate from the 119 clinical plan DQA measurements was 94.9% ± 1.5% and 91.9% ± 4.37% for the exit detector DQA tool and the ArcCHECK phantom measurements (P = 0.81), respectively. For the clinical plans (treatment length <25 cm), the average time required to perform DQA was 24.7 ± 3.5 and 39.5 ± 4.5 min using the exit detector DQA tool and the ArcCHECK phantom, respectively, whereas the average times required for the three CSI treatments were 35 ± 3.5 and 90 ± 5.2 min, respectively. The exit detector tool has been demonstrated to be faster for performing DQA, with equivalent sensitivity for detecting MLC LOT errors relative to a conventional phantom-based QA method. In addition, the comprehensive MLC performance evaluation and the features of the reconstructed dose provide additional insight into understanding DQA failures and the clinical relevance of DQA results. © 2017 American Association of Physicists in Medicine.
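For reference, the 2%/2 mm global gamma evaluation used above can be computed, in its brute-force textbook form, as in the following sketch; this is a generic implementation, not the tool's own code, and the grid spacing and low-dose threshold are assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dd=0.02, dta_mm=2.0,
                    threshold=0.1):
    """Fraction of reference points (above a low-dose threshold) with
    gamma <= 1, searching a small neighborhood for the best match."""
    norm = ref.max()
    search = int(np.ceil(2 * dta_mm / spacing_mm))
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < threshold * norm:
                continue
            total += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    r2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                    d2 = ((ev[ii, jj] - ref[i, j]) / (dd * norm)) ** 2
                    best = min(best, r2 / dta_mm ** 2 + d2)
            passed += best <= 1.0
    return passed / max(total, 1)

rng = np.random.default_rng(9)
ref = rng.random((40, 40)) + 1.0
ev = ref + 0.01 * rng.normal(size=ref.shape)
print(gamma_pass_rate(ref, ev))           # ~1.0 for near-identical doses
```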
An implementation of the NiftyRec medical imaging library for PIXE-tomography reconstruction
NASA Astrophysics Data System (ADS)
Michelet, C.; Barberet, P.; Desbarats, P.; Giovannelli, J.-F.; Schou, C.; Chebil, I.; Delville, M.-H.; Gordillo, N.; Beasley, D. G.; Devès, G.; Moretto, P.; Seznec, H.
2017-08-01
A new development of the TomoRebuild software package is presented, including "thick sample" correction for non-linear X-ray production (NLXP) and X-ray absorption (XA). As in previous versions, C++ with standard libraries was used for easier portability. Data reduction requires several steps, which may be run either from a command-line instruction or via a user-friendly interface developed as a portable Java plugin for ImageJ. All experimental and reconstruction parameters can be easily modified, either directly in the ASCII parameter files or via the ImageJ interface. A detailed user guide in English is provided. Sinograms and final reconstructed images are generated in common binary formats that can be read by most public-domain graphics software. New MLEM and OSEM methods are proposed, using optimized methods from the NiftyRec medical imaging library. An overview of the different medical imaging methods that have been used for ion beam microtomography applications is presented. In TomoRebuild, PIXET data reduction is performed for each chemical element independently and separately from STIMT, except for two steps where the fusion of STIMT and PIXET data is required: the calculation of the correction matrix and the normalization of PIXET data to obtain mass fraction distributions. Correction matrices for NLXP and XA are calculated using procedures extracted from the DISRA code, taking into account a large X-ray detection solid angle. For this, the 3D STIMT mass density distribution is used, assuming a homogeneous global composition. A first example of a PIXET experiment using two detectors is presented. Reconstruction results are compared and found to be in good agreement between the different codes: FBP, the NiftyRec MLEM and OSEM of the TomoRebuild software package, the original DISRA, its accelerated version provided in JPIXET, and the accelerated MLEM version of JPIXET, with or without correction.
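The MLEM update at the heart of such reconstruction engines has a compact closed form. The generic sketch below uses a dense random matrix as a stand-in for the real projector; it is an illustration of the algorithm, not the TomoRebuild or NiftyRec code.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """x <- x * A^T(y / A x) / A^T 1  (divisions are element-wise)."""
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)  # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(2)
A = rng.random((200, 100))               # toy stand-in for the projector
x_true = rng.random(100)
y = rng.poisson(50 * (A @ x_true)) / 50.0
x_rec = mlem(A, y)
```

OSEM accelerates this by applying the same update cyclically to disjoint subsets of the data rows.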
The use of cues to convergence and accommodation in naïve, uninstructed participants.
Horwood, Anna M; Riddell, Patricia M
2008-07-01
A remote haploscopic video refractor was used to assess vergence and accommodation responses in a group of 32 emmetropic, orthophoric, symptom-free young adults naïve to vision experiments in a minimally instructed setting. Picture targets were presented at four positions between 2 m and 33 cm. Blur, disparity and looming cues were presented in combination or separately to assess their contributions to the total near response in a within-subjects design. Response gain for both vergence and accommodation reduced markedly whenever disparity was excluded, with much smaller effects when blur and proximity were excluded. Despite the clinical homogeneity of the participant group, there were also some individual differences.
Optic disk findings in hypervitaminosis A.
Marcus, D F; Turgeon, P; Aaberg, T M; Wiznia, R A; Wetzig, P C; Bovino, J A
1985-07-01
Three cases of papilledema secondary to chronic excessive vitamin A intake are presented, and the optic disk changes are documented with intravenous fluorescein angiography. Two of the three patients reported in this study were symptomatic with blurred vision and systemic complaints. The symptoms of blurred vision and systemic complaints disappeared within one week, and papilledema resolved over several months after discontinuance of vitamin A. The fluorescein angiographic changes observed in the optic disk of patients with hypervitaminosis A are similar to those associated with other known causes of papilledema. Since vitamin A is a nonprescription drug, and its indiscriminate use is potentially great, any history of vitamin ingestion should be elicited during the evaluation of papilledema.
Bilateral serous macular detachment in a patient with nephrotic syndrome.
Bilge, Ayse D; Yaylali, Sevil A; Yavuz, Sara; Simsek, İlke B
2018-01-01
The purpose of this study was to report the case of a woman with nephrotic syndrome who presented with blurred vision because of bilateral serous macular detachment. Case report and literature review. A 55-year-old woman with a history of essential hypertension, diabetes, and nephrotic syndrome presented with blurred vision in both eyes. Her fluorescein angiography revealed dye leakage in the early phases and subretinal pooling in the late phases, and optical coherence tomography scans confirmed the presence of subretinal fluid in the subfoveal area. In nephrotic syndrome, especially when accompanied by high blood pressure, fluid accumulation in the retinal layers may occur. Serous macular detachment must be kept in mind when treating these patients.
Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi
2009-09-01
A modified version of the row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, or dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative ¹⁵O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included 3 sequential PET scans for C¹⁵O, ¹⁵O₂ and H₂¹⁵O. First, the number of main iterations (N_it) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N_it was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO₂) and oxygen extraction fraction (OEF), were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolutions matched FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N_it = 3 was determined as the optimal parameter for ¹⁵O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in low-activity regions, compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO₂ and OEF were 46.1 ± 4.5 (mL/100 mL/min), 3.35 ± 0.40 (mL/100 mL), 3.42 ± 0.35 (mL/100 mL/min) and 42.1 ± 3.8 (%), respectively. These values were comparable to the corresponding values for FBP images: 46.6 ± 4.6 (mL/100 mL/min), 3.34 ± 0.39 (mL/100 mL), 3.48 ± 0.34 (mL/100 mL/min) and 42.4 ± 3.8 (%), respectively. DRAMA reconstruction is applicable to quantitative ¹⁵O PET studies and is superior to conventional FBP in terms of image quality.
Sinogram restoration for ultra-low-dose x-ray multi-slice helical CT by nonparametric regression
NASA Astrophysics Data System (ADS)
Jiang, Lu; Siddiqui, Khan; Zhu, Bin; Tao, Yang; Siegel, Eliot
2007-03-01
During the last decade, x-ray computed tomography (CT) has been applied to screen large asymptomatic smoking and nonsmoking populations for early lung cancer detection. Because larger populations are involved in such screening exams, more and more attention has been paid to low-dose, even ultra-low-dose, x-ray CT. However, reducing CT radiation exposure increases the noise level in the sinogram, thereby degrading the quality of reconstructed CT images and causing more streak artifacts near the apices of the lung. Thus, reducing the noise levels and streak artifacts in low-dose CT images has become a meaningful topic. Since multi-slice helical CT has replaced conventional stop-and-shoot CT in many clinical applications, this research focused mainly on noise reduction in multi-slice helical CT. The experimental data were provided by a Siemens SOMATOM Sensation 16-slice helical CT scanner and included both conventional CT data acquired under a 120 kVp, 119 mA protocol and ultra-low-dose CT data acquired under a 120 kVp, 10 mA protocol, with all other settings the same as for conventional CT. In this paper, a nonparametric smoothing method with thin-plate smoothing splines and a roughness penalty is proposed to restore the ultra-low-dose CT raw data. Each projection frame is first divided into blocks, and the 2D data in each block are fitted to a thin-plate smoothing-spline surface by minimizing a roughness-penalized least squares objective function. In this way, the noise in each ultra-low-dose CT projection is reduced by leveraging the information contained not only within each individual projection profile, but also among nearby profiles. Finally, the restored ultra-low-dose projection data are fed into a standard filtered back projection (FBP) algorithm to reconstruct CT images. The reconstruction results, together with a comparison between the proposed approach and the traditional method, are given in the results and discussion section and show the effectiveness of the proposed thin-plate-based nonparametric regression method.
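A rough sketch of the blockwise idea, using SciPy's smoothing bivariate splines as a stand-in for the authors' penalized thin-plate formulation (the block size and the roughness penalty are illustrative guesses, and the projection size is assumed divisible by the block size):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def restore_projection(proj, block=16, s_factor=1.0):
    """Denoise one projection by fitting a smoothing spline per block."""
    out = np.empty_like(proj, dtype=float)
    ny, nx = proj.shape
    for by in range(0, ny, block):
        for bx in range(0, nx, block):
            sub = proj[by:by + block, bx:bx + block].astype(float)
            yy, xx = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
            spl = SmoothBivariateSpline(
                xx.ravel(), yy.ravel(), sub.ravel(),
                s=s_factor * sub.size * sub.var())   # roughness penalty
            out[by:by + block, bx:bx + block] = spl(
                np.arange(sub.shape[1]), np.arange(sub.shape[0])).T
    return out

noisy = np.random.default_rng(3).poisson(50, (64, 64)).astype(float)
smoothed = restore_projection(noisy)
```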
Tan, Huan; Hoge, W Scott; Hamilton, Craig A; Günther, Matthias; Kraft, Robert A
2011-07-01
Arterial spin labeling is a noninvasive technique that can quantitatively measure cerebral blood flow. While arterial spin labeling traditionally employs 2D echo planar imaging or spiral acquisition trajectories, single-shot 3D gradient echo and spin echo (GRASE) is gaining popularity in arterial spin labeling due to its inherent signal-to-noise ratio advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE with a periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) trajectory is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3 × 3 × 5 mm³ nominal voxel size with a pulsed arterial spin labeling preparation sequence. Data from five healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in cerebral blood flow quantification, 3D GRASE PROPELLER demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability and robustness against motion, making it suitable for routine clinical use. Copyright © 2011 Wiley-Liss, Inc.
An improved non-uniformity correction algorithm and its hardware implementation on FPGA
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong
2017-09-01
Non-uniformity in infrared focal plane arrays (IRFPAs) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement the corresponding NUC algorithms. This paper therefore proposes an improved neural-network-based NUC algorithm built on a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is used to obtain an accurate desired image, decreasing artificial ghosting. Then, the projection-based motion detection algorithm is used to determine whether the correction coefficients should be updated, which overcomes the problem of image blurring. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are used to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm effectively eliminates fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design uses fewer logic elements in the FPGA and fewer clock cycles to process one frame of the image.
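A toy frame-recursive version of such a scheme is sketched below: per-pixel gain/offset correction, a desired image from a local mean (standing in for the guided filter), an LMS update of the coefficients, and a crude row-projection motion gate. The learning rate, window size and motion threshold are invented for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

class NeuralNUC:
    def __init__(self, shape, lr=0.05, win=7, motion_thresh=0.5):
        self.g = np.ones(shape)            # per-pixel gain
        self.o = np.zeros(shape)           # per-pixel offset
        self.lr, self.win, self.mt = lr, win, motion_thresh
        self.prev_rows = None

    def step(self, frame):
        corrected = self.g * frame + self.o
        rows = corrected.mean(axis=1)      # row projection of the frame
        moving = (self.prev_rows is not None and
                  np.abs(rows - self.prev_rows).mean() > self.mt)
        self.prev_rows = rows
        if moving:                         # update only when the scene moves,
            desired = uniform_filter(corrected, self.win)
            err = corrected - desired      # so fixed-pattern noise is not
            self.g -= self.lr * err * frame  # confused with static detail
            self.o -= self.lr * err
        return corrected
```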
Identification of handheld objects for electro-optic/FLIR applications
NASA Astrophysics Data System (ADS)
Moyer, Steve K.; Flug, Eric; Edwards, Timothy C.; Krapels, Keith A.; Scarbrough, John
2004-08-01
This paper describes research on the determination of the fifty-percent probability of identification cycle criterion (N50) for two sets of handheld objects. The first set consists of 12 objects commonly held in a single hand; the second set consists of 10 objects commonly held in both hands. The sets include not only typical civilian handheld objects but also potentially lethal ones; a pistol, a cell phone, a rocket-propelled grenade (RPG) launcher, and a broom are examples. The discrimination of these objects is an inherent part of homeland security, force protection, and general population security. Objects from each set were imaged in the visible and mid-wave infrared (MWIR) spectra. Various levels of blur were then applied to these images, and the blurred images were used in a forced-choice perception experiment. Results were analyzed as a function of blur level and target size to give identification probability as a function of resolvable cycles on target. These results are applicable to handheld-object target acquisition estimates for visible imaging systems and MWIR systems. This research provides guidance for the design and analysis of electro-optical systems and forward-looking infrared (FLIR) systems for use in homeland security, force protection, and general population security.
Is HE 0436-4717 Anemic? A deep look at a bare Seyfert 1 galaxy
NASA Astrophysics Data System (ADS)
Bonson, K.; Gallo, L. C.; Vasudevan, R.
2015-06-01
A multi-epoch, multi-instrument analysis of the Seyfert 1 galaxy HE 0436-4717 is conducted using optical to X-ray data from XMM-Newton and Swift (including the Burst Alert Telescope). Fitting of the UV-to-X-ray spectral energy distribution shows little evidence of extinction and the X-ray spectral analysis does not confirm previous reports of deep absorption edges from O VIII. HE 0436-4717 is a `bare' Seyfert with negligible line-of-sight absorption making it ideal to study the central X-ray emitting region. Three scenarios were considered to describe the X-ray data: partial covering absorption, blurred reflection, and soft Comptonization. All three interpretations describe the 0.5-10.0 keV spectra well. Extrapolating the models to 100 keV results in poorer fits for the partial covering model. When also considering the rapid variability during one of the XMM-Newton observations, the blurred reflection model appears to describe all the observations in the most self-consistent manner. If adopted, the blurred reflection model requires a very low iron abundance in HE 0436-4717. We consider the possibilities that this is an artefact of the fitting process, but it appears possible that it is intrinsic to the object.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.
Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.
Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun
2018-06-01
Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose one of the first approaches aimed specifically at deblurring sequential MSI images, distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, accounting for the different wavelengths used to capture different images in the MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
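The similarity term can be illustrated with a standard histogram-based mutual information estimator between two spectral images; this is a generic estimator with an assumed bin count, not the paper's exact formulation.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(5)
a = rng.random((64, 64))
b = 0.7 * a + 0.3 * rng.random((64, 64))  # partially dependent image
print(mutual_information(a, b))
```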
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LED as well as extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted considering incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
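Schematically, the per-pixel fusion described above might look like the following sketch, which combines an intensity likelihood, a Gaussian around the flow-predicted position, and a prior map from earlier detections under a naive independence assumption; the widths and the synthetic frame are invented for the demo.

```python
import numpy as np

def led_probability(frame, predicted_xy, prior_map, sigma_px=6.0):
    inten = np.clip(frame / frame.max(), 0, 1)   # brightness as likelihood
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    d2 = (xx - predicted_xy[0]) ** 2 + (yy - predicted_xy[1]) ** 2
    flow = np.exp(-0.5 * d2 / sigma_px ** 2)     # flow-predicted position
    p = inten * flow * prior_map                 # naive-Bayes combination
    return p / max(p.sum(), 1e-12)

rng = np.random.default_rng(8)
frame = rng.random((120, 160)); frame[40:44, 60:66] += 4.0  # bright LED blob
prior = np.ones((120, 160))
post = led_probability(frame, predicted_xy=(62, 41), prior_map=prior)
iy, ix = np.unravel_index(np.argmax(post), post.shape)      # LED estimate
```

The posterior peak stays well-defined even when motion blur flattens the raw intensity peak, which is the point of fusing the three sources of evidence.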
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or 'spot'. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods, which involve the analysis of blur caused by a structured aperture, can be used to obtain the spot's spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Finally, synthetic data sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
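A scaled-down illustration of this machinery: random-walk Metropolis sampling of a single spot parameter (a Gaussian spot radius) from a noisy blurred-edge profile, with a flat positivity prior. The real method samples a full nonparametric spot profile; the data model, noise level and proposal width here are assumptions.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-3, 3, 200)
def edge_model(sigma):                    # step edge blurred by Gaussian spot
    return 0.5 * (1 + erf(x / (np.sqrt(2) * sigma)))

rng = np.random.default_rng(4)
noise = 0.02
data = edge_model(0.4) + noise * rng.normal(size=x.size)

def log_post(sigma):
    if sigma <= 0:
        return -np.inf                    # non-negativity constraint
    return -0.5 * np.sum((data - edge_model(sigma)) ** 2) / noise ** 2

samples, cur = [], 1.0                    # deliberately poor starting guess
lp = log_post(cur)
for _ in range(5000):
    prop = cur + 0.05 * rng.normal()      # random-walk proposal
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
        cur, lp = prop, lpp
    samples.append(cur)
post = np.array(samples[1000:])           # discard burn-in
print("spot sigma: %.3f +/- %.3f" % (post.mean(), post.std()))
```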
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been shown to reduce blocky effects by being aware of higher-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model are also discussed. The resulting energy functional is solved using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh). A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restore both single- and multi-channel images contaminated with multiplicative noise, and it permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images demonstrate the effectiveness of the proposed model.
Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2017-12-01
Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.
How face blurring affects body language processing of static gestures in women and men.
Proverbio, A M; Ornaghi, L; Gabaro, V
2018-05-14
The role of facial coding in body language comprehension was investigated by ERP recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic) that could be congruent or incongruent with their captions. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men for congruent stimuli, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration did not affect accuracy in women, as reflected by omission percentages, nor did it reduce their cognitive potentials, suggesting better comprehension of face-deprived pantomimes. The N170 response (modulated by congruity and face presence) peaked later in men than in women. The late positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information, or better at understanding body language in face-deprived social interactions.
Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution
Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry
2014-01-01
One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques, including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), strive to overcome the inherent resolution limits of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time-efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. In addition to the radiography tally, two alternative stochastic detector models were developed: a perfect energy-integrating detector and a detector based on the energy absorbed in the detector material. The three image detector models were validated by comparing calculated scatter-to-primary ratios (SPRs) with published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy-integrating detector and the blur-free absorbed-energy detector model were, on average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy-integrating detector were, on average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed-energy detector model, the calculated SPRs were, on average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy-integrating detector model and the blur-free absorbed-energy detector model can be used to simulate image detectors, whereas for conventional x-ray imaging at higher energies, the blur-free absorbed-energy detector model is the most appropriate image detector model. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng
2016-12-21
In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is designed to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in computed tomography, a 3D phantom model of electrical conductivity was set up. Both sphere and cylinder scanning geometries were adopted in the computer simulation. Then, using finite element analysis, the distributions of the eddy current, the acoustic source, and the acoustic pressure were obtained with the transducer model matrix. Next, using singular value decomposition, the inverse of the transducer model matrix and the corresponding reconstruction algorithm were worked out. The acoustic source and conductivity images were reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer were made to evaluate the algorithms. Finally, an experiment was performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those obtained with the previous one: the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows correlation coefficients of 90.2% in the sphere scanning geometry and 86.35% in the cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which accounts for the characteristics of the transducer, can markedly improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the transducer position and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
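The SVD-based inversion step described above can be illustrated with a truncated pseudo-inverse; the matrix sizes and truncation threshold below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 128))    # stand-in for the measured transducer model matrix
    p = A @ rng.standard_normal(128)       # simulated acoustic pressure data

    # Truncated-SVD pseudo-inverse: discard small singular values to stabilize the inversion.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > 1e-3 * s[0]
    A_pinv = Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

    source_estimate = A_pinv @ p           # reconstructed acoustic source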
The effect of amorphous selenium detector thickness on dual-energy digital breast imaging
Hu, Yue-Houng; Zhao, Wei
2014-01-01
Purpose: Contrast-enhanced (CE) imaging techniques for both planar digital mammography (DM) and three-dimensional (3D) digital breast tomosynthesis (DBT) applications require x-ray photon energies higher than the k-edge of iodine (33.2 keV). As a result, x-ray tube potentials much higher (>40 kVp) than those typical for screening mammography must be utilized. Amorphous selenium (a-Se) based direct conversion flat-panel imagers (FPI) have been widely used in DM and DBT imaging systems. The a-Se layer is typically 200 μm thick with quantum detection efficiency (QDE) >87% for x-ray energies below 26 keV. However, QDE decreases substantially above this energy. To improve object detectability in either CE-DM or CE-DBT, it may be advantageous to increase the thickness (dSe) of the a-Se layer. Increasing dSe will improve the detective quantum efficiency (DQE) at the higher energies used in CE imaging. However, because most DBT systems are designed with partially isocentric geometries, where the gantry moves about a stationary detector, the oblique entry of x-rays will introduce additional blur to the system. The present investigation quantifies the effect of a-Se thickness on imaging performance for both CE-DM and CE-DBT, discussing the competing effects of improved photon absorption and blurring from oblique entry of x-rays. Methods: In this paper, a cascaded linear system model (CLSM) was used to investigate the effect of dSe on the imaging performance (i.e., MTF, NPS, and DQE) of FPI in CE-DM and CE-DBT. The results from the model are used to calculate the ideal observer signal-to-noise ratio, d′, which is used as a figure-of-merit to determine the total effect of increasing dSe for CE-DM and CE-DBT. Results: The results of the CLSM show that increasing dSe causes a substantial increase in QDE at the high energies used in CE-DM. However, at the oblique projection angles used in DBT, the increased length of penetration through a-Se introduces additional image blur. The reduced MTF and DQE at high spatial frequencies lead to reduced two-dimensional d′. These losses in projection image resolution may subsequently decrease the 3D d′, but the degree of degradation depends largely on the DBT reconstruction algorithm. For a filtered backprojection (FBP) algorithm with spectral apodization and slice-thickness filters, which dominate the blur in reconstructed images at oblique angles, the effect of oblique entry of x-rays on the 3D d′ is minimal. Thus, increasing dSe results in an improvement in d′ for both CE-DM and CE-DBT with typical FBP reconstruction parameters. Conclusions: Increased dSe improves CE breast imaging performance by increasing the QDE of detectors at higher energies, e.g., 49 kVp. Although there is additional blur in the oblique-angle projections of a DBT scan, the overall 3D d′ for DBT is not degraded because the dominant source of blur at these angles is the reconstruction filters of the employed FBP algorithm. PMID:25370637
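As a rough illustration of the ideal-observer figure-of-merit used above, the sketch below integrates a task signal spectrum against a system MTF and noise power spectrum in one dimension; all three curves are invented stand-ins (and the full analysis is two-dimensional), so this shows only the form of the computation, not the CLSM outputs from the paper.

    import numpy as np

    f = np.linspace(0.01, 5.0, 500)              # spatial frequency (cycles/mm)
    W = np.exp(-(np.pi * 0.5 * f) ** 2)          # task (signal) spectrum, stand-in
    MTF = np.exp(-0.5 * f)                       # system MTF, stand-in
    NPS = 1e-6 * (0.2 + np.exp(-f))              # noise power spectrum, stand-in

    # Prewhitening ideal-observer SNR (1D simplification of the usual 2D integral).
    d_prime = np.sqrt(np.trapz((W * MTF) ** 2 / NPS, f))
    print(d_prime)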
Roy, Nathalie; Roy, Gilles; Bissonnette, Luc R; Simard, Jean-Robert
2004-05-01
We measure with a gated intensified CCD camera the cross-polarized backscattered light from a linearly polarized laser beam penetrating a cloud made of spherical particles. In accordance with previously published results we observe a clear azimuthal pattern in the recorded images. We show that the pattern is symmetrical, that it originates from second-order scattering, and that higher-order scattering causes blurring that increases with optical depth. We also find that the contrast in the symmetrical features can be related to measurement of the optical depth. Moreover, when the blurring contributions are identified and subtracted, the resulting pattern provides a pure second-order scattering measurement that can be used for retrieval of droplet size.
An Automated Blur Detection Method for Histological Whole Slide Imaging
Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2013-01-01
Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
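A generic sharpness feature of the kind such blur detectors are built on is the variance of the Laplacian computed per image tile; this stand-in sketch is not the authors' trained statistical model, and the tile size is an arbitrary assumption.

    import numpy as np
    from scipy import ndimage

    def tile_sharpness(gray, tile=256):
        """Map of Laplacian variance per tile; low values flag candidate blurred regions."""
        lap = ndimage.laplace(gray.astype(float))
        h, w = gray.shape
        return np.array([[lap[i:i + tile, j:j + tile].var()
                          for j in range(0, w, tile)]
                         for i in range(0, h, tile)])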
Restoration of high-resolution AFM images captured with broken probes
NASA Astrophysics Data System (ADS)
Wang, Y. F.; Corrigan, D.; Forman, C.; Jarvis, S.; Kokaram, A.
2012-03-01
An artefact is induced by damage to the scanning probe when an atomic force microscope (AFM) captures a material surface structure at nanoscale resolution. This artefact takes the form of a dramatic distortion rather than a traditional blurring artefact. In practice, it is not easy to prevent damage to the scanning probe. However, by using natural-image deblurring techniques from the image processing domain, a comparatively reliable estimate of the real sample surface structure can be generated. This paper introduces a novel Hough transform technique together with a Bayesian deblurring algorithm to remove this type of artefact. The deblurring successfully removes the artefacts from the AFM images, and the details of the fibril surface topography are well preserved.
Improved deconvolution of very weak confocal signals.
Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
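A minimal sketch of the prefiltering step reported above, applied before handing the stack to the deconvolution software; the sigma values are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def prefilter_stack(stack, sigma_z=0.5, sigma_xy=1.0):
        """Gaussian-prefilter a (z, y, x) confocal stack to suppress background noise."""
        return gaussian_filter(stack.astype(float), sigma=(sigma_z, sigma_xy, sigma_xy))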
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable approach to low-dose CT, particularly in cone-beam CT (CBCT) applications, although advanced iterative image reconstructions leave varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. An alternative approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to estimate the missing projection data, and compared its performance with other interpolation techniques.
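A hedged sketch of the view-interpolation idea: a small network refines a linearly upsampled sparse-view sinogram. The architecture, channel widths, and training details are illustrative assumptions, not the network from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ViewInterpCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, sparse_sino, n_full_views):
            # Linearly upsample along the view axis, then learn a residual correction.
            up = F.interpolate(sparse_sino, size=(n_full_views, sparse_sino.shape[-1]),
                               mode='bilinear', align_corners=False)
            return up + self.net(up)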
McPhail, Neil; Mustard, Robert A.
1966-01-01
The embryology, anatomy and pathology of branchial cleft anomalies are discussed and 87 cases reviewed. The most frequent anomaly was branchial cleft cyst, of which there were 77 cases. Treatment in all cases consisted of complete excision. There were five cases of external branchial sinus and five cases of complete branchial fistula. Sinograms were helpful in demonstrating these lesions. Excision presented little difficulty. No proved case of branchiogenic carcinoma has been found in the Toronto General Hospital. Five cases are described in which the original diagnosis was branchiogenic carcinoma; in four of these a primary tumour has already been found. The authors believe that the diagnosis of branchiogenic carcinoma should never be accepted until repeated examinations over a period of at least five years have failed to reveal a primary tumour. PMID:5901161
Synthesis of blind source separation algorithms on reconfigurable FPGA platforms
NASA Astrophysics Data System (ADS)
Du, Hongtao; Qi, Hairong; Szu, Harold H.
2005-03-01
Recent advances in intelligence technology have boosted the development of micro-Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization of unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can be carried out independently for each pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, under the assumption that neighboring pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. Two levels of parallelization can be explored: pixel-based parallelization and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show in this paper how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis targeting the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU on the 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility and performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to spatially variant jitter restoration for micro-UAV deployment.
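For readers unfamiliar with ICA-based BSS, the toy example below unmixes two synthetic sources with scikit-learn's FastICA; it illustrates the unmixing idea only, not the pixel-parallel FPGA implementation the paper describes.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 2000)
    S = np.c_[np.sin(40 * t), np.sign(np.sin(67 * t))]   # two independent sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])               # unknown mixing matrix
    X = S @ A.T                                          # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)   # recovered sources, up to permutation and scale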
Decreased accommodation during decompensation of distance exotropia.
Horwood, Anna M; Riddell, Patricia M
2012-04-01
Disparity cues can be a major drive to accommodation via the convergence accommodation to convergence (CA/C) linkage, but, on decompensation of exotropia, disparity cues are extinguished by suppression, so this drive is lost. This study investigated accommodation and vergence responses to disparity, blur and proximal cues in a group of distance exotropes aged between 4 and 11 years, both during decompensation and when exotropic. 19 participants with distance exotropia were tested using a PlusoptiX SO4 photorefractor set in a remote haploscopic device that assessed simultaneous vergence and accommodation to a range of targets incorporating different combinations of blur, disparity and proximal cues at four fixation distances between 2 m and 33 cm. Responses on decompensation were compared with those from the same children when their deviation was controlled. Manifest exotropia was more common in the more impoverished cue conditions. When decompensated for near, mean accommodation gain for the all-cue (naturalistic) target was significantly reduced (p<0.0001), with a resultant mean under-accommodation of 2.33 D at 33 cm. The profile of near-cue usage changed after decompensation, with blur and proximity driving the residual responses, but these remaining cues did not compensate for the loss of accommodation caused by the removal of disparity. Accommodation often reduces on decompensation of distance exotropia as the drive from convergence is extinguished, providing a further reason to try to prevent decompensation for near.
Ways of Viewing Pictorial Plasticity
2017-01-01
The plastic effect is historically used to denote various forms of stereopsis. The vivid impression of depth often associated with binocular stereopsis can also be achieved in other ways, for example, using a synopter. Accounts of this go back over a hundred years. These ways of viewing all aim to diminish sensorial evidence that the picture is physically flat. Although various viewing modes have been proposed in the literature, their effects have never been compared. In the current study, we compared three viewing modes: monocular blur, synoptic viewing, and free viewing (using a placebo synopter). By designing a physical embodiment that was indistinguishable for the three experimental conditions, we kept observers naïve with respect to the differences between them; 197 observers participated in an experiment where the three viewing modes were compared by performing a rating task. Results indicate that synoptic viewing causes the largest plastic effect. Monocular blur scores lower than synoptic viewing but is still rated significantly higher than the baseline conditions. The results strengthen the idea that synoptic viewing is not due to a placebo effect. Furthermore, monocular blur has been verified for the first time as a way of experiencing the plastic effect, although the effect is smaller than synoptic viewing. We discuss the results with respect to the theoretical basis for the plastic effect. We show that current theories are not described with sufficient details to explain the differences we found. PMID:28491270
Dixon water-fat separation in PROPELLER MRI acquired with two interleaved echoes.
Schär, Michael; Eggers, Holger; Zwart, Nicholas R; Chang, Yuchou; Bakhru, Akshay; Pipe, James G
2016-02-01
To propose a novel combination of robust Dixon fat suppression and motion insensitive PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) MRI. Two different echoes were acquired interleaved in each shot enabling water-fat separation on individual blades. Fat, which was blurred in standard PROPELLER because the water-fat shift (WFS) rotated with the blades, was shifted back in each blade. Additionally, field maps obtained from the water-fat separation were used to unwarp off-resonance-induced shifts in each blade. PROPELLER was then applied to the water, corrected fat, or recombined water-fat blades. This approach was compared quantitatively in volunteers with regard to motion estimation and signal-to-noise ratio (SNR) to a standard PROPELLER acquisition with minimal WFS and fat suppression. Shifting the fat back in each blade reduced errors in the translation correction. SNR in the proposed Dixon PROPELLER was 21% higher compared with standard PROPELLER with identical scan time. High image quality was achieved even when the volunteers were moving during data acquisition. Furthermore, sharp water-fat borders and image details were seen in areas where standard PROPELLER suffered from blurring when acquired with a low readout bandwidth. The proposed method enables motion-insensitive PROPELLER MRI with robust fat suppression and reduced blurring. Additionally, fat images are available if desired. © 2015 Wiley Periodicals, Inc.
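A minimal sketch of the underlying two-point Dixon separation applied per blade, assuming ideal in-phase and opposed-phase echoes; the real method also applies the estimated field map to unwarp off-resonance shifts, which this sketch omits.

    import numpy as np

    def dixon_two_point(in_phase, opposed_phase):
        """Return (water, fat) magnitude images from complex in-/opposed-phase echoes."""
        water = 0.5 * (in_phase + opposed_phase)
        fat = 0.5 * (in_phase - opposed_phase)
        return np.abs(water), np.abs(fat)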
Peripheral refraction and image blur in four meridians in emmetropes and myopes.
Shen, Jie; Spors, Frank; Egan, Donald; Liu, Chunming
2018-01-01
The peripheral refractive error of the human eye has been hypothesized to be a major stimulus for the development of its central refractive error. The purpose of this study was to investigate the changes in peripheral refractive error across the horizontal, vertical and two diagonal meridians in emmetropic and low, moderate and high myopic adults. Thirty-four adult subjects were recruited and aberrations were measured using a modified commercial aberrometer. We then computed the refractive error in power vector notation from the second-order Zernike terms. Statistical analysis was performed to evaluate the differences in refractive error profiles between the subject groups and across all measured visual field meridians. Small amounts of relative myopic shift were observed in emmetropic and low myopic subjects. However, moderate and high myopic subjects exhibited a relative hyperopic shift in all four meridians. The astigmatism components J0 and J45 showed quadratic or linear changes depending on the visual field meridian. Peripheral sphero-cylindrical retinal image blur increased in emmetropic eyes in most of the measured visual fields. The findings indicate an overall emmetropic or slightly relatively myopic periphery (spherical or oblate retinal shape) in emmetropes and low myopes, while moderate and high myopes form a relatively hyperopic periphery (prolate, or less oblate, retinal shape). In general, human emmetropic eyes demonstrate a higher amount of peripheral retinal image blur.
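The power-vector computation mentioned above follows the standard formulas relating second-order Zernike coefficients to M, J0 and J45 (Thibos-style); sign conventions vary between instruments, so treat this as a sketch under the usual assumptions, with coefficients in microns and pupil radius in millimeters.

    import numpy as np

    def zernike_to_power_vectors(c20, c22, c2m2, r_mm):
        """Second-order Zernike coefficients (microns) over pupil radius r_mm -> diopters."""
        M = -4.0 * np.sqrt(3.0) * c20 / r_mm**2     # spherical equivalent
        J0 = -2.0 * np.sqrt(6.0) * c22 / r_mm**2    # with/against-the-rule astigmatism
        J45 = -2.0 * np.sqrt(6.0) * c2m2 / r_mm**2  # oblique astigmatism
        return M, J0, J45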
Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.
Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong
2016-06-01
Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.
NASA Astrophysics Data System (ADS)
Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn
2016-05-01
Accurate non-contact temperature measurement is important for optimizing manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Performing accurate single-wavelength thermography presents numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm that converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or given a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, which occur in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, producing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal for developing and validating simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of the target.
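The best-fit emission-curve idea can be sketched as a least-squares fit of emissivity and temperature to a measured spectrum under a uniform-emissivity greybody assumption; the wavelength grid, initial guesses, and synthetic "measured" spectrum below are illustrative stand-ins, not the paper's solver.

    import numpy as np
    from scipy.optimize import curve_fit

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

    def greybody(lam, eps, T):
        """Greybody spectral radiance at wavelength lam (m) and temperature T (K)."""
        return eps * (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

    lam = np.linspace(1e-6, 5e-6, 64)          # wavelength grid (m)
    measured = greybody(lam, 0.3, 1900.0)      # stand-in for a measured spectrum
    (eps_fit, T_fit), _ = curve_fit(greybody, lam, measured, p0=(0.5, 1500.0))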
Facilitating normative judgments of conditional probability: frequency or nested sets?
Yamagishi, Kimihiko
2003-01-01
Recent probability judgment research contrasts two opposing views. Some theorists have emphasized the role of frequency representations in facilitating probabilistic correctness; opponents have noted that visualizing the probabilistic structure of the task sufficiently facilitates normative reasoning. In the current experiment, the following conditional probability task, an isomorph of the "Problem of Three Prisoners," was tested: "A factory manufactures artificial gemstones. Each gemstone has a 1/3 chance of being blurred, a 1/3 chance of being cracked, and a 1/3 chance of being clear. An inspection machine removes all cracked gemstones and retains all clear gemstones. However, the machine removes 1/2 of the blurred gemstones. What is the chance that a gemstone is blurred after the inspection?" A 2 x 2 design was administered. The first variable was the use of frequency instruction. The second manipulation was the use of a roulette-wheel diagram that illustrated a "nested-sets" relationship between the prior and posterior probabilities. Results from two experiments showed that frequency alone had modest effects, while the nested-sets instruction achieved superior facilitation of normative reasoning. The third experiment compared the roulette-wheel diagram to tree diagrams that also showed the nested-sets relationship. The roulette-wheel diagram outperformed the tree diagrams in facilitating probabilistic reasoning. Implications for understanding the nature of intuitive probability judgments are discussed.
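For reference, the normative answer to the gemstone task follows from Bayes' rule over the three equally likely states and the stated retention rates, as this short check computes.

    priors = {'blurred': 1/3, 'cracked': 1/3, 'clear': 1/3}
    retained = {'blurred': 1/2, 'cracked': 0.0, 'clear': 1.0}

    # Joint probability of each state surviving inspection, then normalize.
    joint = {k: priors[k] * retained[k] for k in priors}
    p_blurred = joint['blurred'] / sum(joint.values())
    print(p_blurred)   # (1/6) / (1/6 + 0 + 1/3) = 1/3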
Impact of multi-focused images on recognition of soft biometric traits
NASA Astrophysics Data System (ADS)
Chiesa, V.; Dugelay, J. L.
2016-09-01
In video surveillance, the estimation of semantic traits such as gender and age has long been a debated topic because of the uncontrolled environment: while lighting and pose variations have been studied extensively, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras has made it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras record not only the RGB values but also information related to the direction of light rays: the additional data make it possible to render the image at different focal planes after the acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocus on gender recognition and age estimation. Evaluations are computed with up-to-date, competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images, with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.
Kinematic model for the space-variant image motion of star sensors under dynamical conditions
NASA Astrophysics Data System (ADS)
Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun
2015-06-01
A kinematic description of a star spot in the focal plane is presented for star sensors under dynamic conditions, involving all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations on the focal plane correspond to slightly different orientations and extents of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error over eight successive iterations of <0.002 pixel is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating compensation algorithms for motion-blurred images.
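A sketch of Richardson-Lucy deconvolution with a centroid-based stopping rule of the kind described above; for simplicity this version stops when the centroid moves less than the tolerance between successive iterations (the paper tracks eight successive iterations), and the convolution handling is an assumption.

    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.ndimage import center_of_mass

    def richardson_lucy(blurred, psf, tol=0.002, max_iter=200):
        est = np.full_like(blurred, blurred.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        prev_c = np.array(center_of_mass(est))
        for _ in range(max_iter):
            # Multiplicative RL update: est *= correlate(blurred / (est * psf), psf)
            ratio = blurred / np.maximum(fftconvolve(est, psf, mode='same'), 1e-12)
            est *= fftconvolve(ratio, psf_mirror, mode='same')
            c = np.array(center_of_mass(est))
            if np.linalg.norm(c - prev_c) < tol:   # centroid has settled -> stop
                break
            prev_c = c
        return est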
Pathways to labor force exit: work transitions and work instability.
Mutchler, J E; Burr, J A; Pienta, A M; Massagli, M P
1997-01-01
The purpose of this study is to examine alternative pathways to labor force exit among older men. Based on the life course perspective, we distinguish between crisp exits from the labor force, which are characterized as unidirectional, and blurred transition patterns, which include repeated exits, entrances, and unemployment spells. Using longitudinal data from the 1984 Survey of Income and Program Participation, we find that one-quarter of the sample of men aged 55 to 74 at first interview experienced at least one transition in labor force status over a 28-month observation period. Fewer than half of these can be characterized as crisp exits from the labor force. Our multivariate analysis suggests that blurred transition patterns are likely part of an effort to maintain economic status in later life.
Deblurring traffic sign images based on exemplars
Qiu, Tianshuang; Luan, Shengyang; Song, Haiyu; Wu, Linxiu
2018-01-01
Motion blur appearing in traffic sign images may lead to poor recognition results, and it is therefore of great significance to study how to deblur such images. In this paper, a novel exemplar-based method for deblurring traffic sign images is proposed, together with several related techniques. First, an exemplar dataset construction method based on a multiple-size partition strategy is proposed to lower the computational cost of exemplar matching. Second, a matching criterion based on gradient information and the entropy correlation coefficient is proposed to enhance matching accuracy. Third, the L0.5 norm is introduced as the regularization term to maintain the sparsity of the blur kernel. Experiments verify the superiority of the proposed approaches, and extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm. PMID:29513677
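The entropy correlation coefficient used in the matching criterion can be sketched from a joint histogram of two patches as ECC = 2·I(X;Y)/(H(X)+H(Y)); the bin count below is an illustrative assumption.

    import numpy as np

    def ecc(patch_a, patch_b, bins=32):
        """Entropy correlation coefficient between two image patches."""
        joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        mi = entropy(px) + entropy(py) - entropy(pxy.ravel())
        return 2.0 * mi / (entropy(px) + entropy(py))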
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badkul, R; Pokhrel, D; Jiang, H
2016-06-15
Purpose: Intra-fractional tumor motion due to respiration may compromise dose delivery in SBRT of lung tumors. Even when sufficient margins are used to ensure there is no geometric miss of the target volume, a dose blurring effect may be present due to motion and could impact tumor coverage if motions are large. In this study we investigated the dose blurring effect for open fields as well as for lung SBRT patients planned using 2 non-coplanar dynamic conformal arcs (NCDCA) and a few conformal beams (CB) calculated with a Monte Carlo (MC) based algorithm, utilizing a phantom with a 2D diode array (MapCheck) and an ion chamber. Methods: SBRT lung patients were planned on the Brainlab iPlan system using a 4D-CT scan; the ITV was contoured on the MIP image set and verified on all breathing-phase image sets to account for breathing motion, and then a 5 mm margin was applied to generate the PTV. Plans were created using two NCDCA and 4-5 CB 6 MV photon beams calculated using the XVMC MC algorithm. Three SBRT patient plans were transferred to a phantom with MapCheck and a 0.125 cc ion chamber inserted in the middle of the phantom to calculate dose. Open fields of 3×3, 5×5 and 10×10 were also calculated on this phantom. The phantom was placed on a motion platform with motion amplitudes of 5, 10, 20 and 30 mm and a duty cycle of 4 seconds. Measurements were carried out for the open fields as well as the 3 patient plans, both static and at various degrees of motion. MapCheck planar dose and ion-chamber readings were collected and compared with static measurements and computed values to evaluate the dosimetric effect of motion on tumor coverage. Results: To eliminate the complexity of patient plans, 3 simple open fields were also measured to observe the dose blurring effect introduced by motion. All motion-measured ion-chamber values were normalized to the corresponding static value. For the 5×5 and 10×10 open fields, normalized central-axis ion-chamber values were 1.00 for all motions, but for 3×3 they were 1.00 up to 10 mm motion and 0.97 and 0.87 for 20 and 30 mm motion, respectively. For the SBRT plans, central-axis dose values were within 1% up to 10 mm motion but decreased by an average of 5% for 20 mm and 8% for 30 mm motion. MapCheck comparison with static measurements showed penumbra enlargement due to motion blurring at the field edges; for 3×3, 5×5 and 10×10, pass rates fell from 88% to 12%, 100% to 43% and 100% to 63%, respectively, as motion increased from 5 to 30 mm. For the SBRT plans, the mean MapCheck pass rate decreased from 73.8% to 39.5% as motion increased from 5 mm to 30 mm. Conclusion: A dose blurring effect was seen in open fields as well as SBRT lung plans using NCDCA with CB, which worsens with increasing respiratory motion and decreasing field size (tumor size). To reduce this effect, larger margins and appropriate motion reduction techniques should be utilized.
Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment
NASA Technical Reports Server (NTRS)
Parker, Joey K.
1986-01-01
The Mechanics of Granular Materials (MGM) program is planned to provide experimental determinations of the mechanics of granular materials under very low gravity conditions. The initial experiments will use small glass beads as the granular material, and precise tracking of individual beads during the test is desired. Real-time video images of the experimental specimen were taken with a television camera and subsequently digitized by a frame grabber installed in a microcomputer. Easily identified red tracer beads were randomly scattered throughout the test specimen. A set of Pascal programs was written for processing and analyzing the digitized images. Filtering the image with Laplacian, dilation, and blurring filters and then applying a threshold function produced a binary (black on white) image that clearly identified the red beads. The centroid and area of each bead were then determined. Analyzing a series of the images determined individual red bead displacements throughout the experiment. The system can provide displacement accuracies on the order of 0.5 to 1 pixel if the image is taken directly from the video camera. Digitizing an image from a video cassette recorder introduces an additional repeatability error of 0.5 to 1 pixel. Other programs were written to provide hardcopy prints of the digitized images on a dot-matrix printer.
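A modern equivalent of the described pipeline: threshold, label connected components, and extract centroids and areas. scipy.ndimage stands in for the original Pascal programs, and the threshold value is an arbitrary assumption.

    import numpy as np
    from scipy import ndimage

    def bead_centroids(gray, threshold=0.5):
        """Return centroids and pixel areas of bright blobs in a grayscale image."""
        binary = gray > threshold
        labels, n = ndimage.label(binary)               # connected-component labeling
        idx = range(1, n + 1)
        centroids = ndimage.center_of_mass(binary, labels, idx)
        areas = ndimage.sum(binary, labels, idx)        # pixel count per bead
        return centroids, areas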
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delayed integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, matched to the objects' movement, result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects of the practical application are discussed and key features of the camera are listed.
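The shift-and-accumulate core of digital TDI can be sketched in a few lines; the sketch assumes the object advances exactly one sensor row per frame and ignores the Bayer demosaicing the camera performs.

    import numpy as np

    def digital_tdi(frames, n_stages=8):
        """frames: (n_frames, n_stages, width) row readouts -> accumulated output lines."""
        n_frames, _, width = frames.shape
        out = np.zeros((n_frames, width))
        for stage in range(n_stages):
            # Object line L passes under row `stage` at frame L + stage,
            # so shift each stage's readout back in time before accumulating.
            out[:n_frames - stage] += frames[stage:, stage, :]
        return out   # lines near the end accumulate fewer stages (edge effect)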
An effective method for cirrhosis recognition based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Chen, Yameng; Sun, Gengxin; Lei, Yiming; Zhang, Jinpeng
2018-04-01
Liver disease is one of the main threats to human health, and cirrhosis is the critical phase in the development of liver lesions, especially hepatoma. Many clinical cases are still influenced to some degree by the subjectivity of physicians, and objective factors such as illumination, scale, and edge blurring can affect clinicians' judgment. This subjectivity in turn affects the accuracy of diagnosis and the treatment of patients. To address this difficulty and improve the recognition rate of liver cirrhosis, we propose a multi-feature fusion method to obtain more robust representations of texture in ultrasound liver images; the texture features we extract include the local binary pattern (LBP), the gray level co-occurrence matrix (GLCM), and the histogram of oriented gradients (HOG). In this paper, we fuse multiple features to distinguish cirrhotic from normal liver based on a parallel combination concept, and the experimental results show that the classifier is effective for cirrhosis recognition, as evaluated by a satisfactory classification rate, the sensitivity and specificity of the receiver operating characteristic (ROC), and computation time. The proposed method should help improve the accuracy of cirrhosis diagnosis and prevent the progression of liver lesions toward hepatoma.
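A sketch of the parallel feature fusion using scikit-image: LBP, GLCM, and HOG descriptors from one ultrasound patch are concatenated into a single vector. Parameter choices (radii, bins, distances, cell sizes) are illustrative assumptions, not the paper's settings.

    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, hog

    def fused_features(patch_u8):
        """Concatenate LBP histogram, GLCM properties, and HOG for an 8-bit patch."""
        lbp = local_binary_pattern(patch_u8, P=8, R=1, method='uniform')
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                                for p in ('contrast', 'homogeneity', 'energy')])
        hog_feats = hog(patch_u8, orientations=9, pixels_per_cell=(16, 16),
                        cells_per_block=(2, 2))
        return np.hstack([lbp_hist, glcm_feats, hog_feats])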
Higher order reconstruction for MRI in the presence of spatiotemporal field perturbations.
Wilm, Bertram J; Barmet, Christoph; Pavan, Matteo; Pruessmann, Klaas P
2011-06-01
Despite continuous hardware advances, MRI is frequently subject to field perturbations that are of higher than first order in space and thus violate the traditional k-space picture of spatial encoding. Sources of higher order perturbations include eddy currents, concomitant fields, thermal drifts, and imperfections of higher order shim systems. In conventional MRI with Fourier reconstruction, they give rise to geometric distortions, blurring, artifacts, and error in quantitative data. This work describes an alternative approach in which the entire field evolution, including higher order effects, is accounted for by viewing image reconstruction as a generic inverse problem. The relevant field evolutions are measured with a third-order NMR field camera. Algebraic reconstruction is then formulated such as to jointly minimize artifacts and noise in the resulting image. It is solved by an iterative conjugate-gradient algorithm that uses explicit matrix-vector multiplication to accommodate arbitrary net encoding. The feasibility and benefits of this approach are demonstrated by examples of diffusion imaging. In a phantom study, it is shown that higher order reconstruction largely overcomes variable image distortions that diffusion gradients induce in EPI data. In vivo experiments then demonstrate that the resulting geometric consistency permits straightforward tensor analysis without coregistration. Copyright © 2011 Wiley-Liss, Inc.
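The "generic inverse problem" view above amounts to solving the normal equations by conjugate gradients, with the encoding operator applied only through matrix-vector products; the dense random encoding matrix below is a stand-in for a measured higher-order encoding model, and the sizes are arbitrary.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(2)
    n_samples, n_voxels = 4096, 1024
    E = rng.standard_normal((n_samples, n_voxels)) \
        + 1j * rng.standard_normal((n_samples, n_voxels))   # encoding matrix stand-in
    s = E @ rng.standard_normal(n_voxels)                   # acquired signal samples

    # Solve E^H E x = E^H s; only matvecs with E are needed, so arbitrary
    # (e.g., higher-order) encoding models fit the same machinery.
    normal_op = LinearOperator((n_voxels, n_voxels), dtype=complex,
                               matvec=lambda x: E.conj().T @ (E @ x))
    x, info = cg(normal_op, E.conj().T @ s, maxiter=50)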
Chang, Guoping; Chang, Tingting; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2010-12-01
Respiratory motion artifacts and partial volume effects (PVEs) are two degrading factors that affect the accuracy of image quantification in PET/CT imaging. In this article, the authors propose a joint motion and PVE correction approach (JMPC) to improve PET quantification by simultaneously correcting for respiratory motion artifacts and PVE in patients with lung/thoracic cancer. The objective of this article is to describe this approach and evaluate its performance using phantom and patient studies. The proposed joint correction approach incorporates a model of motion blurring, PVE, and object size/shape. A motion blurring kernel (MBK) is then estimated from the deconvolution of the joint model, while the activity concentration (AC) of the tumor is estimated from the normalization of the derived MBK. To evaluate the performance of this approach, two phantom studies and eight patient studies were performed. In the phantom studies, two motion waveforms, a linear sinusoidal and a circular motion, were used to control the motion of a sphere, while in the patient studies, all participants were instructed to breathe regularly. For the phantom studies, the resultant MBK was compared to the true MBK by measuring a correlation coefficient between the two kernels. The measured sphere AC derived from the proposed method was compared to the true AC as well as the ACs in images exhibiting PVE only and images exhibiting both PVE and motion blurring. For the patient studies, the resultant MBK was compared to the motion extent derived from a 4D-CT study, while the measured tumor AC was compared to the AC in images exhibiting both PVE and motion blurring. For the phantom studies, the estimated MBK approximated the true MBK with an average correlation coefficient of 0.91. The tumor ACs following the joint correction technique were similar to the true AC with an average difference of 2%. Furthermore, the tumor ACs on the PVE only images and images with both motion blur and PVE effects were, on average, 75% and 47.5% (10%) of the true AC, respectively, for the linear (circular) motion phantom study. For the patient studies, the maximum and mean AC/SUV on the PET images following the joint correction are, on average, increased by 125.9% and 371.6%, respectively, when compared to the PET images with both PVE and motion. The motion extents measured from the derived MBK and 4D-CT exhibited an average difference of 1.9 mm. The proposed joint correction approach can improve the accuracy of PET quantification by simultaneously compensating for the respiratory motion artifacts and PVE in lung/thoracic PET/CT imaging.
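A heavily simplified 1D illustration of the joint-model idea: if the measured profile is the activity concentration times the convolution of a motion blurring kernel with a static (PVE-only) model, then a damped Fourier division recovers the kernel, and its normalization yields the activity estimate. The regularization constant and the 1D setting are assumptions for illustration only, not the authors' implementation.

    import numpy as np

    def estimate_mbk(measured, static_model, reg=1e-3):
        """Recover (normalized MBK, activity) from measured = AC * (MBK (*) static_model)."""
        M = np.fft.rfft(measured)
        S = np.fft.rfft(static_model)
        mbk_scaled = np.fft.irfft(M * np.conj(S) / (np.abs(S) ** 2 + reg),
                                  n=len(measured))
        ac = mbk_scaled.sum()          # a unit-sum kernel makes the total the AC
        return mbk_scaled / max(ac, 1e-12), ac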
Low-frequency noise effect on terahertz tomography using thermal detectors.
Guillet, J P; Recur, B; Balacey, H; Bou Sleiman, J; Darracq, F; Lewis, D; Mounaix, P
2015-08-01
In this paper, the impact of low-frequency noise on terahertz computed tomography (THz-CT) is analyzed for several measurement configurations and pyroelectric detectors. First, we acquire real noise data from a continuous millimeter-wave tomographic scanner in order to determine its impact on reconstructed images. Second, noise characteristics are quantified according to two distinct acquisition methods by (i) extrapolating from experimental acquisitions a sinogram for different noise backgrounds and (ii) reconstructing the corresponding spatial distributions in a slice using a CT reconstruction algorithm. We then describe the low-frequency noise fingerprint and its influence on reconstructed images. Based on these observations, we demonstrate that some experimental choices can dramatically affect the 3D rendering of reconstructions. We therefore propose experimental methodologies that optimize the resulting quality and accuracy of the 3D reconstructions with respect to the low-frequency noise characteristics observed during acquisitions.
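A minimal sketch of this kind of experiment, assuming scikit-image for the projection/reconstruction pair and a synthetic 1/f noise model for the detector drift (the phantom, noise amplitude, and exponent are all illustrative):

```python
import numpy as np
from skimage.transform import radon, iradon

def one_over_f_noise(shape, alpha=1.0, rng=None):
    """1/f^alpha noise along the projection-angle (acquisition time)
    axis, a rough stand-in for pyroelectric-detector drift."""
    rng = rng or np.random.default_rng(0)
    f = np.fft.rfftfreq(shape[1])
    f[0] = f[1]                                  # avoid divide-by-zero at DC
    spec = (rng.standard_normal((shape[0], f.size))
            + 1j * rng.standard_normal((shape[0], f.size))) / f[None, :] ** alpha
    noise = np.fft.irfft(spec, n=shape[1], axis=1)
    return noise / noise.std()

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                      # simple block object
angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sino = radon(phantom, theta=angles)
noisy = sino + 0.05 * sino.max() * one_over_f_noise(sino.shape)
recon = iradon(noisy, theta=angles, filter_name='ramp')
```

Comparing `recon` for different `alpha` values gives a quick feel for why low-frequency noise leaves a distinctive fingerprint in the reconstructed slice rather than uniform graininess.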
NASA Astrophysics Data System (ADS)
Chen, Hu; Zhang, Yi; Zhou, Jiliu; Wang, Ge
2017-09-01
Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram-domain filtration and iterative reconstruction algorithms, but they need access to raw data whose formats are not transparent to most users. Due to the difficulty of modeling the statistical characteristics in the image domain, the existing methods for directly processing reconstructed images cannot eliminate image noise very well while keeping structural details. Inspired by the idea of deep learning, here we combine the autoencoder, deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves a competitive performance relative to state-of-the-art methods. In particular, our method has been favorably evaluated in terms of noise suppression and structural preservation.
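A minimal PyTorch sketch of the residual encoder-decoder idea: stacked conv/deconv pairs with shortcut connections from encoder inputs to decoder outputs. The depth and filter counts here are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class REDCNNSketch(nn.Module):
    """Residual encoder-decoder CNN in the spirit of RED-CNN:
    no pooling, symmetric conv/deconv stages, shortcut additions."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else ch, ch, 5, padding=2) for i in range(3)])
        self.dec = nn.ModuleList(
            [nn.ConvTranspose2d(ch, 1 if i == 2 else ch, 5, padding=2) for i in range(3)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        skips, out = [], x
        for conv in self.enc:
            skips.append(out)                    # save input of each stage
            out = self.act(conv(out))
        for i, deconv in enumerate(self.dec):
            out = deconv(out) + skips[-(i + 1)]  # shortcut connection
            if i < len(self.dec) - 1:
                out = self.act(out)
        return out                               # final add makes it residual

# denoised = REDCNNSketch()(torch.randn(8, 1, 55, 55))  # patch-based, as in the text
```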
Abella, M; Vicente, E; Rodríguez-Ruano, A; España, S; Lage, E; Desco, M; Udias, J M; Vaquero, J J
2012-11-21
Technological advances have improved the assembly process of PET detectors, resulting in quite small mechanical tolerances. However, in high-spatial-resolution systems, even submillimetric misalignments of the detectors may lead to a notable degradation of image resolution and artifacts. Therefore, the exact characterization of misalignments is critical for optimum reconstruction quality in such systems. This subject has been widely studied for CT and SPECT scanners based on cone beam geometry, but this is not the case for PET tomographs based on rotating planar detectors. The purpose of this work is to analyze misalignment effects in these systems and to propose a robust and easy-to-implement protocol for geometric characterization. The result of the proposed calibration method, which requires no more than a simple calibration phantom, can then be used to generate a correct 3D-sinogram from the acquired list mode data.
A signature dissimilarity measure for trabecular bone texture in knee radiographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.
Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer-generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with a significance level of 0.01 were used. A comparison study between the performance of an SDM-based classification system and two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) was conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (×1.00-×1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64×64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM-based system produced comparable results to the LBP system. For the detection of knee OA, the SDM-based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.
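The core of the measure, a sum of earth mover's distances over two kinds of signatures, can be sketched in a few lines. This assumes the roughness and orientation signatures are already computed as 1-D histograms (the scale-space signature construction itself is the substantive part of the paper and is out of scope here):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sdm(rough_a, rough_b, orient_a, orient_b):
    """Signature dissimilarity measure sketch: sum of earth mover's
    distances between roughness and orientation signatures."""
    def emd(sig_a, sig_b):
        bins = np.arange(sig_a.size)
        return wasserstein_distance(bins, bins,
                                    u_weights=sig_a, v_weights=sig_b)
    return emd(rough_a, rough_b) + emd(orient_a, orient_b)
```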
Temporal lobe epilepsy and focal cortical dysplasia in children: A tip to find the abnormality.
Bartolini, Luca; Whitehead, Matthew T; Ho, Cheng-Ying; Sepeta, Leigh N; Oluigbo, Chima O; Havens, Kathryn; Freilich, Emily R; Schreiber, John M; Gaillard, William D
2017-01-01
To demonstrate an association between magnetic resonance imaging (MRI) findings and pathologic characteristics in children who had surgery for medically refractory epilepsy due to focal cortical dysplasia (FCD). We retrospectively studied 110 children who had epilepsy surgery. Twenty-seven patients with FCD were included. Thirteen had temporal lobe epilepsy (TLE) and 14 had extra-temporal lobe epilepsy (ETLE). Three patients had associated mesial temporal sclerosis. Preoperative 3T MRIs interleaved with nine controls were blindly re-reviewed and categorized according to signal alteration. Pathologic specimens were classified according to the 2011 International League Against Epilepsy (ILAE) classification and compared to MRI studies. Rates of pathology subtypes differed between TLE and ETLE (χ2(3) = 8.57, p = 0.04). FCD type I was more frequent in TLE, whereas FCD type II was more frequent in ETLE. In the TLE group, nine patients had temporal tip abnormalities. They all exhibited gray-white matter blurring with decreased myelination and white matter hyperintense signal. Blurring involved the whole temporal tip, not just the area of dysplasia. These patients were less likely to demonstrate cortical thickening compared to those without temporal tip findings (χ2(1) = 9.55, p = 0.002). Three of them had FCD Ib, three had FCD IIa, two had FCD IIIa, and one had FCD IIb; MRI features could not entirely distinguish between FCD subtypes. TLE patients showed more pronounced findings than ETLE on MRI (χ2(1) = 11.95, p = 0.003, odds ratio [OR] 18.00). In all cases of FCD, isolated blurring was more likely to be associated with FCD II, whereas blurring with decreased myelination was seen with FCD I (χ2(6) = 13.07, p = 0.042). Our study described associations between MRI characteristics and pathology in children with FCD and offered a detailed analysis of temporal lobe tip abnormalities and FCD subtypes in children with TLE. These findings may contribute to the presurgical evaluation of patients with refractory epilepsy. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization from trained deep neural networks has drawn massive public attention in recent years. One of the visualization approaches is to train images that maximize the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two totally inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
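A rough sketch of one regularization step in this spirit, using Gaussian blurring followed by an unsharp-mask style deblur as the pair of (approximately) inverse transformations; sigma, amount, and the placement inside the ascent loop are assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_deblur_step(img, sigma=1.0, amount=0.8):
    """One 'blur then deblur' step: blurring suppresses high-frequency
    noise accumulated by gradient ascent; the unsharp-mask deblur
    restores edge contrast so details are not lost."""
    blurred = gaussian_filter(img, sigma)
    return blurred + amount * (blurred - gaussian_filter(blurred, sigma))

# Inside an activation-maximization loop (pseudo-usage):
#   img += lr * grad_of_neuron_activation(img)
#   img = blur_deblur_step(img)   # constrain the optimization route
```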
Reducing Visual Discomfort with HMDs Using Dynamic Depth of Field.
Carnegie, Kieran; Rhee, Taehyun
2015-01-01
Although head-mounted displays (HMDs) are ideal devices for personal viewing of immersive stereoscopic content, exposure to VR applications on them results in significant discomfort for the majority of people, with symptoms including eye fatigue, headaches, nausea, and sweating. A conflict between accommodation and vergence depth cues on stereoscopic displays is a significant cause of visual discomfort. This article describes the results of an evaluation used to judge the effectiveness of dynamic depth-of-field (DoF) blur in an effort to reduce discomfort caused by exposure to stereoscopic content on HMDs. Using a commercial game engine implementation, study participants reported a reduction of visual discomfort on a simulator sickness questionnaire when DoF blurring was enabled, with a decrease in symptom severity caused by HMD exposure, indicating that dynamic DoF can effectively reduce visual discomfort.
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem, which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.
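For reference, the Wiener-filter baseline that the MRF/EM restoration is compared against can be written as a short FFT-based deconvolution; the flat noise-to-signal spectrum and the parameter value are simplifying assumptions:

```python
import numpy as np

def wiener_deconvolve(degraded, psf, nsr=0.01):
    """Classical Wiener deconvolution in the Fourier domain.
    nsr is the assumed noise-to-signal power ratio (taken flat here)."""
    H = np.fft.fft2(psf, s=degraded.shape)       # blur transfer function
    G = np.fft.fft2(degraded)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```

Unlike the MRF approach, this baseline needs the point spread function to be known in advance, which is exactly what the blind EM procedure avoids.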
Horwood, Anna M.; Riddell, Patricia M.
2014-01-01
Purpose To propose an alternative and practical model to conceptualize clinical patterns of concomitant intermittent strabismus, heterophoria, and convergence and accommodation anomalies. Methods Despite identical ratios, there can be a disparity- or blur-biased “style” in three hypothetical scenarios: normal; high ratio of accommodative convergence to accommodation (AC/A) and low ratio of convergence accommodation to convergence (CA/C); low AC/A and high CA/C. We calculated disparity bias indices (DBI) to reflect these biases and provide early objective data from small illustrative clinical groups that fit these styles. Results Normal adults (n = 56) and children (n = 24) showed disparity bias (adult DBI 0.43 [95% CI, 0.50-0.36], child DBI 0.20 [95% CI, 0.31-0.07]; P = 0.001). Accommodative esotropia (n = 3) showed less disparity-bias (DBI 0.03). In the high AC/A–low CA/C scenario, early presbyopia (n = 22) showed mean DBI of 0.17 (95% CI, 0.28-0.06), compared to DBI of −0.31 in convergence excess esotropia (n=8). In the low AC/A–high CA/C scenario near exotropia (n = 17) showed mean DBI of 0.27. DBI ranged between 1.25 and −1.67. Conclusions Establishing disparity or blur bias adds to AC/A and CA/C ratios to explain clinical patterns. Excessive bias or inflexibility in near-cue use increases risk of clinical problems. PMID:25498466
Are high lags of accommodation in myopic children due to motor deficits?
Labhishetty, Vivek; Bobier, William R
2017-01-01
Children with a progressing myopia exhibit an abnormal pattern of high accommodative lags coupled with high accommodative convergence (AC/A) and high accommodative adaptation. This is not predicted by the current models of accommodation and vergence. Reduced accommodative plant gain and reduced sensitivity to blur have been suggested as potential causes for this abnormal behavior. These etiologies were tested by altering parameters (sensory, controller and plant gains) in the Simulink model of accommodation. Predictions were then compared to the static and dynamic blur accommodation (BA) measures taken using a Badal optical system on 12 children (6 emmetropes and 6 myopes, 8-13 years) and 6 adults (20-35 years). Other critical parameters such as CA/C, AC/A, and accommodative adaptation were also measured. Usable BA responses were classified as either typical or atypical. Typical accommodation data confirmed the abnormal pattern of myopia along with an unchanged CA/C. Main sequence relationship remained invariant between myopic and nonmyopic children. An overall reduction was noted in the response dynamics such as peak velocity and acceleration with age. Neither a reduced plant gain nor reduced blur sensitivity could predict the abnormal accommodative behavior. A model adjustment reflecting a reduced accommodative sensory gain (ASG) coupled with an increased AC cross-link gain and reduced vergence adaptive gain does predict the empirical findings. Empirical measures also showed a greater frequency of errors in accommodative response generation (atypical responses) in both myopic and control children compared to adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Image registration for multi-exposed HDRI and motion deblurring
NASA Astrophysics Data System (ADS)
Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok
2009-02-01
In multi-exposure based image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which have different brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not in a linear relationship: we cannot perfectly equalize or normalize the brightness of each image, and this leads to unstable and inaccurate alignment results. To solve this problem, we applied a probabilistic measure, mutual information, to represent similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration perspective and also analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR or motion deblurring cases using a hand-held camera.
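The similarity measure at the heart of this approach is standard and compact enough to sketch. Mutual information computed from a joint histogram depends only on the statistical co-occurrence of intensities, not on their absolute levels, which is why it tolerates the nonlinear brightness differences described above (binning and usage are assumptions):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals
    nz = pxy > 0                                 # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Registration sketch: score candidate shifts and keep the maximizer.
# best = max(candidate_shifts, key=lambda s: mutual_information(a, shift(b, s)))
```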
unWISE: Unblurred Coadds of the WISE Imaging
NASA Astrophysics Data System (ADS)
Lang, Dustin
2014-05-01
The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.
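The coaddition choice at issue here reduces to a simple difference: averaging resampled frames directly versus convolving by the PSF while averaging. A minimal sketch of the direct (unblurred) inverse-variance weighted coadd, with all names and the per-frame scalar weighting being simplifying assumptions:

```python
import numpy as np

def unblurred_coadd(frames, invvars):
    """Inverse-variance weighted mean of resampled frames, without PSF
    matched filtering, so the coadd keeps the intrinsic resolution."""
    frames = np.asarray(frames)                  # shape (n, h, w)
    w = np.asarray(invvars)[:, None, None]       # per-frame weights
    return (w * frames).sum(axis=0) / w.sum(axis=0)
```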
Synthetic biology as red herring.
Preston, Beth
2013-12-01
It has become commonplace to say that with the advent of technologies like synthetic biology the line between artifacts and living organisms, policed by metaphysicians since antiquity, is beginning to blur. But that line began to blur 10,000 years ago when plants and animals were first domesticated; and has been thoroughly blurred at least since agriculture became the dominant human subsistence pattern many millennia ago. Synthetic biology is ultimately only a late and unexceptional offshoot of this prehistoric development. From this perspective, then, synthetic biology is a red herring, distracting us from more thorough philosophical consideration of the most truly revolutionary human practice: agriculture. In the first section of this paper I will make this case with regard to ontology, arguing that synthetic biology crosses no ontological lines that were not crossed already in the Neolithic. In the second section I will construct a parallel case with regard to cognition, arguing that synthetic biology as biological engineering represents no cognitive advance over what was required for domestication and the new agricultural subsistence pattern it grounds. In the final section I will make the case with regard to human existence, arguing that synthetic biology, even if wildly successful, is not in a position to cause significant existential change in what it is to be human over and above the massive existential change caused by the transition to agriculture. I conclude that a longer historical perspective casts new light on some important issues in philosophy of technology and environmental philosophy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Color image definition evaluation method based on deep learning method
NASA Astrophysics Data System (ADS)
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, a VGG16 net is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and labeled images are then used to train a BP neural network, which performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of every 400 samples, 300 high-dimensional features are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of original color images are similar to the perception of the human visual system.
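A minimal sketch of the feature-extraction-plus-classifier pipeline, assuming the newer torchvision weights API and treating the hidden-layer size, the cut point in the VGG16 classifier head, and the training details as illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG16 as a 4096-D feature extractor: convolutional backbone,
# then the classifier head up to its second fully connected layer.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()
feature_extractor = nn.Sequential(
    vgg.features, vgg.avgpool, nn.Flatten(),
    *list(vgg.classifier.children())[:4])        # output: 4096-D features

# Small MLP standing in for the BP neural network, three blur levels.
classifier = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(),
                           nn.Linear(256, 3))

with torch.no_grad():
    feats = feature_extractor(torch.randn(8, 3, 224, 224))  # dummy batch
logits = classifier(feats)                       # train with cross-entropy
```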
Point spread function based classification of regions for linear digital tomosynthesis
NASA Astrophysics Data System (ADS)
Israni, Kenny; Avinash, Gopal; Li, Baojun
2007-03-01
In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and a simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities, respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as the gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 ± 8.44% and 91 ± 4.13%, respectively (t = -0.64, p = 0.56, DF = 4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.
Ultrafast scene detection and recognition with limited visual information
Hagmann, Carl Erick; Potter, Mary C.
2016-01-01
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
Sims, Sarah; Hewitt, Gillian; Harris, Ruth
2015-01-01
Interprofessional teamwork has become an integral feature of healthcare delivery in a wide range of conditions and services in many countries. Many assumptions are made in healthcare literature and policy about how interprofessional teams function and about the outcomes of interprofessional teamwork. Realist synthesis is an approach to reviewing research evidence on complex interventions which seeks to explore these assumptions. It does this by unpacking the mechanisms of an intervention, exploring the contexts which trigger or deactivate them, and connecting these contexts and mechanisms to their subsequent outcomes. This is the second in a series of four papers reporting a realist synthesis of interprofessional teamworking. The paper discusses four of the 13 mechanisms identified in the synthesis: collaboration and coordination; pooling of resources; individual learning; and role blurring. These mechanisms together capture the day-to-day functioning of teams and the dependence of that on members' understanding each other's skills and knowledge and learning from them. This synthesis found empirical evidence to support all four mechanisms, which tentatively suggests that collaboration, pooling, learning, and role blurring are all underlying processes of interprofessional teamwork. However, the supporting evidence for individual learning was relatively weak; therefore, there may be assumptions made about learning within healthcare literature and policy that are not founded upon strong empirical evidence. There is a need for more robust research on individual learning to further understand its relationship with interprofessional teamworking in healthcare.
Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P
2017-10-13
This study presents design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate to the very small pixel size found in most of commercial image sensors; thus, significantly minimizing image blur caused by hand shaking.
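To make the controller side concrete: a minimal sketch of a continuous-time lead-lag compensator C(s) = K(s + z)/(s + p) discretized by the bilinear transform for fixed-step execution. Gains, corner frequencies, and sample rate are illustrative assumptions; the paper derives its controller from the OIS's nonlinear equations of motion and runs it on an FPGA:

```python
import numpy as np

def discretize_lead_lag(K, z, p, fs):
    """Bilinear-transform discretization of C(s) = K (s + z) / (s + p).
    Returns first-order difference-equation coefficients (b, a)."""
    two_over_T = 2.0 * fs
    a0 = two_over_T + p                          # normalization term
    b = np.array([K * (two_over_T + z), K * (z - two_over_T)]) / a0
    a = np.array([1.0, (p - two_over_T) / a0])
    return b, a

def step(b, a, x, state):
    """One update: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    x_prev, y_prev = state
    y = b[0] * x + b[1] * x_prev - a[1] * y_prev
    return y, (x, y)

b, a = discretize_lead_lag(K=2.0, z=50.0, p=500.0, fs=10_000.0)
y, state = step(b, a, x=1.0, state=(0.0, 0.0))   # position-error input
```

The appeal of this structure for an FPGA is evident from the update rule: each sample needs only three multiplies and two adds.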
Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan
2016-12-01
To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered-backprojection) and SAFIRE (sinogram affirmed iterative reconstruction), and determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter, and 30 mm height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Objet Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare) and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels) as a function of dose were constructed for each reconstruction algorithm and background texture. FBP and SAFIRE were compared for each background type to determine the improvement in detectability at a given dose, and the reduced dose at which SAFIRE had equivalent performance compared to FBP at 100% dose. Detectability increased with increasing radiation dose (P = 2.7 × 10⁻⁵⁹) and contrast level (P = 2.2 × 10⁻⁸⁶) and was higher in the uniform phantom compared to the textured phantoms (P = 6.9 × 10⁻⁵¹). Overall, SAFIRE had higher d' compared to FBP (P = 0.02). The estimated dose reduction potential of SAFIRE was found to be 8%, 10%, 27%, and 8% for the Texture-A, Texture-B, Texture-C, and uniform phantoms, respectively. In all background types, detectability was higher with SAFIRE compared to FBP. However, the relative improvement observed from SAFIRE was highly dependent on the complexity of the background texture. Iterative algorithms such as SAFIRE should be assessed in the most realistic context possible.
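The figure of merit used here, the detectability index of a channelized Hotelling observer, has a compact standard form that is worth sketching. Channel design (e.g., Gabor or Laguerre-Gauss channels) and the multislice extension are out of scope; inputs and shapes below are assumptions:

```python
import numpy as np

def cho_dprime(signal_rois, noise_rois, channels):
    """Channelized Hotelling observer detectability index.
    signal_rois/noise_rois: arrays of shape (n_images, h, w);
    channels: matrix of shape (h*w, n_channels).
    d'^2 = dv^T S^-1 dv, with dv the mean channel-output difference
    and S the average of the two class covariance matrices."""
    vs = signal_rois.reshape(len(signal_rois), -1) @ channels
    vn = noise_rois.reshape(len(noise_rois), -1) @ channels
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```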
A virtual source model for Monte Carlo simulation of helical tomotherapy.
Yuan, Jiankui; Rong, Yi; Chen, Quan
2015-01-08
The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need of calculating phase-space files (PSFs). Current studies on the tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified in comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing on the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as published full MC model in heterogeneous media.
MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
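The LOR-space processing described above amounts to applying the MRI-derived rigid-body estimate to both endpoints of each line of response before rebinning. A minimal sketch, with hypothetical array shapes and an illustrative transform (the real pipeline also handles the scanner-to-scanner alignment and sensitivity data):

```python
import numpy as np

def transform_lors(p1, p2, T):
    """Apply a 4x4 homogeneous rigid-body motion estimate T to both
    endpoints (p1, p2: arrays of shape (n, 3)) of each LOR, so prompts
    and randoms can be rebinned in the reference head position."""
    def apply(points):
        homog = np.hstack([points, np.ones((points.shape[0], 1))])
        return (homog @ T.T)[:, :3]
    return apply(p1), apply(p2)

# Example motion estimate: 5 degree rotation about z plus 2 mm translation.
theta = np.deg2rad(5.0)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 2.0],
              [np.sin(theta),  np.cos(theta), 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
```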
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
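The tail-fitting idea behind TF-SSS can be sketched in a few lines: the simulated scatter estimate is scaled so that it matches the measured sinogram in the tails, i.e., the bins outside the object where essentially all counts are scatter. A minimal per-plane least-squares version, with all names assumed:

```python
import numpy as np

def tail_fit_scale(measured, scatter_est, tail_mask):
    """Least-squares scale factor that fits the simulated scatter
    estimate to the measured sinogram over tail bins (outside the
    object), where true (unscattered) counts are negligible."""
    m = measured[tail_mask]
    s = scatter_est[tail_mask]
    return np.dot(s, m) / np.dot(s, s)

# scaled_scatter = tail_fit_scale(meas_sino, sss_sino, mask) * sss_sino
```

Since the μ-map only enters through the shape of `scatter_est` and the tail fit rescales it anyway, this structure also suggests why the method is fairly tolerant of MRAC inaccuracies.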
Song, Inyoung; Yi, Jeong Geun; Park, Jeong Hee; Ko, Sung Min
2016-01-01
Objective To evaluate the image quality and radiation dose of indirect computed tomographic venography (CTV) using 80 kVp with sinogram-affirmed iterative reconstruction (SAFIRE) and 120 kVp with filtered back projection (FBP). Materials and Methods This retrospective study was approved by our institution and informed consent was waived. Sixty-one consecutive patients (M:F = 27:34, mean age 60 ± 16, mean BMI 23.6 ± 3.6 kg/m²) underwent pelvic and lower extremity CTVs [group A (n = 31, 120 kVp, reconstructed with FBP) vs. group B (n = 30, 80 kVp, reconstructed with SAFIRE)]. The vascular enhancement, image noise, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) were compared. Subjective image analysis for image quality and noise was performed by two radiologists. Radiation dose was compared between the two groups. Results Compared with group A, higher mean vascular enhancement was observed in group B (group A vs. B, 118.8 ± 15.7 HU vs. 178.6 ± 39.6 HU, p < 0.001), as well as image noise (12.0 ± 3.8 HU vs. 17.9 ± 6.1 HU, p < 0.001) and CNR (5.1 ± 1.9 vs. 7.6 ± 3.0, p < 0.001). The SNRs were not significantly different in both groups (11.2 ± 4.8 vs. 10.8 ± 3.7, p = 0.617). There was no significant difference in subjective image quality between the two groups (all p > 0.05). The subjective image noise was higher in group B (p = 0.036 in reader 1, p = 0.005 in reader 2). The inter-observer reliability for assessing subjective image quality was good (ICC 0.746-0.784, p < 0.001). The mean CT dose index volume (CTDIvol) and mean dose length product (DLP) were significantly lower in group B than group A [CTDIvol, 6.4 ± 1.3 vs. 2.2 ± 2.2 mGy (p < 0.001); DLP, 499.1 ± 116.0 vs. 133.1 ± 45.7 mGy × cm (p < 0.001)]. Conclusions CTV using 80 kVp combined with SAFIRE provides lower radiation dose and improved CNR compared to CTV using 120 kVp with FBP. PMID:27662618
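For reference, the objective metrics compared above can be computed from two regions of interest as follows. This is a minimal sketch assuming the common CT angiography definitions, with noise taken as the background standard deviation, not the authors' exact measurement protocol:

```python
import numpy as np

def cnr_snr(vessel_roi, background_roi):
    """CNR and SNR for a vascular ROI against a background ROI
    (e.g., adjacent muscle); inputs are arrays of HU values."""
    noise = background_roi.std()
    cnr = (vessel_roi.mean() - background_roi.mean()) / noise
    snr = vessel_roi.mean() / noise
    return cnr, snr
```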
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laberge, S; Beauregard, J; Archambault, L
2016-06-15
Purpose: Textural biomarkers as a tool for quantifying intratumoral heterogeneity hold great promise for diagnosis and early assessment of treatment response in prostate cancer. However, spill-in counts from the bladder uptake are suspected to have an impact on the textural measurements of the prostate volume. This work proposes a correction method for the FCh-PET bladder uptake and investigates its impact on intraprostatic textural properties. Methods: Two patients with prostate cancer received pre-treatment dynamic FCh-PET scans reconstructed at four time points (interval: 2 min), for which prostate and bladder contours were obtained. Projection bins affected by bladder uptake were determined by forward-projection. For each time point and axial position, virtual sinograms were obtained and affected bins replaced by a weighted combination of original values and values interpolated using cubic spline from non-affected bins of the current and adjacent projection angles. The process was optimized using a genetic algorithm in terms of minimization of the root-mean-square error (RMSE) within the bladder between the corrected dynamic time point volume and a reference initial uptake volume. Finally, the impact of the bladder uptake correction on the prostate region was investigated using two standard SUV metrics (1) and three texture metrics (2): 1) SUVmax, SUVmean; 2) Contrast, Homogeneity, Coarseness. Results: Without bladder uptake correction, SUVmax and SUVmean were on average overestimated in the prostate by 0%, 0%, 33.2%, 51.2%, and 3.6%, 6.0%, 2.9%, 3.2%, for each time point respectively. Contrast varied by −9.1%, −6.7%, +40.4%, +107.7%, and Homogeneity and Coarseness by +4.5%, +1.8%, −8.8%, −14.8% and +1.0%, +0.5%, −9.5%, +0.9%. Conclusion: We proposed a method for FCh-PET bladder uptake correction and showed an impact on the quantification of the prostate signal. This method achieved a large reduction of intra-prostatic SUVmax while minimizing the impact on SUVmean. Further investigation is necessary to interpret changes in textural features. SL acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
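The bin-replacement step can be sketched for a single sinogram row. The fixed weight below is illustrative (the authors optimize the weighting with a genetic algorithm), and interpolation across adjacent projection angles is omitted for brevity:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_row(profile, affected, weight=0.5):
    """Replace bladder-affected bins of one sinogram row by a weighted
    mix of original values and a cubic-spline interpolation from the
    unaffected bins."""
    bins = np.arange(profile.size)
    spline = CubicSpline(bins[~affected], profile[~affected])
    corrected = profile.copy()
    corrected[affected] = (weight * profile[affected]
                           + (1.0 - weight) * spline(bins[affected]))
    return corrected
```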
Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Oda, Seitaro; Kidoh, Masafumi; Utsunomiya, Daisuke; Yamashita, Yasuyuki
2017-02-01
The purpose of this study was to evaluate the feasibility of a contrast medium (CM) and radiation dose reduction protocol for cerebral bone-subtraction CT angiography (BSCTA) using 80 kVp and sinogram-affirmed iterative reconstruction (SAFIRE). Seventy-five patients who had undergone BSCTA under the 120- (n = 37) or the 80-kVp protocol (n = 38) were included. CM was 370 mgI/kg for the 120-kVp and 296 mgI/kg for the 80-kVp protocol; the 120- and the 80-kVp images were reconstructed with filtered back-projection (FBP) and SAFIRE, respectively. We compared the effective dose (ED), CT attenuation, image noise, and contrast-to-noise ratio (CNR) of the two protocols. We also scored arterial contrast, sharpness, depiction of small arteries, visibility near the skull base/clip, and overall image quality on a four-point scale. ED was 62% lower at 80 kVp than at 120 kVp (0.59 ± 0.06 vs 1.56 ± 0.13 mSv, p < 0.01). CT attenuation of the internal carotid artery (ICA) and middle cerebral artery (MCA) was significantly higher at 80 kVp than at 120 kVp (ICA: 557.4 ± 105.7 vs 370.0 ± 59.3 Hounsfield units (HU), p < 0.01; MCA: 551.9 ± 107.9 vs 364.6 ± 62.2 HU, p < 0.01). The CNR was also significantly higher at 80 kVp than at 120 kVp (ICA: 46.2 ± 10.2 vs 36.9 ± 7.6, p < 0.01; MCA: 45.7 ± 10.0 vs 35.7 ± 9.0, p < 0.01). Visibility near the skull base and clip was not significantly different (p = 0.45). The other subjective scores were higher with the 80-kVp than the 120-kVp protocol (p < 0.05). The 80-kVp acquisition with SAFIRE yields better image quality for BSCTA and a substantial reduction in radiation and CM dose compared to the 120-kVp protocol with FBP.
ERIC Educational Resources Information Center
Greenslade, Thomas B., Jr.
1984-01-01
Describes several methods of executing lecture demonstrations involving the recombination of the spectrum. Groups the techniques into two general classes: bringing selected portions of the spectrum together using lenses or mirrors and blurring the colors by rapid movement or foreshortening. (JM)
Three Channel Polarimetric Based Data Deconvolution
2011-03-01
This thesis explains in entirety the process used for deblurring and de-noising images which have been degraded by atmospheric turbulence and noise.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18 F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans-each containing 1/8th of the total number of events-were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18 F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of [Formula: see text], the tracer transport rate (ml · min -1 · ml -1 ), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced [Formula: see text] maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced [Formula: see text] estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and lower standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP reconstructions. Direct parametric reconstruction applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
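As a sketch of the objective the direct method maximizes, the abstract's description (Poisson log-likelihood plus kinetic model plus quadratic penalty) corresponds to the standard penalized-likelihood form below; the system matrix P, background term r_m, and penalty weight β are generic notation rather than the paper's symbols:

```latex
% Penalized Poisson log-likelihood maximized by the direct method (generic notation):
% \theta = voxel-wise kinetic parameters, x_m(\theta) = frame-m activity image
% predicted by the compartment model, P = system matrix, r_m = expected
% randoms/scatter, R = quadratic penalty with weight \beta.
\begin{align*}
  \Phi(\theta) &= \sum_{m=1}^{16} \sum_{i}
      \Bigl( y_{im}\,\log \bar{y}_{im}(\theta) - \bar{y}_{im}(\theta) \Bigr)
      - \beta\, R(\theta), \\
  \bar{y}_{m}(\theta) &= P\, x_m(\theta) + r_m
\end{align*}
```

Maximization with respect to each parameter set then proceeds via the preconditioned conjugate gradient scheme described above.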
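To make the indirect method's fitting step concrete, here is a minimal, self-contained sketch of voxel-wise weighted least-squares fitting of the spillover-corrected one-tissue model; the frame times, input functions, and noise level are toy stand-ins, not the paper's data:

```python
# Minimal sketch of the indirect method's voxel-wise fitting step, assuming the
# one-tissue compartment model with LV/RV spillover sketched above.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 16)        # 16 frame mid-times over ~5 min
dt = t[1] - t[0]
Cp = t * np.exp(-2.0 * t)            # toy arterial input function
C_lv, C_rv = Cp, 0.8 * Cp            # toy LV/RV blood-pool TACs

def model(t, K1, k2, f_lv, f_rv):
    """Spillover-corrected tissue TAC: (K1*exp(-k2*t)) convolved with Cp."""
    Ct = np.convolve(K1 * np.exp(-k2 * t), Cp)[: len(t)] * dt
    return (1.0 - f_lv - f_rv) * Ct + f_lv * C_lv + f_rv * C_rv

# One noisy voxel TAC (in practice: repeated for every myocardial voxel
# of the OSEM or OSL-MAP frame images).
rng = np.random.default_rng(0)
tac = model(t, 0.8, 0.4, 0.2, 0.1) + rng.normal(0.0, 0.002, t.size)

# Weighted least-squares: sigma holds a frame-dependent noise estimate.
sigma = np.full(t.size, 0.002)
popt, _ = curve_fit(model, t, tac, p0=[0.5, 0.3, 0.1, 0.1],
                    sigma=sigma, bounds=(0, [5, 5, 1, 1]))
print("K1 estimate (ml/min/ml):", popt[0])
```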
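Finally, the eight 1/8th-count noise realizations described above were derived from the list-mode data; a sinogram-domain analogue that preserves Poisson statistics is binomial thinning, sketched here with a toy sinogram (an illustration of the general technique, not the authors' code):

```python
# Split a measured Poisson sinogram into 8 disjoint, statistically independent
# low-count realizations by sequential binomial thinning.
import numpy as np

rng = np.random.default_rng(42)
y = rng.poisson(10.0, size=(96, 96))   # toy prompt sinogram (integer counts)

n_real = 8
remaining = y.copy()
realizations = []
for k in range(n_real):
    # Each count is assigned to exactly one of the remaining subsets with equal
    # probability, so every realization holds ~1/8 of the total events and the
    # realizations are independent Poisson data.
    take = rng.binomial(remaining, 1.0 / (n_real - k))
    realizations.append(take)
    remaining -= take

assert sum(r.sum() for r in realizations) == y.sum()
```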