Sample records for scatter correction methods

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications, since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% with the algorithmic scatter correction. With the scatter-correction algorithm and denoising, the minimum visible calcification size was reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated only with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
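
    A minimal sketch of the idea above, assuming scatter varies slowly across the image: heavily low-pass filter the projection and scale by an estimated scatter fraction before subtracting. The function name, box-filter choice, and default parameters are illustrative, not the authors' algorithm.

```python
import numpy as np

def correct_scatter(projection, scatter_fraction, kernel_size=15):
    """Subtract a smooth scatter estimate from a projection image.

    Assumes (as in the abstract above) that scatter has small spatial
    variation, so a heavy separable box blur of the projection, scaled by
    an estimated scatter fraction SF = S / (S + P), stands in for the
    scatter field.
    """
    pad = kernel_size // 2
    padded = np.pad(projection, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size
    # Separable box blur: filter rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return projection - scatter_fraction * blurred
```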

  2. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected SPECT images into scatter-corrected ones. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects for scatter on the image I_μb^AC reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_μb^AC with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and with 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
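
    The convolution-subtraction step described above can be sketched as follows, with an assumed Gaussian scatter function (the paper's actual scatter function and scatter fraction map are not specified here):

```python
import numpy as np

def ibsc_correct(image_ac, scatter_fraction_map, fwhm_pix=8.0):
    """Image-based scatter correction sketch: estimate the scatter
    component by convolving the attenuation-corrected image with a broad
    scatter function, scale by a pixel-wise scatter fraction, subtract.
    """
    sigma = fwhm_pix / 2.355
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    g1 = np.exp(-x**2 / (2 * sigma**2))
    kernel = np.outer(g1, g1)
    kernel /= kernel.sum()          # normalized 2-D Gaussian scatter kernel
    h, w = image_ac.shape
    k = kernel.shape[0]
    padded = np.pad(image_ac, half, mode="edge")
    scatter = np.zeros((h, w))
    for i in range(h):              # brute-force 2-D convolution
        for j in range(w):
            scatter[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return image_ac - scatter_fraction_map * scatter
```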

  3. A study on scattering correction for γ-photon 3D imaging test method

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao

    2018-03-01

    A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The path and energy information of these photons can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of this test method. This study proposes a single-scattering correction method for γ-photons from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles is then calculated according to the energy window. Finally, the number of scattered γ-photons is obtained from the attenuation of the total scattered γ-photons along the moving path. The corrected γ-photon counts are obtained by deducting the scattered γ-photons from the original counts. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct for scattered γ-photons and improve test accuracy.
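
    The scattering-angle range implied by the energy window follows from the Compton formula; a small helper (an illustrative assumption, for 511 keV incident photons and a given window lower bound) is:

```python
import math

def max_scatter_angle_deg(e_low_kev, e0_kev=511.0):
    """Largest single-scatter angle still accepted by an energy window.

    Compton formula (energies in keV, electron rest energy 511 keV):
        E' = E0 / (1 + (E0 / 511) * (1 - cos theta))
    A photon recorded above the window's lower bound e_low_kev can have
    scattered by at most the returned angle.
    """
    cos_theta = 1.0 - (511.0 / e0_kev) * (e0_kev / e_low_kev - 1.0)
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.degrees(math.acos(cos_theta))
```

For an unscattered 511 keV photon the angle is 0°; a lower window bound of about 341 keV admits scattering angles up to roughly 60°.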

  4. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
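
    The iterative framework can be sketched generically: given any operator that predicts scatter from a primary estimate, the corrected projection is the fixed point of primary = measured - scatter(primary). The scatter operator below is a placeholder, not the authors' analytic physics model.

```python
import numpy as np

def iterative_scatter_correction(measured, scatter_op, n_iter=5):
    """Fixed-point iteration: repeatedly subtract the scatter predicted
    from the current primary estimate from the measured projection."""
    primary = measured.copy()
    for _ in range(n_iter):
        primary = measured - scatter_op(primary)
    return primary
```

When the scatter operator is contractive (scatter much smaller than primary), the loop converges within a few iterations, consistent with the fast convergence reported above.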

  5. Robust scatter correction method for cone-beam CT using an interlacing-slit plate

    NASA Astrophysics Data System (ADS)

    Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long

    2016-06-01

    Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while simultaneously avoiding excessively large values in the calculated inner scatter and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added in the process flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality in scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
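
    The Gaussian filling of missing inner-scatter samples can be sketched as normalized (mask-weighted) Gaussian smoothing; this is a generic stand-in under assumed parameters, not the authors' exact filter.

```python
import numpy as np

def _conv2(a, k):
    """Brute-force 2-D convolution with zero padding."""
    half = k.shape[0] // 2
    p = np.pad(a, half)
    out = np.zeros_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def fill_scatter_field(sparse_scatter, mask, sigma=3.0):
    """Fill missing entries of a sparsely sampled scatter field.

    Valid samples (mask True) are smoothed with a Gaussian; dividing by
    the smoothed mask normalizes for the missing data, which also keeps
    the filled field smooth and bounded.
    """
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    kernel = np.outer(g, g)
    vals = np.where(mask, sparse_scatter, 0.0)
    weights = mask.astype(float)
    num = _conv2(vals, kernel)
    den = _conv2(weights, kernel)
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```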

  6. Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kwon-Ha; Kim, Hokyung; Cho, Seungryong

    2017-03-01

    Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications, but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction using a beam-blocker has been considered a direct, measurement-based approach that provides accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of reconstructed CBCT images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
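
    A beam-blocker correction of the kind investigated above can be sketched as follows; the column layout, the pre-measured blocker-scatter term, and the linear interpolation are illustrative assumptions.

```python
import numpy as np

def blocker_scatter_correct(projection, shadow_cols, blocker_scatter=0.0):
    """Estimate scatter from beam-blocker shadow columns and subtract it.

    The shadow signal contains object scatter plus the blocker's own
    scatter; removing a pre-measured blocker contribution (e.g. from an
    object-free scan) before interpolating follows the idea reported
    above. Row-wise linear interpolation then extends the scatter
    estimate across the open regions.
    """
    rows, cols = projection.shape
    shadow_vals = projection[:, shadow_cols] - blocker_scatter
    all_cols = np.arange(cols)
    scatter = np.vstack([
        np.interp(all_cols, shadow_cols, shadow_vals[r]) for r in range(rows)
    ])
    return projection - scatter
```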

  7. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

    Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter), but also scatter from external sources. The external scatter sources include the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding this background scattered radiation can be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE), with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter, and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
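
    The two quantities used above reduce to simple relations; a minimal sketch (function names are illustrative):

```python
import numpy as np

def scatter_fraction(total, scatter):
    """Scatter fraction SF = S / (S + P), i.e. scatter over total signal."""
    return scatter / np.maximum(total, 1e-12)

def object_scatter(scatter_with_object, background_scatter):
    """Separate object scatter from the external background scatter that
    the BSA also records (tube, detector, collimator, filter, BSA)."""
    return scatter_with_object - background_scatter
```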

  8. Scatter and crosstalk corrections for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and line source experiment demonstrated that the TEW method overestimated scatter while their proposed method provided more accurate scatter estimation by considering the low energy tail effect.
In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of {sup 99m}Tc in dual-radionuclide imaging where there is heavy contamination from {sup 123}I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with their proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with their proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than that of the TEW method. Similar to the phantom experiment, the improvement was the most substantial for the images of {sup 99m}Tc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with their proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies. Conclusions: The authors demonstrate that the TEW method leads to overestimation in scatter and crosstalk for the CZT-based imaging system while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
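
    For reference, the conventional TEW estimate the authors compare against is a trapezoidal interpolation of scatter under the photopeak from two narrow flanking windows:

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window scatter estimate:
        S = (C_lower / w_lower + C_upper / w_upper) * w_main / 2
    where C are counts and w are window widths (same energy units).
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0
```

On a CZT detector, the lower flanking window also collects the low energy tail of unscattered photons, which is why this estimate is biased high, as reported above.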

  9. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The method builds on scatter kernel superposition (SKS), which has been used in previous studies, but differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first estimates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on SKS. Image quality can be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and increases image quality: our approach increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.

  10. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, a scan method using a single grating is worked out and the design requirements of the grating are analyzed. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
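
    The angle-interpolation step can be sketched as plain linear interpolation between scatter images measured at sparse angles (the function itself is an illustrative assumption; the experiment above used 30 deg intervals):

```python
import numpy as np

def interpolate_scatter(angle, measured_angles, scatter_images):
    """Linearly interpolate a scatter image at an arbitrary projection
    angle from scatter images measured at sparse, sorted angles."""
    measured_angles = np.asarray(measured_angles, dtype=float)
    idx = np.searchsorted(measured_angles, angle)
    if idx == 0:
        return scatter_images[0]
    if idx >= len(measured_angles):
        return scatter_images[-1]
    a0, a1 = measured_angles[idx - 1], measured_angles[idx]
    t = (angle - a0) / (a1 - a0)
    return (1 - t) * scatter_images[idx - 1] + t * scatter_images[idx]
```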

  11. Improvement of scattering correction for in situ coastal and inland water absorption measurement using exponential fitting approach

    NASA Astrophysics Data System (ADS)

    Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan

    2017-10-01

    The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values using the nine-wavelength absorption and attenuation meter AC9. The standard correction, which assumes zero absorption in the near-infrared (NIR) region, often fails in Case 2 water and underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the resulting scattering correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.
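
    The exponential-fitting idea can be sketched as a log-space least-squares fit; the model form v(λ) = A·exp(B·λ) and the function names are assumptions, since the abstract does not give the exact parameterization.

```python
import numpy as np

def fit_exponential(wavelengths, values):
    """Least-squares fit of v(lam) = A * exp(B * lam), done in log space."""
    lam = np.asarray(wavelengths, dtype=float)
    v = np.asarray(values, dtype=float)
    B, ln_a = np.polyfit(lam, np.log(v), 1)
    return np.exp(ln_a), B

def scatter_corrected_absorption(a_measured, wavelength, fit_wavelengths, fit_values):
    """Subtract the fitted scattering contribution from a measured value."""
    A, B = fit_exponential(fit_wavelengths, fit_values)
    return a_measured - A * np.exp(B * np.asarray(wavelength, dtype=float))
```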

  12. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

    Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to acquire image and patient-specific scatter information simultaneously in a single scan, and, in conjunction with a proposed compressed sensing scatter recovery technique, to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV), followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data in the unblocked central region and of scatter data in the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and the scatter convolution model is estimated using the acquired scatter distribution in the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme.
Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan© 504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Unlike blocker-based techniques, which obstruct the central portion of the FOV and thereby degrade and limit the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
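
    The compressed-sensing recovery can be illustrated in one dimension: scatter sampled only at the blocked edges of a detector row is extended across the open FOV by fitting a few low-order DCT basis functions. For simplicity this sketch uses least squares rather than the paper's L1-penalized optimization.

```python
import numpy as np

def recover_smooth_scatter(samples, sample_idx, n, n_coeffs=4):
    """Recover a smooth length-n scatter profile from samples taken only
    at the indices in sample_idx (e.g. the blocked edge columns), by
    fitting the lowest-frequency DCT-II basis functions."""
    x = (np.arange(n) + 0.5) / n
    basis = np.stack([np.cos(np.pi * k * x) for k in range(n_coeffs)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis[sample_idx], samples, rcond=None)
    return basis @ coeffs
```

Because scatter is low-frequency, a handful of coefficients fitted to edge samples can reproduce the profile across the whole row.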

  13. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.

  14. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth, by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.

  15. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using the BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94 HU, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
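
    Once the scatter-to-primary ratio has been modeled from the BPA measurements, the correction itself reduces to a one-line relation:

```python
import numpy as np

def correct_with_spr(measured, spr):
    """Correct a projection with a modeled scatter-to-primary ratio:
    measured = primary * (1 + SPR), so primary = measured / (1 + SPR)."""
    return np.asarray(measured, dtype=float) / (1.0 + np.asarray(spr, dtype=float))
```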

  16. WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Liu, T; Dong, X

    Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy, such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited mainly to patient setup based on bony structures. This study's purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by factors of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by factors of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT image were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the CT image, and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality.
This method has great potential to correct CBCT images allowing its use in adaptive radiotherapy.« less

  17. Correction method for influence of tissue scattering for sidestream dark-field oximetry using multicolor LEDs

    NASA Astrophysics Data System (ADS)

    Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki

    2016-12-01

    We previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for tissue scattering and present experiments with turbid phantoms, as well as Monte Carlo simulations, to investigate the influence of tissue scattering on SDF imaging. In the estimation method, we use modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the illumination-source bandwidths, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the direction of the blood vessel in an SDF image and correct the AECs using this scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine-blood-filled glass tubes. We found that increased scattering by the phantom body decreased the AECs. The experimental results showed that using suitable AEC values led to more accurate SO_2 estimation, confirming the validity of the proposed correction method for improving the accuracy of SO_2 estimation.
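
    The two-band estimation step can be illustrated as a Beer-Lambert inversion. Below is a minimal numpy sketch; the 2x2 coefficient matrix `AEC` is a placeholder, not the paper's corrected values, which additionally fold in the LED bandwidths, camera response, and estimated tissue scattering coefficient.

```python
import numpy as np

# Hypothetical extinction-coefficient matrix: rows are the two LED bands,
# columns are the hemoglobin species [HbO2, Hb]. These numbers are
# illustrative placeholders, not the paper's AEC values.
AEC = np.array([[15150.0,  3750.0],
                [ 3200.0, 14677.0]])

def estimate_so2(absorbance, aec=AEC):
    """Solve the Beer-Lambert system A = aec @ c for the two hemoglobin
    concentrations and return SO_2 = HbO2 / (HbO2 + Hb)."""
    c = np.linalg.solve(aec, np.asarray(absorbance, dtype=float))
    return c[0] / c.sum()
```

    Correcting for tissue scattering then amounts to adjusting the entries of `AEC` before the inversion, which is the step the paper derives from the estimated scattering coefficient.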

  18. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used; however, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study applying the same technique to chest tomosynthesis, performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object, and a second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
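
    The hole-sampling and interpolation steps can be sketched in one detector row; `psa_scatter_correct` and its arguments are hypothetical names, and a real implementation interpolates 2D scatter maps for every projection angle.

```python
import numpy as np

def psa_scatter_correct(full_row, psa_row, hole_cols):
    """Sketch of the PSA idea for one detector row.

    full_row : open-field projection values (primary + scatter)
    psa_row  : projection through the hole plate; meaningful only at
               hole_cols, where it approximates the scatter-free primary
    """
    # Scatter is sampled where the primary is known, then interpolated
    # to a full-field scatter map and subtracted.
    scatter_samples = full_row[hole_cols] - psa_row[hole_cols]
    cols = np.arange(full_row.size)
    scatter_map = np.interp(cols, hole_cols, scatter_samples)
    return full_row - scatter_map
```

    Because scatter varies slowly across the detector, sparse hole samples suffice to reconstruct the scatter map, which is what keeps the added dose of the second scan small.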

  19. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
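
    The superposition idea reduces, in one dimension, to de-scattering a measured projection under the model M = P + P*k. Below is a sketch with a hand-made kernel; the paper's kernels are Monte Carlo generated and adapted to the EPID response and phantom dimensions.

```python
import numpy as np

def kernel_scatter_correct(measured, kernel, n_iter=20):
    """Estimate the primary P from M = P + conv(P, k) by the fixed-point
    iteration P <- M - conv(P, k), which converges when the kernel's
    total scatter fraction is well below 1."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")
        primary = measured - scatter
    return primary
```

    Each iteration contracts the error by roughly the kernel's integral (here 0.2), so a handful of iterations recovers the primary to machine precision.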

  20. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    PubMed Central

    Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: A Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. 
The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902
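
    The core of primary modulation can be seen in a toy 1D setup in which alternating detector pixels sit behind modulator material of primary transmission `alpha`: since scatter is smooth, each pixel pair can be solved for primary and scatter. The actual method separates the signals in the Fourier domain of 2D modulated projections; the pairwise layout below is only an illustration.

```python
import numpy as np

def demodulate(measured, alpha):
    """Toy 1D primary modulation: even pixels are open (reading p + s),
    odd pixels see the primary attenuated by alpha (reading alpha*p + s).
    Scatter s is assumed equal on both pixels of a pair because it varies
    slowly, so each pair yields two equations in two unknowns."""
    m_open = measured[0::2].astype(float)
    m_att = measured[1::2].astype(float)
    primary = (m_open - m_att) / (1.0 - alpha)
    scatter = m_open - primary
    return primary, scatter
```

    This also shows why a strong modulation (small `alpha`) helps: the primary estimate divides by (1 - alpha), so a weak modulator amplifies noise in the difference signal.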

  1. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    PubMed

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  2. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fraction of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of multiply scattered neutrons. With the help of a correction to Feynman's point model theory that accounts for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
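
    A crude stand-in for the time-of-flight tagging is to count, for each hit in one scintillator, whether a neighboring scintillator fired within a short coincidence window; all names and the window value below are illustrative, not the paper's analysis.

```python
import numpy as np

def coincidence_fraction(t_a, t_b, window=10.0):
    """Fraction of hits in scintillator A with a hit in scintillator B
    within +/- window (same time units), as a rough proxy for neutrons
    that scatter from one detector into a neighbor."""
    t_b = np.sort(np.asarray(t_b, dtype=float))
    t_a = np.asarray(t_a, dtype=float)
    idx = np.searchsorted(t_b, t_a)
    # time distance to the nearest B hit on either side of each A hit
    left = np.where(idx > 0, t_a - t_b[np.maximum(idx - 1, 0)], np.inf)
    right = np.where(idx < t_b.size,
                     t_b[np.minimum(idx, t_b.size - 1)] - t_a, np.inf)
    return float(np.mean(np.minimum(left, right) <= window))
```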

  3. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigation of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  4. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter-deconvolution methods in CBCT. Since data consistency in the CBCT mid-plane is challenged primarily by scatter, we utilized the DCC to gauge the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters using a particle swarm optimization algorithm, chosen for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality metrics such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel restores the contrast to up to 99.5%, 94.4%, and 84.4% of that of the scatter-free images, and the SSIM to up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
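
    The simplest consistency criterion usable in this way is the order-0 Helgason-Ludwig condition: after rebinning to parallel beam, every view of the sinogram must integrate to the same total mass. A sketch of this condition as a scalar cost (how the paper weights and combines DCC orders is not reproduced here):

```python
import numpy as np

def moment0_inconsistency(sinogram):
    """Order-0 Helgason-Ludwig consistency: in ideal parallel-beam data,
    the sum over detector bins of each view (row) is the same at every
    angle. The relative spread of the row sums can serve as a cost when
    tuning a scatter kernel."""
    sums = sinogram.sum(axis=1)
    return float(sums.std() / sums.mean())
```

    Residual scatter biases different views by different amounts, inflating this spread, which is what makes it a usable objective for kernel optimization.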

  5. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, causes significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be applied directly. With the aid of spatially varying point spread functions and a local frontal-plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  6. Guide-star-based computational adaptive optics for broadband interferometric tomography

    PubMed Central

    Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.

    2012-01-01

    We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179

  7. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be applied directly to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan, without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%.
Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.

  8. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of the number of photon histories and the volume down-sampling factor on the accuracy of scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01) and the Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
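
    Step 4 of the pipeline, interpolating the sparse-angle MC scatter estimates to all projection angles, can be sketched per detector pixel; function and argument names are hypothetical.

```python
import numpy as np

def interp_scatter_angles(scatter_sparse, sparse_angles, all_angles):
    """Per-pixel linear interpolation of scatter maps computed by MC at a
    few view angles to all view angles. scatter_sparse has shape
    (n_sparse_views, n_pixels); rows are ordered by sparse_angles."""
    scatter_sparse = np.asarray(scatter_sparse, dtype=float)
    out = np.empty((len(all_angles), scatter_sparse.shape[1]))
    for j in range(scatter_sparse.shape[1]):
        out[:, j] = np.interp(all_angles, sparse_angles,
                              scatter_sparse[:, j])
    return out
```

    This works because scatter varies smoothly with gantry angle, which is the same observation behind the paper's Fourier argument that 31 angles suffice.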

  9. Scatter correction for x-ray conebeam CT using one-dimensional primary modulation

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca

    2009-02-01

    Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing of the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects, and the ramp filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU when the proposed method is used.

  10. Library based x-ray scatter correction for dedicated cone beam breast CT

    PubMed Central

    Shi, Linxi; Karellas, Andrew; Zhu, Lei

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the Geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method's performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as evaluation metrics. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method corrects for scatter efficiently with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views.
On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
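
    The library lookup itself is a nearest-size selection followed by a subtraction. A minimal sketch with hypothetical names (the paper additionally translates the selected map spatially to register it with the scan):

```python
import numpy as np

def library_scatter_correct(projection, breast_diameter_cm, scatter_library):
    """Pick the precomputed scatter map whose modeled breast diameter is
    nearest the diameter estimated from a first-pass reconstruction, then
    subtract it from the measured projection."""
    key = min(scatter_library, key=lambda d: abs(d - breast_diameter_cm))
    return np.asarray(projection, dtype=float) - scatter_library[key]
```

    Keying the whole library on a single parameter (diameter) is what makes the approach cheap at correction time: all MC cost is paid once, offline.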

  11. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods for 201Tl cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.

    1997-12-01

    Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity in the myocardium with TDCS than with TEW; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
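
    For reference, the TEW estimate has a simple closed form: the scattered counts inside the main photopeak window are approximated by the trapezoid spanned by the count rates in two narrow flanking sub-windows. A direct transcription:

```python
def tew_scatter_estimate(c_lower, c_upper, w_main, w_sub):
    """Triple-energy-window scatter estimate for the main window:
        S = (C_lower / w_sub + C_upper / w_sub) * w_main / 2
    where C_lower and C_upper are the counts in the two narrow flanking
    windows, w_sub is their width, and w_main is the main window width
    (widths in keV)."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0
```

    The primary estimate is then the main-window counts minus this scatter term; because the flanking windows are narrow, the estimate is noisy, which is consistent with the noise disadvantage of TEW reported above.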

  12. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI

    PubMed Central

    Rank, Christopher M.; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A.; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc

    2017-01-01

    Objectives Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Methods Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. Results The phantom experiments did not reveal any modality dependency (PET/MRI vs. PET/CT) or tracer dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between the bladder/kidneys and the surrounding soft tissue, with a positive correlation between OBR and halo size.
Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient’s diagnosis. Conclusion Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts. PMID:28817656

  13. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft-tissue imaging, such as in the detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to "oracle" constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution.
The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
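    The kernel-smoothing step described above, which fills in the angularly downsampled scatter estimates, can be sketched as follows. This is a minimal 1D sketch in Python, assuming one scalar scatter value per gantry angle; the actual method smooths full 2D scatter projections, and the function name and sigma value are hypothetical:

```python
import math

def ks_fill_angles(sparse_scatter, n_total, sigma=2.0):
    """Estimate scatter at every gantry angle from a downsampled set of
    Monte Carlo estimates via Gaussian kernel smoothing.

    sparse_scatter: dict mapping simulated angle index -> scatter value
    n_total: total number of projection angles in the scan
    sigma: kernel width in units of projection angles (assumed value)
    """
    sim_angles = sorted(sparse_scatter)
    filled = []
    for a in range(n_total):
        num = den = 0.0
        for s in sim_angles:
            # circular angular distance, since the gantry wraps around
            d = min(abs(s - a), n_total - abs(s - a))
            w = math.exp(-0.5 * (d / sigma) ** 2)
            num += w * sparse_scatter[s]
            den += w
        filled.append(num / den)
    return filled
```

    Because the true scatter varies slowly with gantry angle, the Gaussian-weighted average both interpolates the skipped angles and suppresses residual MC noise in the simulated ones.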

  14. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the underlying statistical assumptions remain valid.
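    A median-filter-based interference detector of this kind can be sketched in a few lines of Python; the window size and threshold below are hypothetical tuning parameters, not values from the paper:

```python
from statistics import median

def despike(spectrum, window=5, k=4.0):
    """Detect and correct interference spikes in a power spectrum.

    A running median gives a robust baseline estimate; bins deviating
    from it by more than k times a median-based local scale are flagged
    as interference and replaced by the local median. (Illustrative
    sketch; window and k are assumed tuning parameters.)
    """
    n = len(spectrum)
    half = window // 2
    out = list(spectrum)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        neighborhood = spectrum[lo:hi]
        base = median(neighborhood)
        scale = median(abs(x - base) for x in neighborhood) or 1e-12
        if abs(spectrum[i] - base) > k * scale:
            out[i] = base  # replace the flagged bin with the local median
    return out
```

    The median is used instead of the mean precisely because strong interference spikes would otherwise bias the baseline they are being compared against.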

  15. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    NASA Astrophysics Data System (ADS)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat-detector computed tomography (FDCT) due to the increased irradiated volume. We propose a fast projection-based algorithm for the correction of scatter artifacts. The algorithm combines a convolution method for determining the spatial shape of the scatter intensity distribution with an object-size-dependent scaling of that distribution using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even though the PBSE variant of this fast and simple algorithm relies on neither reconstructed volumes nor scatter measurements, it provides a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, figures of merit Q of 0.82, 0.76 and 0.77 were reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom of 15 cm diameter, for example, cupping was reduced from 10.8% down to 2.1%. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. With this optimization, cupping of the measured water phantom was further reduced to 0.9%. The algorithm was evaluated on a commercial system, including truncated and non-homogeneous clinically relevant objects.
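    The convolution core of such a projection-based correction can be sketched as follows, assuming a Gaussian scatter kernel and a single size-dependent scaling constant; both are illustrative assumptions, whereas the paper derives the scaling from Monte Carlo generated a priori information:

```python
import numpy as np

def estimate_scatter(projection, kernel_width=5, scatter_scale=0.3):
    """Convolution-based scatter estimate for one flat-detector projection.

    The measured intensity is blurred with a broad separable Gaussian to
    obtain the low-frequency scatter shape, then scaled by an
    object-size-dependent factor (here a single hypothetical constant).
    """
    x = np.arange(-3 * kernel_width, 3 * kernel_width + 1, dtype=float)
    g = np.exp(-0.5 * (x / kernel_width) ** 2)
    g /= g.sum()  # normalize the kernel
    # separable 2D convolution: rows first, then columns
    blurred = np.apply_along_axis(np.convolve, 1, projection, g, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, g, mode="same")
    return scatter_scale * blurred

def correct_fdct_projection(projection, **kwargs):
    """Subtract the scatter estimate, clipping at zero intensity."""
    return np.clip(projection - estimate_scatter(projection, **kwargs), 0.0, None)
```

    Because the estimate needs only the projection itself plus precomputed scaling information, no reconstructed volume or scatter measurement is required, which is what makes the PBSE variant fast.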

  16. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to mitigate the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. Fitting the MC scatter distribution estimates reduces the required number of photon tracks, and hence the MC computation time, by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. Using the algorithm, scatter estimates for a CBCT projection set are computed in 1–2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
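    The frequency-limited fitting idea can be sketched in one dimension: keep only the lowest Fourier coefficients of a noisy MC scatter profile, which both enforces the low-frequency representation and averages away MC noise. This is an illustrative sketch; the thesis fits full projection sets and chooses the frequency extents per imaging configuration:

```python
import numpy as np

def fourier_fit(noisy_scatter, n_terms=4):
    """Fit a noisy Monte Carlo scatter profile with a frequency-limited
    sum of sines and cosines by keeping only the lowest n_terms Fourier
    coefficients. Because scatter lives at low spatial frequencies, the
    truncation averages away MC noise, so far fewer photon histories
    are needed. (n_terms is an assumed cutoff.)
    """
    coeffs = np.fft.rfft(noisy_scatter)
    coeffs[n_terms:] = 0.0  # discard high-frequency terms, which are mostly noise
    return np.fft.irfft(coeffs, n=len(noisy_scatter))
```

    The same truncation is what permits the three-orders-of-magnitude reduction in photon tracks: the fit only has to pin down a handful of smooth coefficients, not every detector pixel.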

  17. SU-C-201-02: Quantitative Small-Animal SPECT Without Scatter Correction Using High-Purity Germanium Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, A; Peterson, T; Johnson, L

    2015-06-15

    Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out with NaI detectors (10% FWHM energy resolution) and with Ge detectors (1% energy resolution). Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. 
This may allow for quantitative SPECT without implementing scatter correction, reducing uncertainties introduced by scatter correction algorithms. Funding provided by NIH/NIBIB grant R01EB013677; Todd Peterson, Ph.D., has had a research contract with PHDs Co., Knoxville, TN.

  18. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation can be addressed accurately by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. A number of scatter correction methods have been proposed in the past; in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. Compared with the DEW method, FA results in significant improvements in image accuracy for both planar and tomographic data sets. 
FA can be used as a user-independent approach for scatter correction in nuclear medicine.
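    For reference, the dual-energy-window (DEW) baseline that FA is compared against can be sketched as a pixelwise subtraction with the classic scaling factor k = 0.5; the array names are illustrative:

```python
import numpy as np

def dew_scatter_correct(photopeak, scatter_window, k=0.5):
    """Dual-energy-window (DEW) scatter correction, the standard
    baseline: counts acquired in a lower scatter window, scaled by k
    (0.5 in the classic formulation), estimate the scatter inside the
    photopeak window and are subtracted from it pixel by pixel.
    """
    corrected = photopeak - k * np.asarray(scatter_window, dtype=float)
    return np.clip(corrected, 0.0, None)  # counts cannot be negative
```

    Unlike FA, the DEW method hinges on a fixed, user-chosen k, which is one motivation for a user-independent approach.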

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. 
The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm minor axis, 38 cm major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method was demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation with a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.

  20. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

    The correction of atmospheric effects is essential because the shorter-wavelength visible bands are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to determine the haze values present in all spectral bands and to correct them for urban analysis. Here, the improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that at least a few pixels within an image should be black (0% reflectance); such dark objects are clear water bodies and shadows, whose DN values are zero or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas the improved method corrects the haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using the simple Dark Object Subtraction method for the green band (band 2), red band (band 3) and NIR band (band 4) are 40, 34 and 18, whereas the values extracted using the improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the improved Dark Object Subtraction method are more realistic than those from the simple Dark Object Subtraction method.
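    The band-to-band haze scaling at the heart of the improved method can be sketched as follows; the exponent n and the band-centre wavelengths used in the test are illustrative assumptions, not values from the paper:

```python
def improved_dos_haze(start_haze, start_wavelength, target_wavelengths, n=4.0):
    """Sketch of Chavez's (1988) improved dark-object subtraction:
    atmospheric scattering is assumed to follow a relative power law
    lambda**-n, so the haze measured in the starting band (where a true
    dark object exists) is scaled to the other bands rather than being
    measured independently in each. The exponent n depends on
    atmospheric clarity (about 4 for very clear Rayleigh conditions;
    the value here is an illustrative assumption). Wavelengths are band
    centres in micrometres.
    """
    return [start_haze * (wl / start_wavelength) ** (-n)
            for wl in target_wavelengths]
```

    With the green band as the starting band, the predicted red and NIR haze values decrease with wavelength, mirroring the trend of the reported 40 / 18.02 / 11.80 values.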

  1. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.

    PubMed

    Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T

    2017-01-01

    Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is the presence of severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which reportedly occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. 
Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
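    The maximum-scatter-fraction cap evaluated in the study can be sketched as a bin-wise clamp; this is an illustrative sketch, as in the scanner software the cap is applied during scatter scaling of the projection data:

```python
import numpy as np

def clamp_scatter_fraction(prompts, scatter_estimate, max_sf=0.40):
    """Cap the scatter estimate, bin by bin, at a maximum fraction of
    the measured prompt counts. Lowering this cap from the default 75%
    to 40% is the change the study found to suppress bladder/kidney
    halo-artifacts when the scatter model overestimates the scatter
    contribution.
    """
    prompts = np.asarray(prompts, dtype=float)
    return np.minimum(np.asarray(scatter_estimate, dtype=float),
                      max_sf * prompts)
```

    Bins where the estimate already sits below the cap pass through unchanged, so a tighter cap only acts where overestimation (and hence a halo) is plausible.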

  2. Novel auto-correction method in a fiber-optic distributed-temperature sensor using reflected anti-Stokes Raman scattering.

    PubMed

    Hwang, Dusun; Yoon, Dong-Jin; Kwon, Il-Bum; Seo, Dae-Cheol; Chung, Youngjoo

    2010-05-10

    A novel method for the auto-correction of a fiber-optic distributed temperature sensor using anti-Stokes Raman backscattering and its reflected signal is presented. The method processes two parts of the measured signal: the normally backscattered anti-Stokes signal and the reflected signal, which together eliminate not only the effect of local losses due to micro-bending or fiber damage but also the differential attenuation. Because beams of the same wavelength are used to cancel out local variations in the transmission medium, there is inherently no differential attenuation. The auto-correction concept was verified by bending experiments at different points along the fiber. (c) 2010 Optical Society of America.

  3. A model-based scatter artifacts correction for cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Wei; Zhu, Jun; Wang, Luyao

    2016-04-15

    Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifacts correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed. Scatter correction in both projection domain and image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either projection domain or image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. 
For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 HU to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.

  4. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and once the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. 
Results: Pearson’s correlation, r, proved to be a suitable GOF metric, correlating strongly with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
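    The Pearson-correlation stopping rule can be sketched as follows; r_min is an assumed threshold, not a value from the paper:

```python
import numpy as np

def pearson_gof(mc_scatter, fit):
    """Pearson correlation r between the aggregated MC scatter detector
    response and its current fit, used as the goodness-of-fit metric."""
    a = np.ravel(mc_scatter) - np.mean(mc_scatter)
    b = np.ravel(fit) - np.mean(fit)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def simulations_converged(mc_scatter, fit, r_min=0.99):
    """Terminate the concurrently running MC simulations once the fit
    reaches the required goodness of fit (r_min is an assumed value)."""
    return pearson_gof(mc_scatter, fit) >= r_min
```

    Because r keeps rising as MC noise averages out, checking it while the simulations run is what lets CMCF stop as soon as the fit, rather than the raw estimate, is accurate enough.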

  5. Library based x-ray scatter correction for dedicated cone beam breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. 
On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction requires no increase in radiation dose or hardware modifications, and it improves on existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.
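    The library lookup itself reduces to a nearest-neighbour match on the single input parameter, the breast diameter; a minimal sketch, with illustrative names and diameters:

```python
def select_scatter_map(library, breast_diameter_cm):
    """Pick the precomputed Monte Carlo scatter distribution whose
    semiellipsoid breast-model diameter is closest to the diameter
    estimated from the first-pass reconstruction. A single lookup key
    (breast diameter) is all the library needs; the keys and stored
    values here are hypothetical placeholders.
    """
    best = min(library, key=lambda d: abs(d - breast_diameter_cm))
    return library[best]
```

    Keeping the expensive MC work offline and reducing the online step to this lookup plus a spatial translation is what makes the correction near-instant at scan time.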

  6. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. 
Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
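    The subtract-and-discard step of the forward-model estimate can be sketched as follows; the crude mean fill-in stands in for the paper's Fourier-domain fitting of the untrusted bins:

```python
import numpy as np

def scatter_from_forward_model(measured, simulated_primary):
    """Estimate scatter as measured minus the simulated scatter-free
    primary, marking physically implausible (negative) bins as
    untrusted and replacing them with the mean of the trusted ones (a
    crude stand-in for the paper's Fourier-domain refinement)."""
    raw = np.asarray(measured, float) - np.asarray(simulated_primary, float)
    trusted = raw >= 0.0
    filled = raw.copy()
    filled[~trusted] = raw[trusted].mean() if trusted.any() else 0.0
    return filled

def scatter_corrected(measured, simulated_primary):
    """Subtract the refined scatter estimate from the measurement."""
    return np.asarray(measured, float) - scatter_from_forward_model(
        measured, simulated_primary)
```

    Because the primary is simulated deterministically from the segmented first-pass reconstruction, no extra scans, prior images, or stochastic simulations are needed, which is what keeps the per-projection cost at a fraction of a second.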

  7. Optimization of a simultaneous dual-isotope 201Tl/123I-MIBG myocardial SPECT imaging protocol with a CZT camera for trigger zone assessment after myocardial infarction for routine clinical settings: Are delayed acquisition and scatter correction necessary?

    PubMed

    D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis

    2017-08-01

    Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions in the autonomic nervous system located in areas of viable myocardium) that are a substrate for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echography (reference standard) to diagnose myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without, leading to fewer trigger zone diagnoses in the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late thallium scatter-corrected studies provided the best diagnostic values for myocardial necrosis assessment. Delayed, scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.

  8. Characterization and correction of cupping effect artefacts in cone beam CT

    PubMed Central

    Hunter, AK; McDavid, WD

    2012-01-01

    Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754

  9. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy.
The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. The GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method over the convolution-based method was statistically significant (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.

  10. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF.
This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
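The scaling step that C-SSS replaces can be sketched as follows: a scatter fraction is predicted from the average attenuation coefficient, and the unscaled SSS distribution is rescaled so that its total equals SF times the total prompts. This is a minimal sketch; the linear SF model and its coefficients are illustrative assumptions, not the authors' empirically fitted transformation.

```python
import numpy as np

def scale_sss(sss_shape, prompts, mu_avg, a=1.8, b=0.05):
    """Scale an unscaled SSS scatter estimate using a predicted scatter
    fraction (SF) instead of the scatter-only projection tail.
    The linear SF model a*mu_avg + b is a placeholder for the
    empirically fitted transformation described in the abstract."""
    sf = float(np.clip(a * mu_avg + b, 0.0, 0.95))
    scale = sf * prompts.sum() / sss_shape.sum()  # match total scatter to SF * prompts
    return scale * sss_shape, sf

scaled, sf = scale_sss(np.ones(10), np.full(10, 100.0), mu_avg=0.25)
```

With `mu_avg = 0.25`, the toy model predicts SF = 0.5, so the scaled scatter sums to half of the total prompts.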

  11. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia

    NASA Astrophysics Data System (ADS)

    Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.

    2015-09-01

This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer's disease (AD; n = 38), dementia with Lewy bodies (DLB; n = 29) or healthy normal controls (n = 30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.

  12. A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.

    PubMed

    Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R

    2016-10-01

In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) method estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium.
The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
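The bias compensation described above can be sketched for a single emission: predict the unscattered counts leaking into each scatter window from the photopeak counts and a Gaussian energy-resolution model, subtract them, then apply the usual TEW estimate. This is a simplified single-isotope sketch; the window boundaries and FWHM below are illustrative assumptions, and the paper's method additionally handles the second isotope's emission.

```python
import math

def gauss_fraction(e_lo, e_hi, e_peak, fwhm):
    """Fraction of a Gaussian photopeak (centre e_peak, FWHM in keV)
    falling inside the energy window [e_lo, e_hi]."""
    sigma = fwhm / 2.3548
    cdf = lambda e: 0.5 * (1.0 + math.erf((e - e_peak) / (sigma * math.sqrt(2.0))))
    return cdf(e_hi) - cdf(e_lo)

def tew_scatter(c_peak, c_lo, c_hi, w_peak, w_s, windows, e_peak, fwhm,
                modified=True):
    """TEW scatter estimate in the photopeak window. With modified=True,
    unscattered counts leaking into the two scatter windows (predicted
    from the peak counts via the energy-resolution model) are removed
    first, following the idea of the modified TEW. The window layout
    ((low window), (high window), (peak window)) is an illustrative choice."""
    if modified:
        (l1, h1), (l2, h2), (pl, ph) = windows
        n_total = c_peak / gauss_fraction(pl, ph, e_peak, fwhm)  # unscattered total
        c_lo = max(0.0, c_lo - n_total * gauss_fraction(l1, h1, e_peak, fwhm))
        c_hi = max(0.0, c_hi - n_total * gauss_fraction(l2, h2, e_peak, fwhm))
    return (c_lo / w_s + c_hi / w_s) * w_peak / 2.0
```

Because the modified estimate only subtracts non-negative leakage terms, it is never larger than the standard TEW estimate, which is how the negative bias of the standard method is reduced.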

  13. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  14. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the genetic algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor within the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
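The interleaved grouping can be sketched on a toy focusing problem: segments g, g+4, g+8, ... form group g, and each group is optimized sequentially. As a stand-in for the GA inner loop, the sketch below uses an exhaustive per-segment phase search; the grouping pattern, not the search method, is the point being illustrated, and the focus metric is a synthetic assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_groups = 64, 4
target = rng.uniform(0, 2 * np.pi, n_seg)        # unknown "ideal" phases

def focus_intensity(phase):
    # toy focus metric: maximal when all segment phasors align
    return abs(np.exp(1j * (phase - target)).sum()) ** 2

phase = np.zeros(n_seg)
for g in range(n_groups):                        # interleaved groups: g, g+4, g+8, ...
    for i in range(g, n_seg, n_groups):          # optimize this group's segments
        cands = np.linspace(0, 2 * np.pi, 16, endpoint=False)
        scores = [focus_intensity(np.where(np.arange(n_seg) == i, c, phase))
                  for c in cands]
        phase[i] = cands[int(np.argmax(scores))]
```

Each per-segment update keeps the best candidate (including the current value), so the focus metric never decreases during optimization.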

  15. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. 
With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
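One of the speed-up strategies mentioned above, interpolation along the angular direction, can be sketched directly: because scatter varies smoothly with gantry angle, the MC estimate is computed only at a sparse set of angles and interpolated to all acquisition angles. The linear interpolation and the array layout (angles x detector pixels) are implementation assumptions, not details from the paper.

```python
import numpy as np

def interp_scatter(sparse_angles, sparse_scatter, all_angles):
    """Interpolate MC scatter estimates, computed only at sparse_angles,
    to every acquisition angle. sparse_scatter has shape
    (len(sparse_angles), n_pixels); rows are per-angle scatter maps."""
    out = np.empty((len(all_angles),) + sparse_scatter.shape[1:])
    for j in range(sparse_scatter.shape[1]):
        out[:, j] = np.interp(all_angles, sparse_angles, sparse_scatter[:, j])
    return out
```

For a scatter signal that varies linearly with angle, the interpolation is exact, and in general the smoother the angular variation, the fewer MC angles are needed.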

  16. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.

  17. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral-band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting-band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
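The band-dependent haze prediction described above can be sketched with a power-law relative scattering model: from one starting-band haze value, haze in every other band is predicted from the wavelength dependence, then adjusted for per-band gain and offset. The band-centre wavelengths are approximate, and the identity gains/offsets and the lambda**-4 exponent (a very clear atmosphere) are illustrative assumptions, not the paper's tabulated model.

```python
import numpy as np

# Approximate TM band-centre wavelengths in micrometres (bands 1-4).
wavelengths = np.array([0.485, 0.560, 0.660, 0.830])
gains = np.ones(4)          # per-band sensor gain (assumed identity here)
offsets = np.zeros(4)       # per-band sensor offset (assumed zero here)

def predict_haze(start_band, start_dn, n=4.0):
    """Predict haze DN in every band from one starting-band haze value,
    using a lambda**-n relative scattering model (n = 4 approximates a
    very clear atmosphere) normalized for sensor gain and offset."""
    rel = wavelengths ** (-n)
    rel = rel / rel[start_band]                       # 1.0 at the starting band
    radiance = (start_dn - offsets[start_band]) / gains[start_band]
    return radiance * rel * gains + offsets

haze = predict_haze(0, 40.0)   # band-1 haze value of 40 DN
```

Because scattering falls off with wavelength, the predicted haze values decrease monotonically toward the longer-wavelength bands, which is exactly the correlation the independent band-by-band selection fails to enforce.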

18. [Evaluation of cross calibration of (123)I-MIBG H/M ratio with the IDW scatter correction method on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

(123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter for assessing regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio correlates between two different gamma camera systems and sought an H/M ratio conversion formula. Moreover, we assessed the feasibility of the (123)I dual-window (IDW) scatter correction method and compared H/M ratios with and without it. The H/M ratio displayed a good correlation between the two gamma camera systems, and we were able to create a new H/M conversion formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.

  19. [Development of a Striatal and Skull Phantom for Quantitative 123I-FP-CIT SPECT].

    PubMed

    Ishiguro, Masanobu; Uno, Masaki; Miyazaki, Takuma; Kataoka, Yumi; Toyama, Hiroshi; Ichihara, Takashi

123-Iodine-labelled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (123I-FP-CIT) single photon emission computed tomography (SPECT) images are used for differential diagnosis of conditions such as Parkinson's disease (PD). The specific binding ratio (SBR) is affected by scattering and attenuation in SPECT imaging, because gender and age lead to changes in skull density. It is therefore necessary to clarify and correct the influence of the skull with a phantom that simulates it. The purpose of this study was to develop phantoms that allow the evaluation of scatter and attenuation correction. Skull phantoms were prepared based on measurements of the average computed tomography (CT) value and average skull thickness of 12 males and 16 females. 123I-FP-CIT SPECT imaging of a striatal phantom was performed with these skull phantoms, reproducing normal and PD conditions. SPECT images were reconstructed with scatter and attenuation correction. SBR with partial-volume-effect correction (SBRact) and conventional SBR (SBRBolt) were measured and compared. The striatal and skull phantoms with 123I-FP-CIT were able to reproduce normal accumulation and the disease state of PD, and further reproduced the influence of skull density on SPECT imaging. The error relative to the true SBR was much smaller for SBRact than for SBRBolt. The effect of changing skull density on the SBR could be corrected by scatter and attenuation correction in 123I-FP-CIT SPECT imaging. The combination of the triple-energy-window method and the CT-based attenuation correction method would be the best correction method for SBRact.
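The SBR index compared above has a simple generic form: the striatal count density in excess of a non-specific reference region, divided by the reference. SBRBolt and the partial-volume-corrected SBRact differ in how the two count densities are sampled, not in this formula; the function below is that generic form, not either specific variant.

```python
def specific_binding_ratio(c_striatum, c_reference):
    """Generic SBR: (striatal count density - reference count density)
    divided by the reference count density. VOI definitions and
    partial-volume handling distinguish SBR_Bolt from SBR_act."""
    return (c_striatum - c_reference) / c_reference
```

For example, a striatal count density three times the background gives an SBR of 2.0, and equal densities give 0.0 (no specific binding).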

  20. Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL

    DOE PAGES

    He, Lilin; Do, Changwoo; Qian, Shuo; ...

    2014-11-25

Small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor have had their area detectors upgraded from the large, single-volume crossed-wire detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that traditional approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed in the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied during data reduction to data collected in any instrument configuration, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length, where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will become increasingly common as instruments with higher incident flux are constructed at neutron scattering facilities around the world.

  1. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial-domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties, and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
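The conventional baseline the adaptive methods improve upon can be sketched as one stationary SKS step: scatter is estimated by convolving the projection with a single fixed scatter kernel (here performed in Fourier space, which is also the route fASKS takes via its linearity approximation) and subtracted. The circular convolution and the delta-function test kernel are illustrative simplifications, not the paper's measured slab kernels.

```python
import numpy as np

def sks_correct(projection, kernel):
    """One conventional (stationary) SKS step: estimate scatter by
    circular convolution of the projection with a single symmetric
    scatter kernel (same shape as the projection, centred), then
    subtract. Adaptive variants swap in thickness-dependent kernels."""
    K = np.fft.rfft2(np.fft.ifftshift(kernel))        # move kernel centre to origin
    scatter = np.fft.irfft2(np.fft.rfft2(projection) * K, s=projection.shape)
    return projection - np.clip(scatter, 0.0, None)   # scatter cannot be negative
```

With a delta-like kernel of total weight 0.25, the estimated scatter is simply 25% of the projection, so a uniform projection of 1.0 is corrected to 0.75 everywhere, which gives a quick sanity check of the normalization.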

  2. Frequency-domain method for measuring spectral properties in multiple-scattering media: methemoglobin absorption spectrum in a tissuelike phantom

    NASA Astrophysics Data System (ADS)

    Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela

    1995-03-01

We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute determination of the optical properties of such media.

  3. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the template no longer changes. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The result shows that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy without relying on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the result shows that the image quality is remarkably improved.
The proposed method is efficient and practical to address the poor image quality issue of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
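The iterative loop described above can be sketched operator-by-operator: reconstruct, segment a template, forward-project it, low-pass the difference to the raw projections as the scatter estimate, subtract, and repeat. The three operator callables below are placeholders for the system-specific FDK reconstruction, segmentation, and projector, and the FFT low-pass stands in for whatever low-pass filter the method actually uses.

```python
import numpy as np

def lowpass(x, keep=2):
    """Keep only the lowest spatial frequencies, reflecting the
    assumption that scatter is smooth in the projection domain."""
    F = np.fft.fft2(x)
    mask = np.zeros(F.shape, dtype=bool)
    mask[:keep, :keep] = mask[:keep, -keep:] = True
    mask[-keep:, :keep] = mask[-keep:, -keep:] = True
    return np.real(np.fft.ifft2(np.where(mask, F, 0)))

def iterate_shading_correction(raw_proj, forward_project, reconstruct,
                               segment, n_iter=3):
    """Prior-free iterative scheme: reconstruct, segment a template,
    forward-project it, low-pass (raw - simulated) as the scatter
    estimate, subtract from the raw projections, and repeat."""
    proj = raw_proj.copy()
    for _ in range(n_iter):
        template = segment(reconstruct(proj))
        scatter = lowpass(raw_proj - forward_project(template))
        proj = raw_proj - np.clip(scatter, 0.0, None)
    return proj

# Toy check with identity operators and rounding as "segmentation":
# a true template of 1.0 plus a constant scatter of 0.3 is recovered.
identity = lambda x: x
corrected = iterate_shading_correction(np.full((8, 8), 1.3), identity,
                                       identity, np.round, n_iter=2)
```

In the toy case, the first iteration already segments the correct template (round(1.3) = 1.0), so the low-pass of the constant 0.3 residual removes the simulated scatter exactly.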

  4. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interacting with the object, and their contribution simply adds to the transmitted intensity. The resulting overestimation of the measured intensity corresponds to an underestimation of absorption and produces artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, for large objects in the MeV energy range because of the higher Scatter-to-Primary Ratio (SPR): incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector, and the contribution of photons produced by pair production and Bremsstrahlung also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels: analytical parameterizations of the scatter kernels are derived in terms of material thickness to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to object thickness and is applicable over a wide range of imaging conditions. Moreover, since no extra hardware is required, the approach is particularly advantageous where experimental complexities must be avoided. It has previously been tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes contributing to the scattered radiation. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
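The thickness-adapted kernel superposition idea can be sketched numerically. In the minimal sketch below, the full SKS kernel shapes are replaced by Gaussian stand-ins whose width and amplitude are selected by a local thickness bin; all parameter values are illustrative, not the paper's MCNP-derived parameterization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sks_scatter_estimate(primary, thickness, thickness_bins, widths, amplitudes):
    """Superpose per-pixel scatter kernels selected by local object thickness.

    Gaussian kernels stand in for the full thickness-adapted kernel shapes;
    `widths` and `amplitudes` are per-bin kernel parameters (illustrative)."""
    scatter = np.zeros_like(primary, dtype=float)
    # assign each pixel to the nearest thickness bin
    nearest = np.argmin(
        np.abs(thickness[..., None] - np.asarray(thickness_bins)), axis=-1)
    for i, (w, a) in enumerate(zip(widths, amplitudes)):
        # pixels in bin i spread scatter according to that bin's kernel
        source = np.where(nearest == i, primary * a, 0.0)
        scatter += gaussian_filter(source, sigma=w)
    return scatter

# toy projection: uniform primary with a thicker block in the centre
primary = np.full((64, 64), 100.0)
thickness = np.zeros((64, 64))
thickness[24:40, 24:40] = 5.0
scatter = sks_scatter_estimate(primary, thickness,
                               thickness_bins=[0.0, 5.0],
                               widths=[2.0, 6.0],
                               amplitudes=[0.02, 0.10])
corrected = primary - scatter  # scatter-corrected projection
```

In the real method the kernels are simulated (here with MCNP) and parameterized analytically in thickness, rather than chosen ad hoc as above.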

  5. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    NASA Astrophysics Data System (ADS)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  6. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In clinical head and pelvis CBCT scans, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays.
Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  7. [A practical procedure to improve the accuracy of radiochromic film dosimetry: integration of a uniformity correction method and a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a method for correcting the non-uniformity caused by the EBT2 film and the light scattering was also evaluated, along with the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if EBT2 is required to match the accuracy of EDR2. The red/blue correction method may improve accuracy, but it should be used carefully and with an understanding of the characteristics of EBT2 in both the red-only and the red/blue correction methods.
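The rationale for a red/blue correction can be illustrated with a toy model: if the blue channel responds mainly to active-layer thickness variation while the red channel carries the dose signal, dividing by the blue channel cancels the non-uniformity to first order. The channel responses below are invented for illustration and are not EBT2 calibration data.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy film model: the red channel carries the dose signal, the blue channel
# mainly reflects active-layer thickness variation (all values are invented)
thickness = 1.0 + rng.normal(0.0, 0.03, 256)    # ~3% coating non-uniformity
dose = 0.5                                      # uniform delivered dose
red = dose * thickness + rng.normal(0.0, 0.002, 256)
blue = 0.2 * thickness + rng.normal(0.0, 0.002, 256)

# red-only relative dose estimate inherits the thickness variation ...
dose_red_only = red / dose
# ... while dividing by the (rescaled) blue channel cancels it to first order
dose_red_blue = (red / (blue / 0.2)) / dose
```

With these toy numbers the red/blue estimate has markedly lower pixel-to-pixel spread than the red-only estimate, mirroring the uniformity gain reported above.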

  8. Improved determination of particulate absorption from combined filter pad and PSICAM measurements.

    PubMed

    Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David

    2016-10-31

Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample-by-sample basis. This regression approach provides significantly better agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM), because the linear regression correction compensates for the T-method's sensitivity to scattering errors. This approach produces accurate filter pad particulate absorption data at wavelengths in the blue/UV and in the NIR, where sensitivity issues limit PSICAM performance. The combination of filter pad absorption and the PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data, as it enables correction of multiple error sources across both measurements.
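The regression step itself is simple to sketch: if the filter pad absorption is modeled as a_fp = β·a_PSICAM + o, an ordinary linear fit recovers both parameters at once. The numbers below are synthetic, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "true" particulate absorption spectrum, as a PSICAM would measure it
a_psicam = np.linspace(0.05, 1.0, 50)

# simulated filter pad measurement: pathlength amplification beta plus a
# scattering offset o (both values invented for this sketch)
beta_true, offset_true = 2.0, 0.08
a_filterpad = (beta_true * a_psicam + offset_true
               + rng.normal(0.0, 0.005, a_psicam.size))

# linear regression resolves beta and the offset simultaneously
beta_est, offset_est = np.polyfit(a_psicam, a_filterpad, 1)

# corrected filter pad absorption
a_corrected = (a_filterpad - offset_est) / beta_est
```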

  9. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

The special topography of mountainous terrain distorts retrievals, making the spectral signatures of identical surface types differ. To improve the accuracy of studies of topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30-meter resolution can easily be matched by elevation data obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as QuickBird and IKONOS, but there is little research on topographic correction of CBERS-02B images. In this study, mountainous terrain in Liaoning was taken as the study area. The 15-meter original digital elevation model data were interpolated step by step to 2.36 meters. The C correction, SCS+C correction, Minnaert correction and Ekstrand correction were applied to correct the topographic effect, and the corrected results were compared. Scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard variance, slope of the scatter diagram, and separation factor were calculated. The analysis shows that shadows are weakened in the corrected images compared with the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is diminished. The Minnaert correction method gave the most effective result. These findings demonstrate that the existing correction methods can be successfully adapted to CBERS-02B images.
When high spatial resolution elevation data are hard to obtain, the DEM data can be interpolated step by step to approximate the required spatial resolution.
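Of the four methods, the C correction is the easiest to sketch: the empirical parameter c is the intercept-to-slope ratio from regressing digital number against cos(i), the cosine of the local solar incidence angle. The scene below is simulated; real use requires DEM-derived incidence angles.

```python
import numpy as np

def c_correction(dn, cos_i, cos_sz):
    """C correction: DN_corr = DN * (cos(sz) + c) / (cos(i) + c), with
    c = intercept/slope from regressing DN against cos(i)."""
    slope, intercept = np.polyfit(cos_i, dn, 1)
    c = intercept / slope
    return dn * (cos_sz + c) / (cos_i + c)

# simulated slope-dependent radiance: direct term ~cos(i) plus diffuse term
rng = np.random.default_rng(1)
cos_i = rng.uniform(0.2, 1.0, 500)   # cosine of local solar incidence angle
cos_sz = 0.8                         # cosine of solar zenith (flat terrain)
dn = 120.0 * cos_i + 15.0 + rng.normal(0.0, 1.0, cos_i.size)

corrected = c_correction(dn, cos_i, cos_sz)
slope_before = np.polyfit(cos_i, dn, 1)[0]
slope_after = np.polyfit(cos_i, corrected, 1)[0]  # ~0: illumination trend gone
```

The near-zero slope after correction is exactly the "diminished absolute slope of the fitted lines" criterion used in the study.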

  10. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect of the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal from the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with the width of the PSF and with increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better for wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT.
Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
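A minimal 1-D Richardson-Lucy implementation illustrates how a signal blurred by a long-tail PSF can be recovered, in the spirit of the blocked-region recovery above. The PSF and signal shapes below are illustrative stand-ins, not the study's simulated projections.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=50):
    """1-D Richardson-Lucy deconvolution (standard multiplicative update)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# long-tail PSF blurring a blocked/unblocked step pattern
x = np.linspace(-5.0, 5.0, 201)
psf = np.exp(-np.abs(x))                      # heavy-tailed kernel
signal = np.where(np.abs(x) < 2.0, 1.0, 0.1)  # unblocked vs blocked levels
blurred = fftconvolve(signal, psf / psf.sum(), mode="same")
recovered = richardson_lucy(blurred, psf)
```

The recovered profile is closer to the original step than the blurred input, which is the improvement the RMSE comparison above quantifies.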

  11. A library least-squares approach for scatter correction in gamma-ray tomography

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating and correcting for the scatter contribution. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
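The library least-squares idea can be sketched as solving for the weights of known transmission and scatter library spectra that best reproduce a measured detector spectrum. The spectra below are invented stand-ins, not the UoB system's libraries.

```python
import numpy as np

# library spectra on a 64-channel energy axis (illustrative shapes):
# a sharp transmission peak and a broad down-scattered continuum
e = np.arange(64, dtype=float)
transmission = np.exp(-0.5 * ((e - 50.0) / 2.0) ** 2)
scatter = np.exp(-e / 20.0)

# a measured spectrum is an unknown mixture of the library components
weights_true = np.array([3.0, 1.5])
measured = weights_true[0] * transmission + weights_true[1] * scatter

# library least-squares: solve for the component weights
library = np.column_stack([transmission, scatter])
weights, *_ = np.linalg.lstsq(library, measured, rcond=None)
transmission_only = weights[0] * transmission  # scatter-corrected component
```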

  12. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept.

    PubMed

    Lee, Ho; Fahimian, Benjamin P; Xing, Lei

    2017-03-21

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
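The interpolation step described above can be sketched with SciPy: scatter samples measured at the centres of the blocked stripes are fitted with a cubic B-spline and evaluated (including extrapolation) across the full detector width. The scatter profile and stripe geometry below are synthetic.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

cols = np.arange(256)                   # detector column index
strip_centers = np.arange(4, 256, 16)   # centres of the blocked stripes

# smooth "true" scatter profile (scatter varies slowly across the detector)
true_scatter = 40.0 + 10.0 * np.sin(cols / 256.0 * np.pi)

# scatter is only sampled in the shaded (blocked) regions
samples = true_scatter[strip_centers]

# cubic B-spline interpolation/extrapolation over the full detector width
spline = make_interp_spline(strip_centers, samples, k=3)
scatter_map = spline(cols)
```

Because scatter is spatially smooth, a sparse set of stripe samples is enough to reconstruct the full map, which is the premise of the BMB design.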

  13. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    PubMed

    Yang, Ching-Ching

    2016-01-01

Scatter is a major artifact-causing factor in dental cone-beam CT (CBCT) and has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating a model fitted to raw projections acquired with 2 different sizes of axial field-of-view (FOV). The function used for curve fitting was optimized by Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned using 120 kVp, 5 mA and a 9-second scanning time covering axial FOVs of 4 cm and 13 cm. The detectability in the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, quantified as the CNR for the rod inserts in the cylindrical phantom. The calculations performed in this work can hopefully provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures while keeping patient dose as low as reasonably achievable, which may ultimately make the CBCT scan a reliable and safe tool in clinical practice.
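The core assumption, that scatter is a low-frequency signal recoverable from a projection difference, can be sketched as follows. Here the small-FOV projection is treated as an idealized scatter-free primary estimate, and a Gaussian low-pass stands in for the paper's fitted/extrapolated model; all geometry and values are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# hypothetical primary projection (small-FOV scan treated as scatter-free)
primary = np.full((128, 128), 200.0)
primary[40:90, 40:90] = 80.0               # object shadow

# large-FOV raw projection = primary + smooth scatter background + noise
scatter_true = gaussian_filter(0.3 * primary, sigma=25)
raw = primary + scatter_true + rng.normal(0.0, 2.0, primary.shape)

# estimate scatter as the low-frequency part of the projection difference
scatter_est = gaussian_filter(raw - primary, sigma=10)
corrected = raw - scatter_est
```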

  14. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    NASA Astrophysics Data System (ADS)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  15. Conjugate adaptive optics with remote focusing in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel

    2018-02-01

    The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.

  16. [Atmospheric correction of HJ-1 CCD data for water imagery based on dark object model].

    PubMed

    Zhou, Li-Guo; Ma, Wei-Chun; Gu, Wan-Hua; Huai, Hong-Yan

    2011-08-01

The CCD multi-band data of HJ-1A have great potential for inland water quality monitoring, but accurate atmospheric correction is a prerequisite for their application. In this paper, a method based on dark pixels for retrieving water-leaving radiance is proposed. Besides Rayleigh scattering, aerosol scattering is important to atmospheric correction; moreover, inland lakes are typically case II waters, for which the water-leaving radiance is not zero. Synchronous MODIS shortwave infrared data were therefore used to obtain the aerosol parameters and, exploiting the relative stability of aerosol scattering at 560 nm, the water-leaving radiance for each visible and near infrared band was retrieved and normalized; the remote sensing reflectance of the water was then computed. The results show that this atmospheric correction method, based on the imagery itself, is effective for the retrieval of water parameters from HJ-1A CCD data.
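A much-simplified sketch of the dark-pixel idea is classic dark-object subtraction: the darkest pixels in each band are assumed to carry almost no water-leaving signal, so their radiance approximates the atmospheric path radiance for that band. This omits the paper's MODIS-based aerosol step and 560 nm normalization; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy top-of-atmosphere radiances over a lake (arbitrary units)
toa = {"blue": 80.0 + rng.normal(0.0, 2.0, 1000),
       "green": 60.0 + rng.normal(0.0, 2.0, 1000),
       "nir": 25.0 + rng.normal(0.0, 1.0, 1000)}

def dark_object_subtraction(band, percentile=1.0):
    """Subtract the darkest pixels' radiance as a path-radiance estimate."""
    path_radiance = np.percentile(band, percentile)
    return band - path_radiance

water_leaving = {name: dark_object_subtraction(band)
                 for name, band in toa.items()}
```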

  17. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

To enable production of the best chlorophyll products from SeaWiFS data, NOAA (CoastWatch and NOS) evaluated various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near infrared scattering to be negligible, an assumption that is invalid in coastal waters, the method overestimates the atmospheric contribution and consequently underestimates water reflectance at the lower wavelength bands on extrapolation. Several improved methods to estimate the near infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near infrared correction developed by Stumpf et al. (2003) results in the overall minimum error for U.S. waters. As of July 2004, NASA (SeaDAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.

  18. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

A method and system for solving the inverse acoustic scattering problem using an iterative approach that incorporates half-off-shell transition matrix (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  19. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first order scattering treatment using the exact scattering matrix of the medium. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by light reflected by and transmitted through the rough air-sea interface.

  20. Image Reconstruction for a Partially Collimated Whole Body PET Scanner

    PubMed Central

    Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.

    2008-01-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731

  2. Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals

    NASA Technical Reports Server (NTRS)

    Wang, Meng-Hua; King, Michael D.

    1997-01-01

We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer the cloud optical thickness remotely from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% at large solar zenith angles (≥60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer (1) for thin clouds and (2) for all clouds at large solar zenith angles. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to a cloud at any pressure, provided the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. 
We apply the Rayleigh correction algorithm to cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX), conducted near the Azores in June 1992, and compare these results to corresponding retrievals obtained using 0.88 μm. These results provide an example of Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 μm) to retrieve cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 μm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 μm.
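The single-scattering Rayleigh reflectance that such a scheme removes has a standard closed form, tau_R * P(Theta) / (4 cos(theta_0) cos(theta_v)), where P is the Rayleigh phase function; a first-pass subtraction (one step of an iterative scheme) can be sketched as below. The geometry and reflectance values are illustrative, not data from the paper.

```python
import numpy as np

def rayleigh_reflectance(tau_r, cos_sun, cos_view, scattering_angle):
    """Single-scattering Rayleigh reflectance approximation."""
    phase = 0.75 * (1.0 + np.cos(scattering_angle) ** 2)  # Rayleigh phase fn
    return tau_r * phase / (4.0 * cos_sun * cos_view)

tau_r = 0.044                          # Rayleigh optical depth near 0.66 um
cos_sun = np.cos(np.radians(30.0))     # solar zenith 30 deg
cos_view = np.cos(np.radians(10.0))    # view zenith 10 deg
scattering_angle = np.radians(140.0)

rho_measured = 0.35                    # measured cloud reflectance (toy value)
rho_rayleigh = rayleigh_reflectance(tau_r, cos_sun, cos_view, scattering_angle)
rho_cloud = rho_measured - rho_rayleigh  # first-pass corrected reflectance
```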

  3. Atmospheric correction for inland water based on Gordon model

    NASA Astrophysics Data System (ADS)

    Li, Yunmei; Wang, Haijun; Huang, Jiazhu

    2008-04-01

Remote sensing is widely used in water quality monitoring since it can acquire radiance information over a large area simultaneously. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere, not directly by the water body: water radiance information is heavily confounded by atmospheric molecular and aerosol scattering and absorption, and a slight bias in the evaluation of the atmospheric influence can introduce large errors in water quality evaluation. To invert water composition accurately, the water and atmospheric signals must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate the Rayleigh scattering, aerosol scattering and radiative transmission above Taihu Lake, with the influence of ozone and whitecaps also revised: one was synchronous meteorological data, and the other a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image, and the effect of the different methods was analyzed using in situ measured water surface spectra. The results indicate that the measured and estimated remote sensing reflectance were close for both methods. Compared with the method using the MODIS image, the method using synchronous meteorological data is more accurate, with a bias close to the inland water error criterion accepted for water quality inversion. This shows that the method is suitable for atmospheric correction of TM images of Taihu Lake.

  4. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

Spectral computed tomography is an emerging imaging method that involves using recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).

  5. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrade the accuracy of images. In clinical emission tomography, sophisticated correction methods employing additional x-ray CT or radionuclide transmission scans are often required. Having proven their potential in clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods for smaller objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, the Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice, as well as slices from real PET/CT data, were scaled to five different sizes (i.e., human, dog, rabbit, rat, and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction.
Next, using GATE, scatter fraction values (the ratio of scatter counts to total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and humans), MOBY (realistic mouse/rat model), and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under six different conditions, including attenuation and scatter corrections. Selected regions were analyzed across these reconstruction conditions and object sizes. Finally, real mouse data from the physical counterpart of the small animal PET scanner modeled in our simulations were analyzed under similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (˜2 cm diameter) showed ˜15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (˜1% for the smallest size and ˜6% for the largest). In the lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between using a uniform and the actual attenuation map (e.g., only ˜0.5% for the largest size in PET studies). Scatter correction was not significant for smaller objects but became increasingly important for larger ones. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For sizes up to ˜4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantitation is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
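As a minimal illustration of the quantity measured above, the scatter fraction is simply the scattered counts divided by the total detected counts. The sketch below is hypothetical (the function name and example tallies are not from the abstract):

```python
# Hypothetical sketch: scatter fraction as defined in the abstract,
# SF = scattered counts / total counts (total = primary + scatter).

def scatter_fraction(scatter_counts: float, total_counts: float) -> float:
    """Return the scatter fraction for one acquisition."""
    if total_counts <= 0:
        raise ValueError("total counts must be positive")
    return scatter_counts / total_counts

# e.g. a GATE-style tally: 1.5e5 scattered out of 1.0e6 detected events
sf = scatter_fraction(1.5e5, 1.0e6)
print(f"scatter fraction = {sf:.1%}")  # prints "scatter fraction = 15.0%"
```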

  6. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  7. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, enabling diagnosis of diseases at their early stages. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, together with an algorithm for modeling PET projection data acquisition used to verify the corrections.

  8. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter reaching the detector degrades the quality of cone-beam computed tomography (CBCT) and poses a problem for volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to the information lost to the blocker's obstruction, such methods require dual scans or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one half of the projection and scatter data on the other half. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information from the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. Once this subtraction is completed, the FDK algorithm with a cosine weighting function is used to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images caused by geometric errors between two opposed projections and by the interpolated scatter, total variation regularization is applied via minimization with a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%.
The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
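The core correction step described above, estimating a smooth scatter profile from the strip-shaded detector columns and subtracting it from the opposite-view projection, can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation: it uses linear interpolation (with constant extrapolation at the edges) where the paper uses cubic B-splines, and all names and toy data are hypothetical:

```python
import numpy as np

def estimate_scatter_1d(cols_shaded, scatter_samples, n_cols):
    """Interpolate strip-shadow scatter samples across a full detector row.
    np.interp extrapolates by holding the end values constant."""
    cols = np.arange(n_cols)
    return np.interp(cols, cols_shaded, scatter_samples)

def correct_projection(proj_opposite, scatter_row):
    """Subtract the estimated scatter row-wise, clipping to non-negative counts."""
    return np.clip(proj_opposite - scatter_row[None, :], 0.0, None)

n_cols = 512
shaded = np.arange(0, n_cols, 32)            # toy strip-shadow sample positions
samples = 50.0 + 10.0 * np.sin(shaded / 100)  # toy scatter measurements
scatter = estimate_scatter_1d(shaded, samples, n_cols)
proj = np.full((4, n_cols), 200.0)           # toy opposite-view projection rows
corrected = correct_projection(proj, scatter)
```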

  9. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two sets of projections were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections, and finally reconstructed into the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single-beam spot-scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than those achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared with the Varian CBCT reconstruction.
Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way to CBCT-based image- and dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.

  10. Atmospheric monitoring in MAGIC and data corrections

    NASA Astrophysics Data System (ADS)

    Fruck, Christian; Gaug, Markus

    2015-03-01

    A method for analyzing the returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. The method allows calculation of the transmission through the atmospheric boundary layer as well as through thin cloud layers. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream makes it possible to apply atmospheric corrections later in the analysis. Such corrections extend the effective observation time of MAGIC by including data taken under adverse atmospheric conditions, and in the future they will help reduce the systematic uncertainties of energy and flux.
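The fitting idea described above can be illustrated with a short sketch: fit a line to the log of the range-corrected return, ln(P·r²), in Rayleigh-dominated regions below and above a layer; the offset between the two fits at the layer gives the two-way layer transmission. This is a hedged toy version under simplifying assumptions (equal slopes above and below, noiseless synthetic data), not the MAGIC analysis code:

```python
import numpy as np

def two_way_transmission(r_lo, sig_lo, r_hi, sig_hi, r_layer):
    """Fit ln(P * r^2) below and above a layer; the drop at r_layer gives T^2
    (the two-way transmission; one-way is its square root)."""
    p_lo = np.polyfit(r_lo, np.log(sig_lo * r_lo**2), 1)
    p_hi = np.polyfit(r_hi, np.log(sig_hi * r_hi**2), 1)
    drop = np.polyval(p_hi, r_layer) - np.polyval(p_lo, r_layer)
    return np.exp(drop)

# Synthetic Rayleigh-like returns with a cloud of optical depth tau at ~2.5 km
tau = 0.2
r_lo = np.linspace(1.0, 2.0, 20)
sig_lo = np.exp(5.0 - 0.1 * r_lo) / r_lo**2
r_hi = np.linspace(3.0, 4.0, 20)
sig_hi = np.exp(5.0 - 0.1 * r_hi - 2.0 * tau) / r_hi**2
t2 = two_way_transmission(r_lo, sig_lo, r_hi, sig_hi, 2.5)  # ~exp(-2*tau)
```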

  11. SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, B; Liu, S; Zhang, T

    2016-06-15

    Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the dose calculations of current treatment planning systems (TPS). The purpose of this study is to provide a method to correct for the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and with a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all edges of the aperture, a weighted sum based on the distance from the calculation point to the aperture edge was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) pass rate between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) pass rate at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
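The gamma-index pass rate used for the comparison above can be sketched in one dimension. This is a brute-force illustrative version (global dose normalization, O(N²) search), not a clinical implementation, and all names are hypothetical:

```python
import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """Fraction of reference points with gamma <= 1.
    dd: dose-difference criterion as a fraction of the global max; dta: mm."""
    norm = dose_ref.max()
    pass_count = 0
    for xi, di in zip(x, dose_ref):
        # gamma at this reference point: minimum combined distance over all
        # evaluated points, in units of the (dta, dd) criteria
        gam = np.min(np.sqrt(((x - xi) / dta) ** 2 +
                             ((dose_eval - di) / (dd * norm)) ** 2))
        pass_count += gam <= 1.0
    return pass_count / len(x)

# Identical profiles trivially pass everywhere (gamma = 0 at each point)
x = np.linspace(0.0, 10.0, 101)
ref = np.exp(-((x - 5.0) ** 2) / 4.0)
rate = gamma_pass_rate(x, ref, ref)
```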

  12. Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering

    NASA Astrophysics Data System (ADS)

    Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc

    2015-03-01

    Confocal Raman microspectroscopy allows non-invasive, in-depth molecular and conformational characterization of biological tissues. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensity by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with diffuse reflectance spectroscopy (DRS). Used on our phantoms, this technique allowed their optical properties to be checked: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each layer depends directly on that layer's own scattering properties. Therefore, determining the optical properties of a biological sample, for example with DRS, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for confocal reflectance microscopy and modified on some points, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our model, thus enabling quantitative studies for characterization or diagnosis.

  13. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements for forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.

  14. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The Seawinds scatterometer on the advanced Earth observing satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the advanced mechanically scanned radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, that is, the difference between the total measured brightness temperature and the contribution from surface emission. A major problem the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one uses the scattering coefficient measured along with the brightness temperature, and the other uses the brightness temperature alone. For both methods, best results occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. The method based on the scattering coefficient needs the wind direction from the preceding cell; the method using brightness temperature alone needs the wind speed from the preceding cell. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.

  15. Bistatic scattering from a cone frustum

    NASA Technical Reports Server (NTRS)

    Ebihara, W.; Marhefka, R. J.

    1986-01-01

    The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.

  16. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    PubMed

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating first-order scatter in a medical computed tomography scanner. The method was developed based on a physical model of x-ray photon interactions with matter and a ray-tracing technique. The simulated scatter was compared with an actual scatter measurement, using two phantoms with homogeneous and heterogeneous material distributions. The simulated scatter profile agreed with the measured one, with an average difference of 25% or less. Finally, tomographic images with scatter-induced artifacts were corrected based on the simulated scatter profiles, and image quality improved significantly.

  17. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering are translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte Carlo method is derived in a fashion that allows for future refinement and that includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.

  18. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been included approximately by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. We found that for bound states, the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking the direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.

  19. Relativistic corrections to the multiple scattering effect on the Sunyaev-Zel'dovich effect in the isotropic approximation

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu

    2001-10-01

    We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by the direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters of k_B T_e ~ 15 keV, the ratio of the two contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution is safely neglected for the observed galaxy clusters.

  20. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    PubMed

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10 and 37 mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.
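For reference, the two semiquantitative metrics reported above reduce to simple formulas: the standardized uptake value (tissue concentration normalized by injected dose per body weight) and the percent error of a recovered activity concentration against its true value. A minimal sketch with hypothetical names and example numbers:

```python
def suv(conc_bq_per_ml: float, injected_dose_bq: float, body_weight_g: float) -> float:
    """SUV = tissue concentration / (injected dose / body weight).
    With g/mL ~ 1, the result is dimensionless."""
    return conc_bq_per_ml / (injected_dose_bq / body_weight_g)

def percent_error(measured: float, true_value: float) -> float:
    """Absolute percent error of a recovered value against the truth."""
    return 100.0 * abs(measured - true_value) / true_value

# e.g. 5 kBq/mL uptake after a 370 MBq injection in a 74 kg patient -> SUV 1.0
s = suv(5000.0, 3.7e8, 7.4e4)
# e.g. a sphere recovered at 1.22 times its true concentration -> 22% error
e = percent_error(1.22, 1.0)
```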

  1. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.

  2. SU-E-J-10: A Moving-Blocker-Based Strategy for Simultaneous Megavoltage and Kilovoltage Scatter Correction in Cone-Beam Computed Tomography Image Acquired During Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Lee, H; Wang, J

    2014-06-01

    Purpose: To evaluate a moving-blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., "blocker") consisting of equally spaced lead strips (3.2 mm strip width with 3.2 mm gaps in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to move back and forth along the gantry rotation axis during the CBCT acquisition. Both the MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. These artifacts were substantially reduced in the moving-blocker-corrected CBCT images for both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery.
This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).

  3. GENERAL: Scattering Phase Correction for Semiclassical Quantization Rules in Multi-Dimensional Quantum Systems

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung

    2010-02-01

    While the scattering phase for several one-dimensional potentials can be derived exactly, less is known for multi-dimensional quantum systems. This work provides a method to extend one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated with Bogomolny's transfer operator method applied to two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectra of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as the Gutzwiller trace formula, dynamical zeta functions, and the semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.

  4. Optical artefact characterization and correction in volumetric scintillation dosimetry

    PubMed Central

    Robertson, Daniel; Hui, Cheukkai; Archambault, Louis; Mohan, Radhe; Beddar, Sam

    2014-01-01

    The goals of this study were (1) to characterize the optical artefacts affecting measurement accuracy in a volumetric liquid scintillation detector, and (2) to develop methods to correct for these artefacts. The optical artefacts addressed were photon scattering, refraction, camera perspective, vignetting, lens distortion, the lens point spread function, stray radiation, and noise in the camera. These artefacts were evaluated by theoretical and experimental means, and specific correction strategies were developed for each artefact. The effectiveness of the correction methods was evaluated by comparing raw and corrected images of the scintillation light from proton pencil beams against validated Monte Carlo calculations. Blurring due to the lens and refraction at the scintillator tank-air interface were found to have the largest effect on the measured light distribution, and lens aberrations and vignetting were important primarily at the image edges. Photon scatter in the scintillator was not found to be a significant source of artefacts. The correction methods effectively mitigated the artefacts, increasing the average gamma analysis pass rate from 66% to 98% for gamma criteria of 2% dose difference and 2 mm distance to agreement. We conclude that optical artefacts cause clinically meaningful errors in the measured light distribution, and we have demonstrated effective strategies for correcting these optical artefacts. PMID:24321820
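    The 2%/2 mm gamma analysis used above as the pass-rate metric can be sketched in one dimension. The profiles and grids below are illustrative toys, not the study's light distributions, and a real comparison would run in 2D/3D with interpolation:

```python
import math

# 1D gamma index (Low et al. style): for each reference point, the minimum
# combined dose-difference / distance-to-agreement penalty over all evaluated
# points. A point "passes" when gamma <= 1.
def gamma_index(ref_pts, eval_pts, dd_frac=0.02, dta_mm=2.0):
    """ref_pts/eval_pts: lists of (position_mm, dose). Returns gamma per ref point."""
    d_max = max(d for _, d in ref_pts)  # global dose normalization
    return [min(math.hypot((de - dr) / (dd_frac * d_max), (xe - xr) / dta_mm)
                for xe, de in eval_pts)
            for xr, dr in ref_pts]

# Toy Gaussian dose profile on a 1 mm grid, compared against the same profile
# shifted by 0.5 mm and sampled on a fine 0.1 mm grid.
ref = [(float(x), math.exp(-((x - 5.0) / 2.0) ** 2)) for x in range(11)]
meas = [(0.1 * i, math.exp(-((0.1 * i - 5.5) / 2.0) ** 2)) for i in range(101)]
gammas = gamma_index(ref, meas)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)  # fraction passing 2%/2 mm
```

A 0.5 mm shift stays well inside the 2 mm distance-to-agreement criterion, so every point passes; a shift larger than 2 mm would start failing points in the high-gradient regions.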

  5. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    PubMed

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or by MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the image showed slight high-activity artifacts around the bottle when the bottle contained very high radioactivity.
In patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected scatter in 15O-gas brain PET acquired in the 3-dimensional mode, preventing the cold artifacts observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  6. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  7. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten

    2016-08-01

    A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection-image-based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through the visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact-corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post-processing of clinical projection images and does not require patient-specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.

  8. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.

  9. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search three quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic and growth-curve expressions. The local weighted function is then constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and on a PCA-BP neural network method, which can be used to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and also enhance the information in spectral peaks. This provides a more efficient way to enhance the correlation between corrected spectra and coal qualities significantly, and to improve the accuracy and stability of the analytical model substantially.
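    The linear baseline that these quasi-linear expressions generalize is classic multiplicative scatter correction (MSC): regress each spectrum against a reference, then undo the fitted offset and gain. A minimal sketch with made-up spectra (the reference here is simply an idealized spectrum, not the paper's coal data):

```python
# Classic (linear) multiplicative scatter correction: fit
# spectrum ~ a + b * reference by least squares, then return (spectrum - a) / b.
def msc_correct(spectrum, reference):
    n = len(reference)
    mean_r = sum(reference) / n
    mean_s = sum(spectrum) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(reference, spectrum))
    var = sum((r - mean_r) ** 2 for r in reference)
    b = cov / var                    # multiplicative (scatter) gain
    a = mean_s - b * mean_r          # additive offset
    return [(s - a) / b for s in spectrum]

reference = [0.10, 0.25, 0.60, 0.40, 0.15]       # idealized spectrum
measured = [2.0 * r + 0.5 for r in reference]    # scatter: gain 2.0, offset 0.5
corrected = msc_correct(measured, reference)     # recovers the reference shape
```

The paper's method replaces the linear expression with quadratic/cubic/growth-curve forms and weights the fit locally with a kernel at each wavelength; the block above is only the linear special case.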

  10. Single-Inclusive Jet Production In Electron-Nucleon Collisions Through Next-To-Next-To-Leading Order In Perturbative QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abelof, Gabriel; Boughezal, Radja; Liu, Xiaohui

    2016-10-17

    We compute the Oσ 2σ 2 s perturbative corrections to inclusive jet production in electron-nucleon collisions. This process is of particular interest to the physics program of a future Electron Ion Collider (EIC). We include all relevant partonic processes, including deep-inelastic scattering contributions, photon-initiated corrections, and parton-parton scattering terms that first appear at this order. Upon integration over the final-state hadronic phase space we validate our results for the deep-inelastic corrections against the known next-to-next-to-leading order (NNLO) structure functions. Our calculation uses the N-jettiness subtraction scheme for performing higher-order computations, and allows for a completely differential description of the deep-inelasticmore » scattering process. We describe the application of this method to inclusive jet production in detail, and present phenomenological results for the proposed EIC. The NNLO corrections have a non-trivial dependence on the jet kinematics and arise from an intricate interplay between all contributing partonic channels.« less

  11. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually the interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters, for the calibration set and test set respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods, and was comparable with other state-of-the-art methods.
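    The core idea, estimate the additive and multiplicative parameters only inside an interference dominant region and then correct the full spectrum with them, can be sketched as follows. The window choice and the synthetic spectra are illustrative assumptions, not the paper's two-step IDR optimisation:

```python
# Fit x ~ a + b * ref by least squares over a slice, return (offset a, gain b).
def fit_affine(x, ref):
    n = len(ref)
    mr = sum(ref) / n
    mx = sum(x) / n
    b = (sum((r - mr) * (v - mx) for r, v in zip(ref, x))
         / sum((r - mr) ** 2 for r in ref))
    return mx - b * mr, b

# Estimate (a, b) inside the IDR window only, then correct the whole spectrum.
def idr_correct(x, ref, idr):
    lo, hi = idr
    a, b = fit_affine(x[lo:hi], ref[lo:hi])
    return [(v - a) / b for v in x]

ref = [0.05, 0.04, 0.06, 0.90, 0.70, 0.30, 0.05]  # analyte peak sits mid-spectrum
x = [1.5 * r + 0.3 for r in ref]                  # multiplicative + additive interference
corrected = idr_correct(x, ref, idr=(0, 3))       # first three channels: low absorbance
```

Because the parameters are estimated where the analyte signal is weak, the analyte peak itself does not bias the interference estimate, which is the motivation for using an IDR rather than the full spectrum.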

  12. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter- and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath, for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground-truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) on the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four definitions of F were investigated: standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM". Error in T was calculated as the percentage difference with respect to in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of the quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations.
    The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to within 10% of in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
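    A minimal sketch of conjugate-view geometric-mean attenuation correction, assuming the standard form T = √(C1·C2)·e^(μt/2)·F. The counts, the water attenuation coefficient, and the detector separation below are illustrative numbers, not values from the study; a useful property shown here is that the GM estimate is independent of the source depth between the two detectors:

```python
import math

def gm_true_counts(c1, c2, mu, t, F=1.0):
    """Conjugate-view geometric mean: true counts from opposed projections.
    c1, c2: counts on detectors 1 and 2; mu: linear attenuation coefficient
    (1/cm); t: detector separation (cm); F: background-activity factor."""
    return math.sqrt(c1 * c2) * math.exp(mu * t / 2.0) * F

mu_water = 0.153   # 1/cm, roughly water at 140 keV (illustrative)
t = 7.0            # cm, detector separation (compressed breast thickness)
T_true = 1.0e5     # true source counts
d = 2.5            # source depth from detector 1 (cm)

# Each projection sees the source through its own attenuating path:
c1 = T_true * math.exp(-mu_water * d)
c2 = T_true * math.exp(-mu_water * (t - d))
T_est = gm_true_counts(c1, c2, mu_water, t)   # recovers T_true for any depth d
```

Because c1·c2 = T²·e^(-μt) regardless of d, the depth terms cancel in the geometric mean, which is consistent with the abstract's finding that all GM variants were insensitive to sphere depth.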

  13. Does Your Optical Particle Counter Measure What You Think it Does? Calibration and Refractive Index Correction Methods.

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas

    2013-04-01

    Optical Particle Counters (OPCs) are the de-facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table along with their instrument which indicates the particle diameters which represent the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter and obvious problems occur when refractive index corrections are attempted, but multiple diameters correspond to the same OPC response. Here we recommend that OPCs are calibrated in terms of particle scattering cross section as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines. 
As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP), calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in the inaccessible regions of the Sahara.
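    The recommended calibration in scattering-cross-section space reduces, per bin, to finding all diameters whose cross-section falls inside the bin's boundaries. A sketch with a toy, monotone response function; in real use the cross-section would come from Mie-Lorenz theory for the measured particles' refractive index, and it may be non-monotonic, which this brute-force search handles:

```python
# Map an OPC bin defined by scattering cross-section boundaries [s_lo, s_hi)
# into diameter space, for an arbitrary (possibly non-monotonic) response.
def diameters_in_bin(sigma, d_grid, s_lo, s_hi):
    """Return every grid diameter whose cross-section falls inside the bin."""
    return [d for d in d_grid if s_lo <= sigma(d) < s_hi]

def bin_centre_width(ds):
    """Summarize a set of in-bin diameters as a bin centre and width."""
    lo, hi = min(ds), max(ds)
    return (lo + hi) / 2.0, hi - lo

sigma = lambda d: d ** 2                       # toy response (stand-in for Mie)
d_grid = [i * 0.01 for i in range(1, 501)]     # 0.01-5.00 um, fine grid
ds = diameters_in_bin(sigma, d_grid, 1.0, 4.0) # diameters with sigma in [1, 4)
centre, width = bin_centre_width(ds)           # ~1.5 um centre, ~1.0 um width
```

For a genuinely non-monotonic Mie response, `ds` can be a union of disjoint diameter intervals; summarizing it with a single centre/width is itself a modelling choice that the cited Sourceforge software addresses more carefully.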

  14. GATE Simulations of Small Animal SPECT for Determination of Scatter Fraction as a Function of Object Size

    NASA Astrophysics Data System (ADS)

    Konik, Arda; Madsen, Mark T.; Sunderland, John J.

    2012-10-01

    In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially smaller than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging, considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV, 90% and 245 keV, 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV), including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies for 99mTc and 111In. However, more sophisticated methods may be needed for 125I.

  15. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
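    The single-electron ingredient being averaged over the relativistic Maxwellian is the Klein-Nishina differential cross section, which is compact enough to state directly. A sketch for an unpolarized photon, with energy in units of the electron rest energy:

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, m

def klein_nishina(eps, theta):
    """Klein-Nishina dSigma/dOmega (m^2/sr) for a photon of energy eps
    (in units of m_e c^2) Compton-scattering through angle theta."""
    ratio = 1.0 / (1.0 + eps * (1.0 - math.cos(theta)))  # E'/E, Compton kinematics
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)
```

Two sanity checks: at theta = 0 the cross section equals R_E² for any energy, and in the soft-photon limit (eps → 0) the formula reduces to the Thomson value (R_E²/2)(1 + cos²θ). The paper's kernel is the thermal average of this quantity over electron momenta, reduced analytically to a single integral.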

  16. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds per projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter; with further acceleration and a method to account for multiple scatter, it may be useful for practical scatter correction schemes.

  17. Partial Wave Dispersion Relations: Application to Electron-Atom Scattering

    NASA Technical Reports Server (NTRS)

    Temkin, A.; Drachman, Richard J.

    1999-01-01

    In this Letter we propose the use of partial wave dispersion relations (DR's) as the way of solving the long-standing problem of correctly incorporating exchange in a valid DR for electron-atom scattering. In particular a method is given for effectively calculating the contribution of the discontinuity and/or poles of the partial wave amplitude which occur in the negative E plane. The method is successfully tested in three cases: (i) the analytically solvable exponential potential, (ii) the Hartree potential, and (iii) the S-wave exchange approximation for electron-hydrogen scattering.

  18. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
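    The basic Pauli-blocking rejection step used in ensemble Monte Carlo transport (accept a scattering event into final state k' with probability 1 − f(k'), treating rejected events as self-scattering) can be sketched as follows. The occupancy value is an illustrative number, not a computed graphene distribution:

```python
import random

def accept_scattering(f_final, rng=random.random):
    """Pauli-blocking rejection: accept with probability 1 - f(k'),
    i.e. reject (self-scattering) with probability equal to the occupancy
    of the tentative final state."""
    return rng() > f_final

# Illustrative check of the acceptance statistics:
random.seed(1)
trials = 100_000
f = 0.7                                                 # final state 70% occupied
accepted = sum(accept_scattering(f) for _ in range(trials))
rate = accepted / trials                                # should approach 1 - f = 0.3
```

The subtlety the paper addresses is that for e-e scattering f(k') is itself the evolving simulated distribution (and, in graphene, lives on a linear band), so the occupancy must be estimated on the fly rather than taken as a fixed function as in this sketch.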

  19. Correction of nonuniform attenuation and image fusion in SPECT imaging by means of separate X-ray CT.

    PubMed

    Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji

    2002-06-01

    Improvements in image quality and quantitative measurement, and the addition of detailed anatomical structure, are important topics for single-photon emission tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw projection data of SPECT, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which was originally uniform, was recovered after scatter and attenuation correction processing. The clinical study also showed good overlapping of cardiac and skin surface outlines on the fused SPECT and X-ray CT images.
The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
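    The triple energy window (TEW) scatter estimate applied to the SPECT projections has a simple closed form: scatter under the photopeak is approximated by a trapezoid spanned by two narrow windows on either side of the peak. A sketch with illustrative window widths and counts (not values from the study):

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Triple-energy-window scatter estimate: trapezoidal interpolation of the
    per-keV scatter rates in the two side windows across the photopeak window."""
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0

c_peak = 10_000.0   # counts in the photopeak window (illustrative)
scatter = tew_scatter(c_low=400.0, c_high=100.0,   # side-window counts
                      w_low=3.0, w_high=3.0,       # side-window widths, keV
                      w_peak=28.0)                 # photopeak window width, keV
primary = c_peak - scatter                         # scatter-corrected counts
```

The corrected (primary) counts then feed the attenuation-corrected ML-EM reconstruction described in the abstract.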

  20. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  1. New approach to CT pixel-based photon dose calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, J.W.; Henkelman, R.M.

    The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.

  2. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy

    NASA Astrophysics Data System (ADS)

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-01

    A novel strategy combining an iteratively cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction as spectral preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information in differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, S; Meredith, R; Azure, M

    Purpose: To support the phase I trial for toxicity, biodistribution and pharmacokinetics of intraperitoneal (IP) 212Pb-TCMC-trastuzumab in patients with HER-2 expressing malignancy, a whole-body gamma camera imaging method was developed for estimating the amount of 212Pb-TCMC-trastuzumab left in the peritoneal cavity. Methods: 212Pb decays to 212Bi via beta emission. 212Bi emits an alpha particle at an average of 6.1 MeV. The 238.6 keV gamma ray, with a 43.6% yield, can be exploited for imaging. An initial phantom was made of saline bags containing 212Pb. Images were collected at 238.6 keV with a medium-energy general-purpose collimator. There are other high-energy gamma emissions (e.g., 511 keV, 8%; 583 keV, 31%) that penetrate the septa of the collimator and contribute scatter into the 238.6 keV window. An upper scatter window was used for scatter correction for these high-energy gammas. Results: A small source containing 212Pb can be easily visualized. Scatter correction on images of a small 212Pb source resulted in a ∼50% reduction in the full width at tenth maximum (FWTM), while the change in full width at half maximum (FWHM) was <10%. For photopeak images, substantial scatter around the phantom source extended to >5 cm outside; scatter correction improved image contrast by removing this scatter around the sources. Patient imaging in the first cohort (n=3) showed little redistribution of 212Pb-TCMC-trastuzumab out of the peritoneal cavity. Compared to the early post-treatment images, the 18-hour post-injection images illustrated the shift to a more uniform anterior/posterior abdominal distribution and the loss of intensity due to radioactive decay. Conclusion: Use of a medium-energy collimator, a 15%-wide 238.6 keV photopeak window, and a 7.5% upper scatter window is adequate for quantification of 212Pb radioactivity inside the peritoneal cavity for alpha radioimmunotherapy of ovarian cancer. Research Support: AREVA Med, NIH 1UL1RR025777-01.
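The upper-window correction described here follows the general dual-energy-window pattern: counts in a scatter window estimate the contamination under the photopeak. A minimal sketch (the calibration factor k = 1 and the window-width-ratio scaling are assumptions for illustration, not values from the abstract):

```python
import numpy as np

def dew_scatter_correction(photopeak, scatter_window, peak_width_kev,
                           scatter_width_kev, k=1.0):
    """Dual-energy-window scatter estimate subtracted from the photopeak.

    Counts in an (upper) scatter window are scaled by the window-width
    ratio and an empirical calibration factor k, then subtracted.
    Negative results are clipped to zero.
    """
    scatter_estimate = k * (peak_width_kev / scatter_width_kev) * scatter_window
    return np.clip(photopeak - scatter_estimate, 0.0, None)

# 15% photopeak window at 238.6 keV and a 7.5% upper scatter window
corrected = dew_scatter_correction(
    photopeak=np.array([1200.0, 800.0, 90.0]),
    scatter_window=np.array([300.0, 200.0, 60.0]),
    peak_width_kev=0.15 * 238.6,
    scatter_width_kev=0.075 * 238.6,
)
```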

  4. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    NASA Astrophysics Data System (ADS)

    Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.

    1997-02-01

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.

  5. Depth Dose Measurement using a Scintillating Fiber Optic Dosimeter for Proton Therapy Beam of the Passive-Scattering Mode Having Range Modulator Wheel

    NASA Astrophysics Data System (ADS)

    Hwang, Ui-Jung; Shin, Dongho; Lee, Se Byeong; Lim, Young Kyung; Jeong, Jong Hwi; Kim, Hak Soo; Kim, Ki Hwan

    2018-05-01

    To apply a scintillating fiber dosimetry system to measure the range of a proton therapy beam, a new method was proposed to correct for the quenching effect when measuring a spread-out Bragg peak (SOBP) proton beam whose range is modulated by a range modulator wheel. The scintillating fiber dosimetry system was composed of a plastic scintillating fiber (BCF-12), optical fiber (SH 2001), photomultiplier tube (H7546), and data acquisition system (PXI6221 and SCC68). The proton beam was generated by a cyclotron (Proteus-235) at the National Cancer Center in Korea. It operated in the double-scattering mode, and the spread-out Bragg peak was achieved by a spinning range modulation wheel. Bragg peak beams and SOBP beams of various ranges were measured, corrected, and compared to ion chamber data. For the Bragg peak beam, a quenching equation was used to correct the quenching effect. In the proposed process for correcting SOBP beams, the data measured with the scintillating fiber were decomposed into the constituent Bragg peaks of the SOBP; each Bragg peak was corrected, and the SOBP was then recomposed. The measured depth-dose curve for the single Bragg peak beam was well corrected by the simple quenching equation, and correction for the SOBP beam was conducted with the newly proposed method. The corrected SOBP signal was in accordance with the results measured with an ion chamber. We propose a new method to correct the SOBP beam for the quenching effect in a scintillating fiber dosimetry system. This method can be applied to other scintillation dosimetry systems for radiation beams in which quenching occurs in the scintillator.
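The single-peak correction can be illustrated if the "simple quenching equation" is taken to be Birks' law; that choice is an assumption for this sketch, since the abstract does not name its equation:

```python
import numpy as np

def birks_corrected_dose(light_signal, stopping_power, kB):
    """Invert scintillator quenching with Birks' law.

    Birks: L ~ (dE/dx) / (1 + kB * dE/dx), i.e. the light under-responds
    where the stopping power is high (the Bragg peak), so
    dose ~ L * (1 + kB * dE/dx) recovers the depth-dose shape.
    kB is the Birks constant of the scintillator (assumed known).
    """
    return light_signal * (1.0 + kB * stopping_power)

# Toy depth-dose: quenching suppresses the peak more than the plateau
stopping_power = np.array([5.0, 6.0, 8.0, 40.0, 2.0])  # peak at index 3
true_dose = stopping_power.copy()
kB = 0.01
measured_light = true_dose / (1.0 + kB * stopping_power)  # simulated quenched signal
recovered = birks_corrected_dose(measured_light, stopping_power, kB)
```

For an SOBP, the paper applies this kind of correction per constituent Bragg peak before recomposing the plateau.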

  6. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  8. A single-scattering correction for the seismo-acoustic parabolic equation.

    PubMed

    Collins, Michael D

    2012-04-01

    An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.

  9. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high-quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
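The weight-fitting step of this algorithm reduces to a linear least-squares problem. A minimal sketch with the basis images taken as given arrays (the power-raising and FBP steps are omitted; names are illustrative):

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    """Solve for weights w minimizing || sum_i w_i * B_i - prior ||_2.

    In the paper, the basis images B_i are filtered-backprojection
    reconstructions of the raw cone-beam projections raised to different
    powers; here they are taken as given arrays, and the weights come
    from ordinary linear least squares.
    """
    A = np.stack([b.ravel() for b in basis_images], axis=1)  # (pixels, n_basis)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    return w

# Toy example: the prior is an exact combination of two basis images
rng = np.random.default_rng(0)
b1, b2 = rng.random((8, 8)), rng.random((8, 8))
prior = 0.7 * b1 + 0.3 * b2
w = fit_basis_weights([b1, b2], prior)
```

The corrected image is then the weighted sum of the basis images with the fitted weights.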

  10. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretation, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light used for remote sensing. Radiative transfer theory has often been applied, with difficulty, to the study of densely packed particulate media like planetary regoliths and snow, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute the scattering properties of clusters of particles and capture the near-field effects important for dense packing. The scattering parameters from the T-matrix computations are then modified with the static-structure-factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's law) is computed with the invariant-imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle-size fraction of enstatite, representing common mineralogical and particle-size components of regoliths, at mid-infrared wavelengths (5-50 µm). The spectrum modeled with the T-matrix method and static-structure-factor correction at moderate packing densities (filling factors of 0.1-0.2) fit the corresponding laboratory spectrum better than the spectrum modeled by the equivalent method without the correction.
Future work will test the combination of the superposition T-matrix method and the static-structure-factor correction for larger particle sizes and polydispersed clusters, in search of the most effective modeling of spectra of densely packed particulate media.

  11. WE-DE-207B-10: Library-Based X-Ray Scatter Correction for Dedicated Cone-Beam Breast CT: Clinical Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and degrade quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone; variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image SNU from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in the coronal view and from 10.14%±4.1% to 3.02%±1.26% in the sagittal view. The average CNR improved by a factor of 1.49±0.40 in the coronal view and by 2.12±1.54 in the sagittal view.
Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity of implementation make this approach attractive for clinical use. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
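The select-scale-subtract pipeline of the Methods can be sketched as follows. This is a toy stand-in: the nearest-diameter lookup and the total-intensity-ratio scaling below are simplifications of the paper's library selection and deformation steps, and all names are illustrative:

```python
import numpy as np

def library_scatter_correct(projection, library, breast_diameter_cm):
    """Library-based scatter correction, illustrative sketch only.

    `library` maps breast diameter (cm) -> (simulated total projection,
    simulated scatter projection) for a semi-ellipsoid breast model.
    The nearest-diameter entry is chosen, scaled by the ratio of total
    projection intensity, and subtracted from the measured projection.
    """
    diam = min(library, key=lambda d: abs(d - breast_diameter_cm))
    sim_total, sim_scatter = library[diam]
    scale = projection.sum() / sim_total.sum()
    return np.clip(projection - scale * sim_scatter, 0.0, None)

# Toy 4x4 projections; a measured diameter of 11.5 cm selects the 12 cm entry
library = {
    12.0: (np.full((4, 4), 10.0), np.full((4, 4), 3.0)),
    16.0: (np.full((4, 4), 14.0), np.full((4, 4), 5.0)),
}
measured = np.full((4, 4), 10.0)
corrected = library_scatter_correct(measured, library, breast_diameter_cm=11.5)
```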

  12. Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities

    NASA Astrophysics Data System (ADS)

    Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.

    2012-03-01

    A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft-tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. 
A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streaks. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated, yet practical approach in identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.

  13. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.

    2015-06-01

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  14. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in the reconstructions. The latter improvement was less marked where the expected CT number of a material was very different from the background material in which it was embedded.
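The node-scoring-plus-interpolation step exploits the fact that scatter is spatially smooth; it can be sketched in one dimension (all names and values are illustrative, not from the paper):

```python
import numpy as np

def scatter_profile_from_nodes(node_positions, node_scatter, detector_positions):
    """Interpolate Monte Carlo scatter scored at sparse forced-detection
    node points onto the full detector grid (1-D sketch of the CRFD idea).
    Because scatter is smooth and low-frequency, linear interpolation
    between a few nodes recovers the full profile cheaply."""
    return np.interp(detector_positions, node_positions, node_scatter)

detector = np.arange(0.0, 10.0, 0.5)          # detector coordinates (cm)
nodes = np.array([0.0, 5.0, 10.0])            # sparse forced-detection nodes
scatter_at_nodes = np.array([2.0, 4.0, 2.0])  # MC-scored scatter at the nodes
scatter = scatter_profile_from_nodes(nodes, scatter_at_nodes, detector)

measured = np.full(detector.shape, 10.0)      # measured projection (toy)
corrected = measured - scatter                # scatter-corrected primary
```

Scoring at a handful of nodes instead of every detector pixel is what keeps the Monte Carlo cost down to the minutes quoted in the abstract.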

  15. Retrieval of background surface reflectance with BRD components from pre-running BRDF

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo

    2016-10-01

    Many countries launch satellites to observe the Earth's surface. As the importance of surface remote sensing grows, surface reflectance has become a core parameter for land-surface climate studies. Observing surface reflectance from satellites has weaknesses, however, such as limited temporal resolution and sensitivity to view and solar angles. These bidirectional effects add noise to the surface reflectance time series, which can lead to errors when determining surface reflectance, so a correction model that normalizes the sensor data is necessary. The Bidirectional Reflectance Distribution Function (BRDF) improves accuracy by modeling the scattering components (isotropic, geometric, and volumetric scattering), and in this study it was used to correct the bidirectional error and retrieve the Background Surface Reflectance (BSR) in two steps. The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: a pre-running BRDF is applied to observed SPOT/VEGETATION surface reflectance (VGT-S1) and angular data to obtain the BRD coefficients for the three scattering components. In the second step, the BRDF is applied again in the opposite direction, with the BRD coefficients and angular data, to retrieve the BSR. The resulting BSR is very similar to the VGT-S1 reflectance and appears adequate: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compared the reflectance of clear-sky pixels identified from the SPOT/VGT status map.
Comparing BSR with VGT-S1, the bias ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545, reasonable results that confirm the BSR is similar to VGT-S1. A weakness of this study is missing pixels in the BSR, which occur where too few observations are available to retrieve the BRD components. If these missing pixels are filled and the accuracy further improved, the BSR can serve as useful input for retrieving surface products that depend on surface reflectance, such as cloud masking and aerosol retrieval.
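The two-step retrieval above rests on a linear kernel-driven BRDF model, R = f_iso + f_geo·K_geo + f_vol·K_vol. A minimal sketch of the coefficient-fitting step (kernel values are assumed precomputed for each observation geometry, e.g. from Ross-Thick/Li-Sparse kernels; all names and numbers are illustrative):

```python
import numpy as np

def fit_brd_coefficients(reflectances, k_geo, k_vol):
    """Fit the three BRD kernel coefficients (isotropic, geometric,
    volumetric) of a linear kernel-driven BRDF model by least squares
    over a set of multi-angular observations of the same pixel."""
    A = np.column_stack([np.ones_like(k_geo), k_geo, k_vol])
    coeffs, *_ = np.linalg.lstsq(A, reflectances, rcond=None)
    return coeffs  # f_iso, f_geo, f_vol

# Synthetic multi-angular observations generated from known coefficients
k_geo = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
k_vol = np.array([0.05, 0.0, -0.02, 0.03, 0.08])
obs = 0.3 + 0.1 * k_geo + 0.5 * k_vol
f_iso, f_geo, f_vol = fit_brd_coefficients(obs, k_geo, k_vol)
```

The second, "opposite-direction" step then evaluates the fitted model at a normalized geometry to produce the angle-corrected reflectance.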

  16. Cloaking of arbitrarily shaped objects with homogeneous coatings

    NASA Astrophysics Data System (ADS)

    Forestiere, Carlo; Dal Negro, Luca; Miano, Giovanni

    2014-05-01

    We present a theory for the cloaking of arbitrarily shaped objects and demonstrate electromagnetic scattering cancellation through designed homogeneous coatings. First, in the small-particle limit, we expand the dipole moment of a coated object in terms of its resonant modes. By zeroing the numerator of the resulting rational function, we accurately predict the permittivity values of the coating layer that abates the total scattered power. Then, we extend the applicability of the method beyond the small-particle limit, deriving the radiation corrections of the scattering-cancellation permittivity within a perturbation approach. Our method permits the design of invisibility cloaks for irregularly shaped devices such as complex sensors and detectors.
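For the simplest concrete case, a homogeneous spherical coating in the quasi-static limit, the condition the abstract generalizes can be written explicitly (a textbook special case, not the paper's formulation): with core permittivity $\varepsilon_c$ (radius $a$), shell $\varepsilon_s$ (radius $b$), host $\varepsilon_h$, and filling fraction $f=(a/b)^3$, the dipole polarizability of the coated sphere is

```latex
\alpha = 4\pi b^{3}\,
\frac{(\varepsilon_s-\varepsilon_h)(\varepsilon_c+2\varepsilon_s)
      + f\,(\varepsilon_c-\varepsilon_s)(\varepsilon_h+2\varepsilon_s)}
     {(\varepsilon_s+2\varepsilon_h)(\varepsilon_c+2\varepsilon_s)
      + 2f\,(\varepsilon_s-\varepsilon_h)(\varepsilon_c-\varepsilon_s)}
```

and zeroing its numerator yields the shell permittivity that cancels the dipole scattering, the analogue of the paper's zeroing of the numerator of the modal expansion for arbitrary shapes.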

  17. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, Y; Sharp, G

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
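The distal WEPL compared above is a line integral of stopping power relative to water. A minimal sketch of a planning-CT versus CBCT range shift along one ray (values and names are illustrative, not from the study):

```python
import numpy as np

def wepl_along_ray(rel_stopping_power, step_mm):
    """Water-equivalent path length: the line integral of stopping power
    relative to water, here sampled at equal steps along the beam."""
    return float(np.sum(rel_stopping_power) * step_mm)

# Range shift between planning CT and a treatment CBCT along the same ray,
# with simulated tissue loss near the distal edge (1 mm sampling steps)
rsp_plan = np.array([1.0] * 80 + [1.05] * 20)
rsp_cbct = np.array([1.0] * 80 + [1.05] * 15 + [0.2] * 5)
shift_mm = wepl_along_ray(rsp_plan, 1.0) - wepl_along_ray(rsp_cbct, 1.0)
```

A positive shift means the beam now overshoots the original distal edge, which is the kind of millimetre-scale change the weekly CBCT evaluation is designed to catch.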

  18. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy.

    PubMed

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-25

    A novel strategy that combines an iteratively fitted cubic-spline baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze surface-enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and with DPLS classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot predict the class assignments of unknown samples. The DPLS classification, however, can discriminate the class assignments of unknown banned additives using differences in relative intensities. The results demonstrate that SERS spectroscopy combined with ICSF baseline correction and DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Self-interaction correction in multiple scattering theory: application to transition metal oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daene, Markus W; Lueders, Martin; Ernst, Arthur

    2009-01-01

    We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure, and in particular the magnetic moments and energy gaps, are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.

  20. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but theirmore » level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to {sup 99m}Tc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 {sup 99m}Tc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical {sup 99m}Tc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). 
    A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created, including reconstructions with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC showed significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac {sup 99m}Tc activity are achievable using attenuation and scatter corrections with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
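
    The dual-energy-window (DEW) idea underlying the correction in this record can be sketched in a few lines. The array names, the scatter multiplier k = 0.5, and the clipping at zero are illustrative assumptions; the authors' CZT-specific modification for the low-energy tail is not reproduced here.

```python
import numpy as np

def dew_scatter_correct(photopeak, scatter_window, k=0.5):
    """Classic dual-energy-window scatter correction sketch: estimate
    the scatter contribution inside the photopeak window as k times
    the counts recorded in a lower 'scatter' window, then subtract,
    clipping negative results at zero."""
    return np.clip(photopeak - k * scatter_window, 0.0, None)

# toy projection counts
photopeak = np.array([120.0, 90.0, 60.0])
scatter_win = np.array([40.0, 30.0, 20.0])
print(dew_scatter_correct(photopeak, scatter_win))  # [100.  75.  50.]
```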

  1. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot adequately restore the high-frequency part, causing streaks in the reconstructed image. To address this problem, we deduce a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is then developed: PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the reliability and practicability of the moving BSA method. Evaluation is performed on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI effectively exploits the redundancy in CB data and therefore has further potential in CBCT studies.
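
    The baseline that PC-VI improves upon, spatial interpolation of the scatter sampled behind the beam stops, can be sketched for a single detector row. The PC-VI step itself (borrowing high frequencies from neighbouring angular views) is not reproduced, and all names here are illustrative.

```python
import numpy as np

def scatter_subtract_1d(measured, stop_cols):
    """Baseline BSA correction for one detector row: pixels behind
    the beam stops record scatter only, so linearly interpolating
    those samples across the row gives a smooth scatter estimate,
    which is subtracted from the measurement. The primary fluence
    behind the stops themselves remains blocked; restoring it is the
    problem PC-VI addresses."""
    cols = np.arange(measured.size)
    scatter = np.interp(cols, stop_cols, measured[stop_cols])
    return measured - scatter
```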

  2. Transient radiative transfer in a scattering slab considering polarization.

    PubMed

    Yi, Hongliang; Ben, Xun; Tan, Heping

    2013-11-04

    Both transient effects and polarization must be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time-shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. To improve the computational efficiency and accuracy, a time-shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, and the excellent agreement between them validates the correctness of the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.

  3. Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  4. Elastic electron scattering from the DNA bases cytosine and thymine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colyer, C. J.; Bellm, S. M.; Lohmann, B.

    2011-10-15

    Cross-section data for electron scattering from biologically relevant molecules are important for the modeling of energy deposition in living tissue. Relative elastic differential cross sections have been measured for cytosine and thymine using the crossed-beam method. These measurements have been performed at six discrete electron energies between 60 and 500 eV and for detection angles between 15° and 130°. Calculations have been performed with the screening-corrected additivity rule method and are in good agreement with the present experiment.

  5. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties in the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in a Gaussian process kernel. Based on their statistical differences, both sets of parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor for the sensitivity and the noise amplitude of each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.
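
    The record's Gaussian-process machinery is beyond an abstract-sized sketch, but the core idea, separating a fixed per-channel sensitivity from random noise by pooling many repeated profiles, can be illustrated with a crude median-ratio stand-in. This is not the authors' method; the function, the assumed-known smooth profiles, and all names are illustrative.

```python
import numpy as np

def channel_sensitivity_factors(shots, true_profiles):
    """Crude stand-in for data-driven sensitivity inference: over many
    shots, the ratio of each channel's reading to the (here assumed
    known) smooth profile fluctuates with random noise but is centred
    on the fixed channel gain; the median over shots isolates that
    gain as a per-channel correction factor."""
    return np.median(shots / true_profiles, axis=0)
```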

  6. Adaptive handling of Rayleigh and Raman scatter of fluorescence data based on evaluation of the degree of spectral overlap

    NASA Astrophysics Data System (ADS)

    Hu, Yingtian; Liu, Chao; Wang, Xiaoping; Zhao, Dongdong

    2018-06-01

    At present, general scatter-handling methods are unsatisfactory when scatter and fluorescence overlap severely in the excitation-emission matrix. In this study, an adaptive method for scatter handling of fluorescence data is proposed. First, the Raman scatter was corrected by subtracting the baseline of deionized water, which was collected in each experiment to adapt to intensity fluctuations. Then, the degrees of spectral overlap between Rayleigh scatter and fluorescence were classified into three categories based on the distance between the spectral peaks. The corresponding algorithms, including setting to zero and fitting on single or both sides, were applied after the evaluation of the degree of overlap for individual emission spectra. The proposed method minimizes the number of fitting and interpolation processes, which reduces complexity, saves time, avoids overfitting, and, most importantly, preserves the authenticity of the data. Furthermore, the effectiveness of this procedure for subsequent PARAFAC analysis was assessed and compared to Delaunay interpolation by conducting experiments with four typical organic chemicals and real water samples. Using this method, we conducted long-term monitoring of tap water and of river water near a dyeing and printing plant. This method can improve adaptability and accuracy in the scatter handling of fluorescence data.

  7. Stopping power of dense plasmas: The collisional method and limitations of the dielectric formalism.

    PubMed

    Clauser, C F; Arista, N R

    2018-02-01

    We present a study of the stopping power of plasmas using two main approaches: the collisional (scattering theory) and the dielectric formalisms. In the former case, we use a semiclassical method based on quantum scattering theory. In the latter case, we use the full description given by the extension of the Lindhard dielectric function to plasmas of all degeneracies. We compare these two theories and show that the dielectric formalism has limitations when it is used for slow heavy ions or atoms in dense plasmas. We present a study of these limitations and show the regimes where the dielectric formalism can be used, with appropriate corrections to include the usual quantum and classical limits. The semiclassical method, on the other hand, shows the correct behavior for all plasma conditions and projectile velocities and charges. We consider different models for the ion charge distribution, including bare and dressed ions as well as neutral atoms.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that allow electron scattering on phonons or impurities to be accounted for correctly. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.

  9. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
    This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient planar gamma camera images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
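
    A minimal, undamped Richardson-Lucy iteration in one dimension shows the update rule whose parameters the record tunes (number of iterations and damping λ). The damped variant and the measured asymmetric PSF are not reproduced; this sketch assumes a normalized, symmetric PSF and invented data.

```python
import numpy as np

def richardson_lucy_1d(image, psf, iterations=15):
    """Undamped Richardson-Lucy deconvolution: repeatedly reblur the
    estimate with the PSF, compare with the data, and multiply the
    estimate by the correction ratio convolved with the mirrored PSF."""
    est = np.full_like(image, image.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(est, psf, mode='same')
        ratio = image / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode='same')
    return est
```

    With a normalized PSF this update conserves total counts, which is why it suits count-limited gamma camera data.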

  10. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1), and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements which includes multiple scattering and which can be applied to practical situations are as follows. (1) What is needed is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation is required which can be applied in the case of a realistic aerosol. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.

  11. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    NASA Astrophysics Data System (ADS)

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development, we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, and with due spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all involved nuclear physical processes of the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite, compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  12. Lattice operators for scattering of particles with spin

    DOE PAGES

    Prelovsek, S.; Skerbis, U.; Lang, C. B.

    2017-01-30

    We construct operators for simulating the scattering of two hadrons with spin on the lattice. Three methods are shown to give consistent operators for P N, P V, V N and N N scattering, where P, V and N denote pseudoscalar meson, vector meson and nucleon, respectively. Explicit expressions for the operators are given for all irreducible representations at the lowest two relative momenta. In the first method each hadron has good helicity; in the second, the hadrons are in a definite partial wave L with total spin S. These enable the physics interpretation of the operators obtained from the general projection method. The correct transformation properties of the operators in all three methods are proven. The total momentum of the two hadrons is restricted to zero, since parity is a good quantum number in this case.

  13. Post-PRK corneal scatter measurements with a scanning confocal slit photon counter

    NASA Astrophysics Data System (ADS)

    Taboada, John; Gaines, David; Perez, Mary A.; Waller, Steve G.; Ivan, Douglas J.; Baldwin, J. Bruce; LoRusso, Frank; Tutt, Ronald C.; Perez, Jose; Tredici, Thomas; Johnson, Dan A.

    2000-06-01

    Increased corneal light scatter, or 'haze', has been associated with excimer laser photorefractive surgery of the cornea. The increased scatter can affect visual performance; however, topical steroid treatment after surgery substantially reduces post-PRK scatter. For the treatment and monitoring of the scattering characteristics of the cornea, various methods have been developed to objectively measure the magnitude of the scatter. These methods can generally measure scatter associated with clinically observable levels of haze. For patients with moderate to low PRK corrections receiving steroid treatment, measurement becomes fairly difficult, as the clinical haze rating is unobservable. The goal of this development was to realize an objective, non-invasive physical measurement that could produce a significant reading at any level, including the background present in a normal cornea. As back-scatter is the only readily accessible observable, the instrument is based on this measurement. Achieving this required a confocal method to bias out the background light that would normally confound conventional methods. A number of subjects with nominal refractive errors in an Air Force study have undergone PRK surgery. A measurable increase in corneal scatter was observed in these subjects even though clinical ratings of the haze were noted as level zero. Other favorable aspects of this back-scatter-based instrument include an optical capability to perform what is equivalent to an optical A-scan of the anterior chamber. Lens scatter can also be measured.

  14. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
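
    The linear-extrapolation procedure that this record challenges reduces to fitting chamber response against wall thickness and reading off the zero-wall intercept. This sketch with made-up numbers illustrates only that mechanics, not the MC-calculated K(wall); the function and variable names are assumptions.

```python
import numpy as np

def zero_wall_reading(wall_thicknesses, readings):
    """Linear extrapolation to zero wall thickness: fit a straight
    line to chamber reading vs. wall thickness and return the
    intercept, i.e. the estimated wall-free response from which the
    Kw*Kcep correction is formed."""
    slope, intercept = np.polyfit(wall_thicknesses, readings, 1)
    return intercept
```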

  15. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and with radar measurements indicate good agreement in polarization, angular trends, and ka up to 4, where k is the wavenumber and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.

  16. Experimental validation of a multi-energy x-ray adapted scatter separation method

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-12-01

    Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
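
    The NRMSE figure quoted above (45% reduced to about 5%) follows from a standard definition. The normalization by the reference mean below is an assumption, since the record does not state which normalization was used.

```python
import numpy as np

def nrmse(estimate, reference):
    """Normalized root-mean-square error between an estimated image
    and a reference (primary) image, normalized here by the mean of
    the reference."""
    err = np.sqrt(np.mean((estimate - reference) ** 2))
    return err / np.mean(reference)
```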

  17. Incoherent-scatter computed tomography with monochromatic synchrotron x ray: feasibility of multi-CT imaging system for simultaneous measurement of fluorescent and incoherent scatter x rays

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-10-01

    We describe a new system for incoherent scatter computed tomography (ISCT) using monochromatic synchrotron X rays, and we discuss its potential for in vivo medical imaging. The system operates on the basis of first-generation computed tomography (CT). The reconstruction method for ISCT uses the least squares method with singular value decomposition. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring at KEK, Japan. An acrylic cylindrical phantom of 20-mm diameter containing a cross-shaped channel was imaged. The channel was filled with a diluted iodine solution with a concentration of 200 μg I/ml. Spectra obtained with the system's high-purity germanium (HPGe) detector separated the incoherent X-ray line from the other notable peaks, i.e., the iodine Kα and Kβ1 X-ray fluorescent lines and the coherent scattering peak. CT images were reconstructed from projections generated by integrating the counts in an energy window centred on the incoherent scattering peak, whose width was approximately 2 keV. The reconstruction routine employed an X-ray attenuation correction algorithm. The resulting image showed more homogeneity than one without the attenuation correction.

  18. X-ray coherent scattering tomography of textured material (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhu, Zheyuan; Pang, Shuo

    2017-05-01

    Small-angle X-ray scattering (SAXS) measures the signature of angularly dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. A SAXS image reconstruction method for a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of spatially resolved, material-specific isotropic scattering signatures inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not true for all materials. Anisotropic scatter cannot be captured using conventional CSCT methods and results in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetries, the scatter texture is more predictable. We will also discuss compressive schemes and data acquisition implementations to improve the collection efficiency and accelerate the imaging process.

  19. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values in a fixed region of the resultant images, for different subtracted scatter values, provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before rising again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profiles taken through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value found by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract in order to minimize grid-line artifacts with high-resolution x-ray imaging detectors.
This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
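The iterative search in this record reduces to a one-dimensional minimization: subtract a trial constant scatter value, divide by the flat-field-with-grid image, and track residual grid-line structure as the standard deviation in a fixed ROI. A minimal numpy sketch, assuming a rectangular ROI and a user-supplied grid of trial values (`find_optimal_scatter` and its interface are hypothetical, not the authors' code):

```python
import numpy as np

def find_optimal_scatter(phantom_img, flat_with_grid, roi, scatter_values):
    """For each trial scatter value: subtract it, divide by the
    flat-field-with-grid image, and measure the remaining grid-line
    structure as the standard deviation inside the ROI."""
    r0, r1, c0, c1 = roi
    stds = []
    for s in scatter_values:
        corrected = (phantom_img - s) / flat_with_grid
        stds.append(corrected[r0:r1, c0:c1].std())
    stds = np.asarray(stds)
    # The minimum of the structure-noise curve marks the optimal value.
    best = scatter_values[int(np.argmin(stds))]
    return best, stds
```

At the optimal trial value the grid pattern divides out most completely, so the ROI standard deviation dips before rising again, mirroring the curve described in the abstract.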

  20. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  1. Reversal of photon-scattering errors in atomic qubits.

    PubMed

    Akerman, N; Kotler, S; Glickman, Y; Ozeri, R

    2012-09-07

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  2. Coherent beam control through inhomogeneous media in multi-photon microscopy

    NASA Astrophysics Data System (ADS)

    Paudel, Hari Prasad

Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO.
Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending imaging FOV due to sample aberrations.

  3. Analysis of position-dependent Compton scatter in scintimammography with mild compression

    NASA Astrophysics Data System (ADS)

    Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.

    2003-10-01

In breast scintigraphy using 99mTc-sestamibi, the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton-scattered gamma rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although there is a very strong dependence of the scatter-to-primary ratio on location within the breast, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
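The per-ROI spectral fit described above is an ordinary linear least-squares problem in two basis spectra. A hypothetical numpy sketch (the function name and the plain `lstsq` solver are assumptions; the authors do not specify their fitting routine):

```python
import numpy as np

def fit_scatter_primary(measured, scatter_only, scatter_free):
    """Fit measured ≈ a*scatter_only + b*scatter_free by least squares and
    report the scatter-to-primary ratio implied by the fitted components."""
    A = np.column_stack([scatter_only, scatter_free])
    (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
    # Ratio of total fitted scatter counts to total fitted primary counts.
    s_to_p = (a * np.sum(scatter_only)) / (b * np.sum(scatter_free))
    return a, b, s_to_p
```

Repeating the fit per ROI yields the position-dependent correction factors the record refers to; a non-negative solver would be a more robust choice in practice.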

  4. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.

  5. GIXSGUI: a MATLAB toolbox for grazing-incidence X-ray scattering data visualization and reduction, and indexing of buried three-dimensional periodic nanostructured films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang

GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecuts, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.

  6. Retrieval of the scattering and microphysical properties of aerosols from ground-based optical measurements including polarization. I. Method.

    PubMed

    Vermeulen, A; Devaux, C; Herman, M

    2000-11-20

A method has been developed for retrieving the scattering and microphysical properties of atmospheric aerosol from measurements of solar transmission, aureole, and angular distribution of the scattered and polarized sky light in the solar principal plane. Numerical simulations of measurements have been used to investigate the feasibility of the method and to test the algorithm's performance. It is shown that the absorption and scattering properties of an aerosol, i.e., the single-scattering albedo, the phase function, and the polarization for single scattering of incident unpolarized light, can be obtained by use of radiative transfer calculations to correct the values of scattered radiance and polarized radiance for multiple scattering, Rayleigh scattering, and the influence of the ground. The method requires only measurement of the aerosol's optical thickness and an estimate of the ground's reflectance and does not need any specific assumption about properties of the aerosol. The accuracy of the retrieved phase function and polarization of the aerosols is examined at near-infrared wavelengths (e.g., 0.870 µm). The aerosol's microphysical properties (size distribution and complex refractive index) are derived in a second step. The real part of the refractive index is a strong function of the polarization, whereas the imaginary part is strongly dependent on the sky's radiance and the retrieved single-scattering albedo. It is demonstrated that inclusion of polarization data yields the real part of the refractive index.

  7. Virtual Excitation and Multiple Scattering Correction Terms to the Neutron Index of Refraction for Hydrogen.

    PubMed

    Schoen, K; Snow, W M; Kaiser, H; Werner, S A

    2005-01-01

The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10^-3 fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.

  8. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
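Dark object subtraction and band ratioing, as used in this study, can be sketched in a few lines. This is a simplified illustration with invented function names; real workflows operate per spectral band on calibrated radiance:

```python
import numpy as np

def dark_object_subtract(band):
    """Treat the darkest pixel as a zero-reflectance 'dark object' and
    attribute its value to additive atmospheric path radiance."""
    return band - band.min()

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio of haze-corrected bands; ratioing suppresses illumination
    (topographic) effects and emphasizes spectral differences."""
    return dark_object_subtract(band_a) / (dark_object_subtract(band_b) + eps)
```

Subtracting the haze offset before ratioing matters: an additive path-radiance term that differs between bands would otherwise distort the ratio.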

  9. Electron collisions with phenol: Total, integral, differential, and momentum transfer cross sections and the role of multichannel coupling effects on the elastic channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Romarly F. da; Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-580 Santo André, São Paulo; Oliveira, Eliane M. de

    2015-03-14

We report theoretical and experimental total cross sections for electron scattering by phenol (C6H5OH). The experimental data were obtained with an apparatus based in Madrid and the calculated cross sections with two different methodologies, the independent atom method with screening corrected additivity rule (IAM-SCAR), and the Schwinger multichannel method with pseudopotentials (SMCPP). The SMCPP method in the N_open-channel coupling scheme, at the static-exchange-plus-polarization approximation, is employed to calculate the scattering amplitudes at impact energies ranging from 5.0 eV to 50 eV. We discuss the multichannel coupling effects in the calculated cross sections, in particular how the number of excited states included in the open-channel space impacts upon the convergence of the elastic cross sections at higher collision energies. The IAM-SCAR approach was also used to obtain the elastic differential cross sections (DCSs) and for correcting the experimental total cross sections for the so-called forward angle scattering effect. We found a very good agreement between our SMCPP theoretical differential, integral, and momentum transfer cross sections and experimental data for benzene (a molecule differing from phenol by replacing a hydrogen atom in benzene with a hydroxyl group). Although some discrepancies were found for lower energies, the agreement between the SMCPP data and the DCSs obtained with the IAM-SCAR method improves, as expected, as the impact energy increases. We also have a good agreement among the present SMCPP calculated total cross section (which includes elastic, 32 inelastic electronic excitation processes and ionization contributions, the latter estimated with the binary-encounter-Bethe model), the IAM-SCAR total cross section, and the experimental data when the latter is corrected for the forward angle scattering effect [Fuss et al., Phys. Rev. A 88, 042702 (2013)].

  10. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

Due to the dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indexes are obtained by both the electromagnetic method and the radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider the dependent scattering effects. The results show that for densely packed discrete random media composed of medium size parameter particles (equal to 6.964 in this study), the demarcation line between independent and dependent scattering has remarkable connections with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive index contrasts between the particles and host medium and higher particle absorption indexes are more likely to show stronger dependent characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.

  11. Correction of motion artefacts and pseudo colour visualization of multispectral light scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula

    2009-10-01

    State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
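The pseudo-colour overlay of co-registered wavelength images amounts to per-channel normalization followed by stacking into RGB. A minimal sketch, assuming motion correction and registration have already been applied (names are hypothetical):

```python
import numpy as np

def pseudo_colour_overlay(im_r, im_g, im_b):
    """Stack three co-registered (motion-corrected) wavelength images into
    the R, G, B channels after normalizing each to [0, 1]."""
    def norm(a):
        a = a.astype(float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    return np.dstack([norm(im_r), norm(im_g), norm(im_b)])
```

With the channels normalized independently, relative intensity differences between wavelengths appear as colour shifts, which is what makes the overlay diagnostically useful.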

  12. Correction of motion artefacts and pseudo colour visualization of multispectral light scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula

    2010-02-01

    State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.

  13. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.

  14. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  15. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads set apart about ~1 cm from each other and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images were acquired as the phantom was rotated 1 degree per projection view and the lead-bead array was shifted vertically from one projection view to the next. A series of lead bars were also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment has demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and largely reduced cupping artifacts.
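The SSM interpolation step can be illustrated by fitting a smooth surface to the sampled scatter values and subtracting it from the restored projection. The quadratic-surface fit below is a stand-in for whatever interpolator the authors used (scatter varies slowly across the detector, so a low-order surface is a reasonable model); all names are hypothetical:

```python
import numpy as np

def estimate_scatter_surface(sample_rows, sample_cols, sample_vals, shape):
    """Fit a smooth 2D quadratic surface to the sampled scatter values
    (measured in the lead bead/bar shadows), then evaluate it on the
    full detector grid."""
    r = np.asarray(sample_rows, float)
    c = np.asarray(sample_cols, float)
    A = np.column_stack([np.ones_like(r), r, c, r * c, r**2, c**2])
    coef, *_ = np.linalg.lstsq(A, np.asarray(sample_vals, float), rcond=None)
    R, C = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    return (coef[0] + coef[1] * R + coef[2] * C + coef[3] * R * C
            + coef[4] * R**2 + coef[5] * C**2)

def ssm_correct(restored_projection, scatter_estimate):
    """Subtract the interpolated scatter from the beam-block-free
    (restored) projection, clipping negative residuals."""
    return np.clip(restored_projection - scatter_estimate, 0, None)
```

Subtracting the estimated surface removes the low-frequency scatter offset that otherwise depresses reconstructed CT numbers and produces the cupping artifact.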

  16. Simulating the influence of scatter and beam hardening in dimensional computed tomography

    NASA Astrophysics Data System (ADS)

    Lifton, J. J.; Carmignato, S.

    2017-10-01

    Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
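A common remedy for the beam hardening studied here is linearisation: calibrating a polynomial that maps measured polychromatic log-attenuation back to a value proportional to traversed thickness. A hedged numpy sketch, not taken from the paper (which simulates the influences rather than correcting them); the calibration data and function names are assumptions:

```python
import numpy as np

def fit_linearisation(measured_p, known_thickness, order=2):
    """Calibrate a polynomial p -> thickness from step-wedge-style data
    of known material thicknesses."""
    return np.polyfit(measured_p, known_thickness, order)

def correct_beam_hardening(p, coeffs):
    """Apply the calibrated mapping to measured log-attenuation values."""
    return np.polyval(coeffs, p)
```

After linearisation the projection values grow linearly with thickness again, which removes the cupping that biases ISO50-style surface determination.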

  17. Soft-photon emission effects and radiative corrections for electromagnetic processes at very high energies

    NASA Technical Reports Server (NTRS)

    Gould, R. J.

    1979-01-01

    Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.

  18. Correction of autofluorescence intensity for epithelial scattering by optical coherence tomography: a phantom study

    NASA Astrophysics Data System (ADS)

    Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.

    2013-03-01

In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal in terms of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions due to confounding factors such as increased epithelial thickness and inflammation.

  19. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. Image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
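The model-based idea of convolving an LV profile with the system point spread function can be reduced to a 1D recovery-coefficient sketch: blur a wall profile of known thickness with a Gaussian PSF, read the recovered fraction at the wall centre, and divide measured activity by it. This is an illustrative simplification of the paper's 3D five-parameter model, with all names hypothetical:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel."""
    radius = radius or int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def recovery_coefficient(wall_thickness_px, sigma_px):
    """Fraction of true myocardial activity recovered at the wall centre
    after blurring a unit-activity 1D wall profile with a Gaussian PSF."""
    profile = np.zeros(200)
    c, h = 100, wall_thickness_px // 2
    profile[c - h:c + h + 1] = 1.0  # wall with zero background/blood
    blurred = np.convolve(profile, gaussian_kernel(sigma_px), mode='same')
    return blurred[c]

def pv_correct(measured_activity, rc):
    """Divide out the partial-volume loss."""
    return measured_activity / rc
```

As expected, the recovered fraction rises toward 1 as the wall thickens relative to the PSF width, which is why PV losses dominate in mouse hearts.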

  20. A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao

    2018-05-01

This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first marks a carrier solution with a certain radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and a γ-photon detector ring is used for recording them. Finally, two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme, and the 2D image slices are merged into a 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single-scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of a gas-liquid multi-phase mixture inside the inner cavity. The two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.

  1. 3D Tomographic SAR Imaging in Densely Vegetated Mountainous Rural Areas in China and Sweden

    NASA Astrophysics Data System (ADS)

Feng, L.; Muller, J. P.

    2017-12-01

3D SAR Tomography (TomoSAR) and 4D SAR Differential Tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to create an important new innovation of SAR Interferometry, to unscramble complex scenes with multiple scatterers mapped into the same SAR cell. In addition to 3-D shape reconstruction and deformation solutions in complex urban/infrastructure areas, and recent cryospheric ice investigations, emerging tomographic remote sensing applications include forest applications, e.g. tree height and biomass estimation, sub-canopy topographic mapping, and even search, rescue and surveillance. However, these scenes are characterized by temporal decorrelation of scatterers, orbital, tropospheric and ionospheric phase distortion, and an open issue regarding possible height blurring and accuracy losses for TomoSAR applications, particularly in densely vegetated mountainous rural areas. Thus, it is important to develop solutions for temporal decorrelation and for orbital, tropospheric and ionospheric phase distortion. We report here on 3D imaging (especially in vertical layers) over densely vegetated mountainous rural areas using 3-D SAR imaging (SAR tomography) derived from data stacks of X-band COSMO-SkyMed Spotlight and L-band ALOS-1 PALSAR data over Dujiangyan Dam, Sichuan, China, and L- and P-band airborne SAR data (BioSAR 2008 - ESA) in the Krycklan river catchment, Northern Sweden. The new TanDEM-X 12 m DEM is used to assist co-registration of all the data stacks over China first. Then, atmospheric correction is assessed using weather model data such as ERA-I, MERRA, MERRA-2 and WRF; linear phase-topography correction and MODIS spectrometer correction will be compared, and ionospheric correction methods are discussed to remove tropospheric and ionospheric delay.
Then the new TomoSAR method with the TanDEM-X 12 m DEM is described to obtain the number of scatterers inside each pixel, the scattering amplitude and phase of each scatterer, and finally to extract tomograms (imaging), their 3D positions and motion parameters (deformation). A progress report will be shown on these different aspects. This work is partially supported by the CSC and UCL MAPS Dean prize through a PhD studentship at UCL-MSSL.

  2. Scatter and cross-talk correction for one-day acquisition of 123I-BMIPP and 99mtc-tetrofosmin myocardial SPECT.

    PubMed

    Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo

    2004-12-01

    123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p < 0.05 was considered significant. The percentage of scatter and cross-talk relative to the corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p < 0.001). 
Scatter and cross-talk correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin SPECT.
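The correction described in this record is a projection-space subtraction of the pre-injection acquisition from the TET acquisition. A minimal sketch with hypothetical count arrays (assuming matched acquisition times and geometry, which the one-day protocol provides):

```python
import numpy as np

def correct_tet_projections(tet, crosstalk):
    """Subtract the pre-injection scatter/cross-talk acquisition from the
    TET projections, clipping negatives introduced by counting noise."""
    corrected = np.clip(tet - crosstalk, 0.0, None)
    # percentage of scatter + cross-talk relative to the corrected total count
    pct = 100.0 * crosstalk.sum() / corrected.sum()
    return corrected, pct

# hypothetical 2x2 projection snippets (counts)
tet = np.array([[120.0, 90.0], [80.0, 60.0]])
crosstalk = np.array([[20.0, 15.0], [10.0, 5.0]])
corrected, pct = correct_tet_projections(tet, crosstalk)
```

Here `pct` corresponds to the "percentage of scatter and cross-talk relative to the corrected total count" reported in the abstract.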

  3. Inductively coupled plasma atomic fluorescence spectrometric determination of cadmium, copper, iron, lead, manganese and zinc

    USGS Publications Warehouse

    Sanzolone, R.F.

    1986-01-01

An inductively coupled plasma atomic fluorescence spectrometric method is described for the determination of six elements in a variety of geological materials. Sixteen reference materials are analysed by this technique to demonstrate its use in geochemical exploration. Samples are decomposed with nitric, hydrofluoric and hydrochloric acids, and the residue dissolved in hydrochloric acid and diluted to volume. The elements are determined in two groups based on compatibility of instrument operating conditions and consideration of crustal abundance levels. Cadmium, Cu, Pb and Zn are determined as a group in the 50-ml sample solution under one set of instrument conditions with the use of scatter correction. Limitations of the scatter correction technique used with the fluorescence instrument are discussed. Iron and Mn are determined together using another set of instrumental conditions on a 1:50 dilution of the sample solution without the use of scatter correction. The ranges of concentration (μg g-1) of these elements in the sample that can be determined are: Cd, 0.3-500; Cu, 0.4-500; Fe, 85-250 000; Mn, 45-100 000; Pb, 5-10 000; and Zn, 0.4-300. The precision of the method is usually less than 5% relative standard deviation (RSD) over a wide concentration range, and acceptable accuracy is shown by the agreement between the values obtained and those recommended for the reference materials.

  4. Operational atmospheric correction of AVHRR visible and infrared data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermote, E.; El Saleous, N.; Roger, J.C.

    1995-12-31

The satellite-level radiance is affected by the presence of the atmosphere between the sensor and the target. The ozone and water vapor absorption bands affect the signal recorded by the AVHRR visible and near-infrared channels, respectively. Rayleigh scattering mainly affects the visible channel and is more pronounced at small sun elevations and large view angles. Aerosol scattering affects both channels and is certainly the most challenging term for atmospheric correction because of the spatial and temporal variability of both the type and amount of particles in the atmosphere. This paper presents the equation of the satellite signal, the scheme to retrieve atmospheric properties, and the corrections applied to AVHRR observations. The operational process uses TOMS data and a digital elevation model to correct for ozone absorption and Rayleigh scattering. The water vapor content is evaluated using the split-window technique, which is validated over ocean using 1988 SSM/I data. The aerosol amount retrieval over ocean is achieved in channels 1 and 2 and compared to sun photometer observations to check the consistency of the radiative transfer model and the sensor calibration. Over land, the method developed uses reflectance at 3.75 microns to deduce target reflectance in channel 1 and retrieve aerosol optical thickness that can be extrapolated to channel 2. The method to invert the reflectance at 3.75 microns is based on MODTRAN simulations and is validated by comparison to measurements performed during FIFE 87. Finally, aerosol optical thickness retrieved over Brazil and the Eastern US is compared to sun photometer measurements.

  5. Limitations on near-surface correction for multicomponent offset VSP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macbeth, C.; Li, X.Y.; Horne, S.

    1994-12-31

Multicomponent data are degraded by near-surface scattering and non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves. They confuse analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs, consider their inherent mathematical constraints, and examine how they impact subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15°E in agreement with the layer stripping of Winterstein and Meadows (1991).
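The polar decompositions referred to here factor a measured transfer matrix into a rotation and a symmetric "stretch" part. A generic numpy sketch of the factorization via the SVD (not the authors' specific correction operators):

```python
import numpy as np

def polar_decompose(m):
    """Factor a square matrix as m = q @ p, with q orthogonal (the
    rotation part) and p symmetric positive semi-definite (the stretch
    part), computed from the SVD m = u @ diag(s) @ vt."""
    u, s, vt = np.linalg.svd(m)
    q = u @ vt
    p = vt.T @ np.diag(s) @ vt
    return q, p

# hypothetical 2x2 two-component transfer matrix
m = np.array([[2.0, 1.0], [0.5, 1.5]])
q, p = polar_decompose(m)
```

The rotation part `q` is what a polar-decomposition-based correction would interpret as the geometric (polarization) effect, with `p` absorbing amplitude distortions.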

  6. Energy-loss- and thickness-dependent contrast in atomic-scale electron energy-loss spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Haiyan; Zhu, Ye; Dwyer, Christian

    2014-12-31

Atomic-scale elemental maps of materials acquired by core-loss inelastic electron scattering often exhibit an undesirable sensitivity to the unavoidable elastic scattering, making the maps counter-intuitive to interpret. Here, we present a systematic study that scrutinizes the energy-loss and sample-thickness dependence of atomic-scale elemental maps acquired using 100 keV incident electrons in a scanning transmission electron microscope. For single-crystal silicon, the balance between elastic and inelastic scattering means that maps generated from the near-threshold Si-L signal (energy loss of 99 eV) show no discernible contrast for a thickness of 0.5λ (λ is the electron mean-free path, here approximately 110 nm). At greater thicknesses we observe a counter-intuitive “negative” contrast. Only at much higher energy losses is an intuitive “positive” contrast gradually restored. Our quantitative analysis shows that the energy loss at which a positive contrast is restored depends linearly on the sample thickness. This behavior is in very good agreement with our double-channeling inelastic scattering calculations. We test a recently proposed experimental method to correct the core-loss inelastic scattering and restore an intuitive “positive” chemical contrast. The method is demonstrated to be reliable over a large range of energy losses and sample thicknesses. The corrected contrast for near-threshold maps is demonstrated to be (desirably) inversely proportional to sample thickness. Finally, implications for the interpretation of atomic-scale elemental maps are discussed.

  7. Obtaining the Bidirectional Transfer Distribution Function of Isotropically Scattering Materials Using an Integrating Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonsson, Jacob C.; Branden, Henrik

    2006-10-19

This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge of the illuminated area of the sample and the geometry of the sphere port, combined with the measured data, yields a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape of the solution. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
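Tikhonov regularization stabilizes an ill-posed linear system by penalizing the solution norm. A minimal numpy sketch with a hypothetical, nearly singular system matrix (not the sphere-geometry kernel used in the paper):

```python
import numpy as np

def tikhonov_solve(a, b, lam):
    """Minimize ||a @ x - b||^2 + lam * ||x||^2; the normal equations
    give the closed form x = (a.T a + lam I)^(-1) a.T b."""
    n = a.shape[1]
    return np.linalg.solve(a.T @ a + lam * np.eye(n), a.T @ b)

# toy ill-posed system: the two rows are nearly linearly dependent
a = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = tikhonov_solve(a, b, 1e-8)
```

The regularization parameter `lam` trades fidelity to the data against smoothness of the solution; in the paper it is what suppresses the unphysical oscillations.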

  8. A New Method for Atmospheric Correction of MRO/CRISM Data.

    NASA Astrophysics Data System (ADS)

    Noe Dobrea, Eldar Z.; Dressing, C.; Wolff, M. J.

    2009-09-01

The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) collects hyperspectral images from 0.362 to 3.92 μm at 6.55 nanometers/channel, and at a spatial resolution of 20 m/pixel. The 1-2.6 μm spectral range is often used to identify and map the distribution of hydrous minerals using mineralogically diagnostic bands at 1.4 μm and 1.9 μm and in the 2-2.5 μm region. Atmospheric correction of the 2-μm CO2 band typically employs the same methodology applied to OMEGA data (Mustard et al., Nature 454, 2008): an atmospheric opacity spectrum, obtained from the ratio of spectra from the base to spectra from the peak of Olympus Mons, is rescaled for each spectrum in the observation to fit the 2-μm CO2 band, and is subsequently used to correct the data. Three important aspects are not considered in this correction: 1) absorptions due to water vapor are improperly accounted for, 2) the band-center of each channel shifts slightly with time, and 3) multiple scattering due to atmospheric aerosols is not considered. The second issue results in misregistration of the sharp CO2 features in the 2-μm triplet, and hence poor atmospheric correction. This leads to the necessity of ratioing all spectra against the spectrum of a spectrally "bland" region in each observation in order to distinguish features at 1.9 μm. Here, we present an improved atmospheric correction method, which uses emission phase function (EPF) observations to correct for molecular opacity, and a discrete ordinate radiative transfer algorithm (DISORT - Stamnes et al., Appl. Opt. 27, 1988) to correct for the effects of multiple scattering. This method results in a significant improvement in the correction of the 2-μm CO2 band, allowing us to forgo the use of spectral ratios, which affect the spectral shape and preclude the derivation of reflectance values in the data.
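The conventional "volcano-scan" correction that this record improves upon rescales a reference transmission spectrum to match the observed 2-μm band depth and divides it out. A toy numpy sketch of that rescaling (synthetic spectra, continuum normalized to 1; the paper's EPF/DISORT approach is much more complete):

```python
import numpy as np

def scale_opacity(spectrum, t_ref, band):
    """Fit the exponent alpha such that t_ref**alpha matches the
    observed band depth (Beer-Lambert scaling, least squares in log
    space over the CO2-band channels), then divide it out."""
    alpha = np.sum(np.log(t_ref[band]) * np.log(spectrum[band])) \
            / np.sum(np.log(t_ref[band]) ** 2)
    return spectrum / t_ref ** alpha

# synthetic check: a flat surface seen through twice the reference opacity
t_ref = np.ones(10)
t_ref[4:7] = [0.8, 0.6, 0.8]   # hypothetical 2-um band channels
band = t_ref < 1.0
corrected = scale_opacity(t_ref ** 2, t_ref, band)
```

With the opacity doubled, the fit recovers alpha = 2 and the corrected spectrum is flat, which is exactly the behavior the rescaling relies on.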

  9. CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel

    2015-12-20

Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
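The IRF-recovery step can be illustrated very loosely by regularized frequency-domain deconvolution of a broadband signal convolved with a scattering response (cyclic spectroscopy itself works on the raw voltage data and is considerably more involved):

```python
import numpy as np

def recover_irf(observed, intrinsic, eps=1e-6):
    """Estimate the impulse response h from observed = intrinsic (*) h
    (circular convolution) by Wiener-style spectral division."""
    o = np.fft.rfft(observed)
    t = np.fft.rfft(intrinsic)
    h = o * np.conj(t) / (np.abs(t) ** 2 + eps)
    return np.fft.irfft(h, n=len(observed))

# toy example: a broadband (noise-like) intrinsic signal scattered into
# a direct arrival plus a delayed echo
n = 256
rng = np.random.default_rng(0)
intrinsic = rng.standard_normal(n)
true_irf = np.zeros(n)
true_irf[0], true_irf[20] = 1.0, 0.5
observed = np.fft.irfft(np.fft.rfft(intrinsic) * np.fft.rfft(true_irf), n=n)
irf = recover_irf(observed, intrinsic)
```

A noise-like intrinsic signal is physically apt here, since a pulsar's intrinsic emission is broadband amplitude-modulated noise.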

  10. Infrared weak corrections to strongly interacting gauge boson scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-04-15

    We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.

  11. Radiance and polarization of multiple scattered light from haze and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-08-01

The radiance and polarization of multiple scattered light is calculated from the Stokes vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well-known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
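The random choice of post-collision angles can be sketched for the simplest case, the Rayleigh phase function p(μ) ∝ 1 + μ², using rejection sampling (a Mie phase function would instead be tabulated and inverted numerically):

```python
import random

def sample_rayleigh_mu(rng):
    """Sample mu = cos(theta) from the Rayleigh phase function
    p(mu) ∝ 1 + mu^2 on [-1, 1] by rejection against a uniform
    envelope; the acceptance probability is (1 + mu^2) / 2 <= 1."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() <= (1.0 + mu * mu) / 2.0:
            return mu

rng = random.Random(42)
samples = [sample_rayleigh_mu(rng) for _ in range(20000)]
```

For this density E[μ] = 0 and E[μ²] = 2/5, which a large sample should reproduce.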

  12. ITERATIVE SCATTER CORRECTION FOR GRID-LESS BEDSIDE CHEST RADIOGRAPHY: PERFORMANCE FOR A CHEST PHANTOM.

    PubMed

    Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich

    2016-06-01

The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid. © The Author 2015. Published by Oxford University Press.
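The CIF used in this study is simply the ratio of disc contrast after scatter removal (grid or software) to the contrast in the unprocessed non-grid image. With hypothetical disc contrasts:

```python
def contrast_improvement_factor(c_corrected, c_uncorrected):
    """CIF: how much the disc contrast rises after scatter removal."""
    return c_corrected / c_uncorrected

# hypothetical contrasts for the three discs (lung, retrocardial, abdominal)
no_grid = [0.12, 0.08, 0.05]
with_grid = [0.21, 0.15, 0.11]
cifs = [contrast_improvement_factor(g, n) for g, n in zip(with_grid, no_grid)]
```

Comparing such per-disc CIFs for the grid images against those for software-corrected images is the quantitative comparison the abstract describes.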

  13. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon-scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work ke and ksc were estimated by Monte Carlo simulation. These correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
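In simplified form, the correction factors enter the free-air-chamber air-kerma evaluation multiplicatively, K = (Q/m)(W/e)·ke·ksc. A sketch with hypothetical charge and air-mass values (real primary standards apply several further factors, e.g. for air attenuation and humidity, omitted here):

```python
def air_kerma(charge_c, air_mass_kg, k_e=1.0704, k_sc=0.9982, w_over_e=33.97):
    """Simplified air-kerma evaluation for a free-air chamber:
    K = (Q / m) * (W/e) * ke * ksc, using the electron-loss and
    photon-scattering factors reported for FAC-IR-300 and
    W/e = 33.97 J/C, the mean energy per unit charge for dry air."""
    return charge_c / air_mass_kg * w_over_e * k_e * k_sc

kerma = air_kerma(2.0e-9, 1.2e-6)  # 2 nC collected in 1.2 mg of air (hypothetical)
```

Note that ke > 1 (it compensates charge lost from the collecting volume) while ksc < 1 (it removes in-scattered photons), consistent with the values quoted above.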

  14. Near resonant bubble acoustic cross-section corrections, including examples from oceanography, volcanology, and biomedical ultrasound.

    PubMed

    Ainslie, Michael A; Leighton, Timothy G

    2009-11-01

The scattering cross-section σ_s of a gas bubble of equilibrium radius R0 in liquid can be written in the form σ_s = 4πR0^2/[(ω1^2/ω^2 - 1)^2 + δ^2], where ω is the excitation frequency, ω1 is the resonance frequency, and δ is a frequency-dependent dimensionless damping coefficient. A persistent discrepancy in the frequency dependence of the contribution to δ from radiation damping, denoted δ_rad, is identified and resolved, as follows. Wildt's [Physics of Sound in the Sea (Washington, DC, 1946), Chap. 28] pioneering derivation predicts a linear dependence of δ_rad on frequency, a result which Medwin [Ultrasonics 15, 7-13 (1977)] reproduces using a different method. Weston [Underwater Acoustics, NATO Advanced Study Institute Series Vol. II, 55-88 (1967)], using ostensibly the same method as Wildt, predicts the opposite relationship, i.e., that δ_rad is inversely proportional to frequency. Weston's version of the derivation of the scattering cross-section is shown here to be the correct one, thus resolving the discrepancy. Further, a correction to Weston's model is derived that amounts to a shift in the resonance frequency. A new, corrected expression for the extinction cross-section is also derived. The magnitudes of the corrections are illustrated using examples from oceanography, volcanology, planetary acoustics, neutron spallation, and biomedical ultrasound. The corrections become significant when the bulk modulus of the gas is not negligible relative to that of the surrounding liquid.
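Written cleanly, the cross-section is σ_s = 4πR0² / [(ω1²/ω² - 1)² + δ²]. A quick numerical check that it peaks at resonance, using illustrative values (a 1 mm bubble resonating near 3.26 kHz per the Minnaert estimate; δ is treated as constant here although the abstract stresses its frequency dependence):

```python
import math

def scattering_cross_section(r0, omega, omega1, delta):
    """Bubble scattering cross-section
    sigma_s = 4*pi*R0^2 / ((omega1^2/omega^2 - 1)^2 + delta^2)."""
    return 4.0 * math.pi * r0 ** 2 / ((omega1 ** 2 / omega ** 2 - 1.0) ** 2
                                      + delta ** 2)

r0, omega1, delta = 1.0e-3, 2.0 * math.pi * 3.26e3, 0.1
at_res = scattering_cross_section(r0, omega1, omega1, delta)        # driven at omega1
off_res = scattering_cross_section(r0, 2.0 * omega1, omega1, delta)  # driven at 2*omega1
```

At resonance the bracket reduces to δ², so σ_s = 4πR0²/δ², which is far larger than the geometric cross-section when δ is small.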

  15. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify the momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss the accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate the impact of specific types of particle-physics uncertainties on prospects for model selection.

  18. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
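For contrast, the conventional constant-cross-section free-gas algorithm that such improvements refine samples a target velocity from a Maxwellian and accepts it with probability v_rel/(v_n + v_t), which is always at most 1 since |v_n - v_t| <= v_n + v_t. A sketch (illustrative units; the paper's improved algorithm instead samples the relative velocity directly):

```python
import math
import random

def sample_target_velocity(v_n, kT_over_M, rng):
    """Classic free-gas target-velocity sampling under the constant
    cross-section approximation: draw the target velocity from a
    Maxwellian (isotropic Gaussian components) and accept with
    probability v_rel / (v_n + v_t). The neutron is taken to move
    along +z without loss of generality."""
    sigma = math.sqrt(kT_over_M)
    while True:
        vt = [rng.gauss(0.0, sigma) for _ in range(3)]
        speed_t = math.sqrt(sum(c * c for c in vt))
        v_rel = math.sqrt(v_n * v_n + speed_t * speed_t - 2.0 * v_n * vt[2])
        if rng.random() * (v_n + speed_t) <= v_rel:
            return vt

rng = random.Random(1)
vt = sample_target_velocity(2200.0, 1.0e5, rng)  # hypothetical neutron speed and kT/M
```

The accepted density is proportional to v_rel times the Maxwellian, which is the weighting a velocity-averaged collision rate requires; the energy dependence of the cross section, which the paper's method handles, is ignored here.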

  19. [Discussion of scattering in THz time domain spectrum tests].

    PubMed

    Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han

    2014-06-01

Using THz-TDS to extract the absorption spectrum of a sample is an important branch of various THz applications. THz radiation scatters from sample particles, producing a pronounced baseline that increases with frequency in the absorption spectrum. This baseline degrades measurement accuracy because it obscures the height and pattern of the spectrum, so it should be removed to eliminate the effects of scattering. In the present paper, we investigated the causes of baselines, reviewed several scatter-mitigation methods, and summarized aspects for future research. In order to validate the correctness of these methods, we designed a series of experiments to compare the computational accuracy of molar concentration. The results indicated that the computational accuracy of molar concentration can be improved, which can serve as the basis for quantitative analysis in further research. Finally, drawing on comprehensive experimental results, we present further research directions on removing scattering effects from THz absorption spectra.
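One common mitigation is to fit a smooth, frequency-dependent baseline and subtract it before quantitation. A generic polynomial-baseline sketch (not one of the specific methods reviewed; real schemes fit only off-peak regions, which this toy version ignores):

```python
import numpy as np

def remove_baseline(freq, absorbance, deg=2):
    """Fit a low-order polynomial baseline across the spectrum and
    subtract it, leaving the (residual) absorption features."""
    coeffs = np.polyfit(freq, absorbance, deg)
    return absorbance - np.polyval(coeffs, freq)

# sanity check: a purely quadratic scattering baseline is removed exactly
freq = np.linspace(0.2, 2.0, 200)       # THz
baseline = 0.3 * freq ** 2              # hypothetical scatter baseline
corrected = remove_baseline(freq, baseline, deg=2)
```

In practice the polynomial degree and the fitted spectral windows must be chosen so that genuine absorption peaks are not absorbed into the baseline estimate.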

  20. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that with a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate for the coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice cloud.

  1. Accurate Modeling of Dark-Field Scattering Spectra of Plasmonic Nanostructures.

    PubMed

    Jiang, Liyong; Yin, Tingting; Dong, Zhaogang; Liao, Mingyi; Tan, Shawn J; Goh, Xiao Ming; Allioux, David; Hu, Hailong; Li, Xiangyin; Yang, Joel K W; Shen, Zexiang

    2015-10-27

Dark-field microscopy is a widely used tool for measuring the optical resonance of plasmonic nanostructures. However, current numerical methods simulate dark-field scattering spectra with plane-wave illumination, either at normal incidence or at an oblique angle from one direction. In actual experiments, light is focused onto the sample through an annular ring within a range of glancing angles. In this paper, we present a theoretical model capable of accurately simulating a dark-field light source with an annular ring. Simulations correctly reproduce a counterintuitive blue shift in the scattering spectra of gold nanodisks with diameters beyond 140 nm. We believe that our proposed simulation method can potentially be applied as a general tool for simulating the dark-field scattering spectra of plasmonic nanostructures, as well as other dielectric nanostructures with sizes beyond the quasi-static limit.

  2. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, G; Feng, Z; Yin, Y

    2016-06-15

Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and the cup artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided into six equal portions; the round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm was then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested in both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
We want to thank Dr. Lei Xing and Dr. Yong Yang in the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and China Postdoctoral Science Foundation (2015T80739, 2014M551949).
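The figures of merit quoted in this record can be computed as follows. This is a generic sketch; definitions of the cupping metric vary between papers, so the τcup convention below (relative edge-to-center drop in a uniform object) is one common choice, not necessarily the authors' exact formula:

```python
import numpy as np

def psnr(reference, image):
    """Peak signal-to-noise ratio in dB, relative to the reference peak."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

def cupping(center_value, edge_value):
    """tau_cup = (edge - center) / edge * 100%: the low-frequency dip in
    the middle of a uniform object that scatter induces."""
    return 100.0 * (edge_value - center_value) / edge_value

# toy check: a uniform 1-unit error against a flat reference of 100
ref = np.full((8, 8), 100.0)
img = ref + 1.0
db = psnr(ref, img)
```

For a reconstruction of a uniform phantom, `center_value` and `edge_value` would be mean intensities of central and peripheral regions of interest.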

  3. Experimental comparison between performance of the PM and LPM methods in computed radiography

    NASA Astrophysics Data System (ADS)

    Kermani, Aboutaleb; Feghhi, Seyed Amir Hossein; Rokrok, Behrouz

    2018-07-01

Scatter degrades image quality and reduces the information available for quantitative measurement when projections are created with ionizing radiation. A variety of methods has therefore been applied to reduce scatter and correct its undesirable effects. As new approaches, the ordinary and localized primary modulation methods have already been used individually, through experiments and simulations, in medical and industrial computed tomography, respectively. The aim of this study is to evaluate the capabilities and limitations of these methods in comparison with each other. To this end, ordinary primary modulation was implemented in computed radiography for the first time, and the potential of both methods was assessed for thickness measurement as well as for determining the scatter-to-primary signal ratio. The comparison results, based on experimental outputs obtained using aluminum specimens and continuous X-ray spectra, favor the localized primary modulation method because of its improved accuracy and higher performance, especially at the edges.

  4. Scatter characterization and correction for simultaneous multiple small-animal PET imaging.

    PubMed

    Prasad, Rameshwar; Zaidi, Habib

    2014-04-01

    The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand for PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this comes at the expense of deteriorated image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The scatter component modelled using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the largest share of scatter events, while the scatter contribution of the lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70%, 0.76%, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25% and 64% when imaging multiple subjects (three to five) of different sizes simultaneously, compared with imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both the water and air cold compartments in single and multiple imaging studies. The recovery coefficients for different body parts of the mouse and rat whole-body anatomical models were improved in multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component when imaging single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single-rat and simultaneous multiple-mouse and -rat imaging studies, whereas its impact is insignificant in single-mouse imaging.
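The two quantities central to the abstract above, the scatter fraction and the spill-over ratio, reduce to simple ratios. A minimal sketch (function names are mine; the NEMA-style SOR definition is an assumption, not quoted from the paper):

```python
def scatter_fraction(scattered, total):
    """Scatter fraction SF = S / (S + P): scattered coincidences over
    total (scattered + primary) coincidences, as a fraction."""
    return scattered / total

def spill_over_ratio(cold_mean, hot_mean):
    """Spill-over ratio: mean counts in a cold (air or water)
    compartment divided by mean counts in the hot background."""
    return cold_mean / hot_mean
```

A corrected image should drive the SOR of both cold compartments toward zero, which is the improvement the study reports.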

  5. Modeling ultrasonic transient scattering from biological tissues including their dispersive properties directly in the time domain.

    PubMed

    Norton, G V; Novarini, J C

    2007-06-01

    Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extremely simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal a convolution operator that takes into account the dispersive characteristics of the medium is introduced to the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, in this work, two-dimensional numerical modeling of scattering from an infinite cylinder with physical properties associated with biological tissue is calculated. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases, the use of the convolutional propagation operator leads to the correct solution for the scattered field.

  6. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses three free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber into the radiation quantities air kerma or air-kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm, and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
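The conversion described above, from collected charge to air kerma via a product of chamber-specific correction factors, can be sketched as follows. This is a simplified illustration, not the NIST procedure; the function name and dictionary layout are mine, and W/e = 33.97 J/C is the standard mean energy per ion pair in dry air:

```python
def air_kerma_gray(charge_C, air_mass_kg, corrections, w_over_e=33.97, g=0.0):
    """Illustrative free-air-chamber equation:
        K = (Q / m_air) * (W/e) / (1 - g) * product of correction factors.
    `corrections` maps factor names (diaphragm, scatter, electron loss,
    fluorescence, bremsstrahlung, ...) to their multiplicative values;
    g is the radiative-loss fraction, negligible at these energies."""
    k_product = 1.0
    for value in corrections.values():
        k_product *= value
    return (charge_C / air_mass_kg) * w_over_e / (1.0 - g) * k_product
```

Because the factors enter multiplicatively, each Monte Carlo-computed correction contributes its relative uncertainty directly to the air-kerma uncertainty budget.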

  7. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between the instruments onboard a geostationary orbit satellite and onboard a low Earth orbit satellite is very helpful to assess their calibration consistency. GOES-R was launched on November 19, 2016 and Himawari 8 was launched October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after November 29 when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison with appropriate reflectance level, accurate adjustment for band spectral coverage difference, reduction of impact from pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (latitude -29.0 South; longitude 139.8 East). Due to the difference in solar and view angles, two corrections are applied to have comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top of atmosphere reflectance. The scattering, especially Rayleigh scattering, should be removed allowing the ground reflectance to be derived. Secondly, the angle differences magnify the BRDF effect. The ground reflectance should be corrected to have comparable measurements. 
The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum modeling and BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47μm) shows good matching with VIIRS band M3 with difference of 0.15%. AHI band 5 (1.69μm) shows largest difference in comparison with VIIRS M10.

  8. Fringes in FTIR spectroscopy revisited: understanding and modelling fringes in infrared spectroscopy of thin films.

    PubMed

    Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim

    2015-06-21

    The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
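A toy version of the EMSC-based separation described above treats fringe removal as a least-squares fit of a chemical reference spectrum plus additive sinusoidal fringe terms. The single-frequency fringe model here is a deliberate simplification of the paper's physics-based model, and all names are mine:

```python
import numpy as np

def emsc_defringe(spectrum, reference, wavenumbers, fringe_freq):
    """Toy EMSC: model the measured spectrum as
        s = a + b * reference + c * sin(2*pi*f*nu) + d * cos(2*pi*f*nu),
    fit a, b, c, d by least squares, then keep only the chemical part."""
    sin_t = np.sin(2.0 * np.pi * fringe_freq * wavenumbers)
    cos_t = np.cos(2.0 * np.pi * fringe_freq * wavenumbers)
    design = np.column_stack([np.ones_like(wavenumbers), reference, sin_t, cos_t])
    (a, b, c, d), *_ = np.linalg.lstsq(design, spectrum, rcond=None)
    corrected = (spectrum - a - c * sin_t - d * cos_t) / b
    return corrected, (a, b, c, d)
```

Because the fringe terms are fitted and subtracted rather than filtered out, chemical absorption overlapping the fringe frequency is preserved, which is the advantage the authors claim for EMSC over filtering.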

  9. Experimental and theoretical electron-scattering cross-section data for dichloromethane

    NASA Astrophysics Data System (ADS)

    Krupa, K.; Lange, E.; Blanco, F.; Barbosa, A. S.; Pastega, D. F.; Sanchez, S. d'A.; Bettega, M. H. F.; García, G.; Limão-Vieira, P.; Ferreira da Silva, F.

    2018-04-01

    We report a combination of experimental and theoretical investigations of the elastic differential cross sections (DCSs) and integral cross sections for electron interactions with dichloromethane, CH2Cl2, over the incident electron energy range 7.0-30 eV. Elastic electron-scattering cross-section calculations have been performed within the framework of the Schwinger multichannel method implemented with pseudopotentials (SMCPP) and the independent-atom model with the screening-corrected additivity rule including interference-effects correction (IAM-SCAR+I). The present elastic DCSs agree reasonably well with the results of the IAM-SCAR+I calculations above 20 eV and with the SMC calculations below 30 eV. Although some discrepancies were found at 7 eV, the agreement between the two theoretical methodologies becomes remarkable as the electron-impact energy increases. Calculated elastic DCSs are also reported up to 10000 eV for scattering angles from 0° to 180°, together with total cross sections within the IAM-SCAR+I framework.

  10. An evaluation of atmospheric corrections to advanced very high resolution radiometer data

    USGS Publications Warehouse

    Meyer, David; Hood, Joy J.

    1993-01-01

    A data set compiled to analyze vegetation indices is used to evaluate the effect of atmospheric corrections on AVHRR measurements in the solar spectrum. Such corrections include cloud screening and "clear sky" corrections. We used the "clouds from AVHRR" (CLAVR) method for cloud detection and evaluated its performance over vegetated targets. Clear-sky corrections, designed to reduce the effects of molecular scattering and absorption due to ozone, water vapor, carbon dioxide, and molecular oxygen, were applied to data values determined to be cloud free. Generally, it was found that the screening and correction of the AVHRR data did not adversely affect the maximum-NDVI compositing process, while at the same time improving estimates of the land-surface radiances over a compositing period.
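The maximum-NDVI compositing mentioned above can be sketched in a few lines. This is a generic illustration, not the CLAVR code; the function names are mine:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from the red and
    near-infrared reflectances (AVHRR channels 1 and 2)."""
    return (nir - red) / (nir + red)

def max_ndvi_composite(red_stack, nir_stack):
    """Maximum-NDVI compositing over a period: for each pixel keep the
    daily observation with the highest NDVI, which tends to select the
    clearest, least atmosphere-affected view."""
    v = ndvi(red_stack, nir_stack)            # shape: (days, rows, cols)
    best_day = np.argmax(v, axis=0)
    return np.take_along_axis(v, best_day[None], axis=0)[0]
```

Because clouds and residual atmosphere depress NDVI, the maximum-value rule already rejects much contamination, which is why the study finds the extra screening and corrections do not disturb the composite.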

  11. Positron scattering from pyridine

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Babij, T. J.; Machacek, J. R.; Buckman, S. J.; Brunger, M. J.; White, R. D.; García, G.; Blanco, F.; Ellis-Gibbings, L.; Sullivan, J. P.

    2018-04-01

    We present a range of cross-section measurements for the low-energy scattering of positrons from pyridine, for incident positron energies below 20 eV, together with calculations of positron scattering from pyridine using the independent-atom model with the screening-corrected additivity rule including interference effects, with dipole rotational excitations accounted for using the Born approximation. Comparisons are made between the experimental measurements and the theoretical calculations. For the positronium-formation cross section, we also compare with results from a recent empirical model. In general, quite good agreement is seen between the calculations and measurements, although some discrepancies remain that may require further investigation. It is hoped that the present study will stimulate the development of ab initio theoretical methods applicable to this important scattering system.

  12. A Review of the Scattering-Parameter Extraction Method with Clarification of Ambiguity Issues in Relation to Metamaterial Homogenization

    NASA Astrophysics Data System (ADS)

    Arslanagic, S.; Hansen, T. V.; Mortensen, N. A.; Gregersen, A. H.; Sigmund, O.; Ziolkowski, R. W.; Breinbjerg, O.

    2013-04-01

    The scattering parameter extraction method of metamaterial homogenization is reviewed to show that the only ambiguity is the one related to the choice of the branch of the complex logarithmic function (or the complex inverse cosine function), whereas it has no ambiguity for the sign of the wave number and intrinsic impedance. While the method indeed yields two signs of the intrinsic impedance, and thus the wave number, the signs are dependent, and moreover, both sign combinations lead to the same permittivity and permeability, and are thus permissible. This observation is in distinct contrast to a number of statements in the literature where the correct sign of the intrinsic impedance and wave number, resulting from the scattering parameter method, is chosen by imposing additional physical requirements such as passivity. The scattering parameter method is reviewed through an investigation of a uniform plane wave normally incident on a planar slab in free-space, and the severity of the branch ambiguity is illustrated through simulations of a known metamaterial realization. Several approaches for proper branch selection are reviewed and their suitability to metamaterial samples is discussed.
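The extraction reviewed above can be sketched for the normal-incidence slab case. This is the standard Smith-type formula set under my own naming, with `branch` the integer index of the complex logarithm, which the abstract identifies as the one genuine ambiguity (note that flipping the sign of z flips the sign of n as well, leaving the permittivity n/z and permeability n*z unchanged):

```python
import cmath

def extract_n_z(S11, S21, k0, d, branch=0):
    """S-parameter homogenization for a slab of thickness d in free
    space: recover refractive index n and intrinsic impedance z, then
    permittivity eps = n/z and permeability mu = n*z."""
    z = cmath.sqrt(((1 + S11) ** 2 - S21 ** 2) / ((1 - S11) ** 2 - S21 ** 2))
    p = S21 / (1 - S11 * (z - 1) / (z + 1))      # p = exp(i*n*k0*d)
    n = (cmath.log(p) / 1j + 2 * cmath.pi * branch) / (k0 * d)
    return n, z, n / z, n * z
```

For electrically thin slabs the principal branch (branch = 0) suffices; for thicker samples the branch must be tracked across frequency, which is the severity the review illustrates.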

  13. Rayleigh Scattering.

    ERIC Educational Resources Information Center

    Young, Andrew T.

    1982-01-01

    The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)

  14. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
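The Klein-Nishina differential cross section at the heart of the CSK can be evaluated directly; the averaging over the relativistic Maxwellian, which is the paper's actual contribution, is not shown. A minimal sketch with CODATA constants, names mine:

```python
import math

R_E = 2.8179403262e-15    # classical electron radius, m
MEC2 = 0.51099895         # electron rest energy, MeV

def klein_nishina(energy_mev, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
    for an unpolarized photon of the given energy scattering through
    angle theta off a free electron at rest."""
    eps = 1.0 / (1.0 + (energy_mev / MEC2) * (1.0 - math.cos(theta)))  # E'/E
    return 0.5 * R_E ** 2 * eps ** 2 * (eps + 1.0 / eps - math.sin(theta) ** 2)
```

In the low-energy limit this reduces to the Thomson cross section; the CSK then averages this kernel over electron velocities, which the paper collapses to a single rapidly evaluable integral.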

  15. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  16. Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures

    NASA Astrophysics Data System (ADS)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2013-05-01

    An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, in this way removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-to-target distances.

  17. Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena

    2008-07-08

    We describe a method by which a single experiment can reveal both the association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, ΔS_ass).
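The decomposition step described above can be sketched with a plain SVD; the association-model search and per-oligomer curve reconstruction are not shown, and the names are mine:

```python
import numpy as np

def svd_basis(data, n_species):
    """SVD of a (q-points x concentrations) matrix whose columns are
    mass-weighted linear combinations of the oligomer scattering curves.
    The number of significant singular values estimates the number of
    oligomeric species; the leading left singular vectors form the basis
    from which each oligomer's curve is later reconstructed."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :n_species], s, Vt[:n_species]
```

With noise-free data the singular values beyond the true number of species collapse to machine precision, which is the rank signature the method exploits.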

  18. Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging

    NASA Astrophysics Data System (ADS)

    Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.

    The RITS code is a unique and powerful tool for whole-spectrum fitting analysis of Bragg-edge transmission data. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in crystallite-size values between the diffraction and Bragg-edge analyses. We found the reason to be a differing definition of the crystal structure factor; this affects the crystallite size because it is deduced from the primary extinction effect, which depends on the crystal structure factor. As a result of the algorithm change, crystallite sizes obtained by RITS approached those obtained by Rietveld analyses of diffraction data much more closely, from 155% to 110%. The second issue is correction for the effect of background neutrons scattered from the specimen. Through neutron-transport simulation studies, we found that the background consists of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background-correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it is recommended to reduce the background by improving the experimental conditions.

  19. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    Estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weights directly at all wavelengths in the multiple-scattering domain, even though the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR, with no residual errors for the selected aerosol models, and then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm with respect to errors in the retrieved water reflectance at the surface (or remote-sensing reflectance), we compared the SRAMS atmospheric correction with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. In the simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. In the in situ match-ups, the errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR bands.

  20. SU-C-209-03: Anti-Scatter Grid-Line Artifact Minimization for Removing the Grid Lines for Three Different Grids Used with a High Resolution CMOS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    Purpose: Demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids used with a high-resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high-resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates Stenosis/Aneurysm Artery Block 76-705) combined with a frontal head phantom serving as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first of a scatter-less flat field including the grid, and the second of the object with the grid, which may still transmit some scatter. Because scatter has a low-spatial-frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the scatter-free grid flat field. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact-minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in the final images for all three grids. The image contrast and CNR were compared before and after the correction, and also with those from an image of the object acquired without a grid. The corrected images showed an increase in CNR of approximately 28%, 33%, and 25% for the three grids, compared with the images acquired with no grid at all. Conclusion: Anti-scatter grid-artifact minimization works effectively irrespective of the specifications of the grid when used with a high-spatial-resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
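The iterative constant-scatter search described in the Method section can be sketched as a brute-force version. This is a simplified stand-in, not the authors' code; in particular, using the variance of the mean profile across the grid lines as the residual-artifact metric is my assumption:

```python
import numpy as np

def residual_grid_artifact(img):
    """Proxy for residual grid-line artifact: variance of the mean
    profile taken across the grid-line direction (a fully corrected
    flat object gives a flat profile, hence zero variance)."""
    return img.mean(axis=0).var()

def correct_grid_lines(obj_with_grid, grid_flat, scatter_candidates):
    """For each candidate constant scatter value, subtract it and divide
    by the scatter-free grid flat field; keep the candidate that leaves
    the smallest residual grid-line artifact."""
    best = min(scatter_candidates,
               key=lambda s: residual_grid_artifact((obj_with_grid - s) / grid_flat))
    return (obj_with_grid - best) / grid_flat, best
```

When the subtracted constant matches the true scatter level, the grid modulation divides out exactly, which is why the residual artifact is a usable objective for the search.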

  1. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter, and performing correction on transmit and receive, has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating the time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other uses eigenvalue decomposition of the receive cross-spectrum matrix, based on a receive-energy-maximizing criterion. Simulations of iterated aberration correction with a TDA filter were carried out to study its convergence properties. Aberration was generated by weak and strong human body-wall models, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even in the case of strong aberration.
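The first of the two estimation approaches, correlating each element signal with a reference, can be sketched as follows. This is a minimal illustration with whole-sample delays (practical implementations refine to sub-sample precision); the names are mine:

```python
import numpy as np

def estimate_delays(element_signals, reference):
    """Estimate each array element's arrival-time delay, in samples,
    as the lag that maximizes its cross-correlation with a reference
    signal (e.g., a beamsum or a neighboring element's signal)."""
    delays = []
    for sig in element_signals:
        xcorr = np.correlate(sig, reference, mode="full")
        delays.append(int(np.argmax(xcorr)) - (len(reference) - 1))
    return delays
```

Negating these delays and applying them on the next transmit is one iteration of the adaptive-imaging loop described above.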

  2. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to the binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster-size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster-size method shows the best performance, improving the identification of correct conformations in multiple docking experiments.

  3. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals.

    PubMed

    Ranasinghesagara, Janaka C; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O; Venugopalan, Vasan

    2017-04-17

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating-dipole approximation to compute the effects of scattering-induced distortions of the focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (a 2 μm diameter solid sphere, a 2 μm diameter myelin cylinder, and a 2 μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modify the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high-numerical-aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields are relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction.

  4. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals

    PubMed Central

    Ranasinghesagara, Janaka C.; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O.; Venugopalan, Vasan

    2017-01-01

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating-dipole approximation to compute the effects of scattering-induced distortions of the focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (a 2 μm diameter solid sphere, a 2 μm diameter myelin cylinder, and a 2 μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modify the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high-numerical-aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields are relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction. PMID:28437941

  5. Application of focused-beam flat-sample method to synchrotron powder X-ray diffraction with anomalous scattering effect

    NASA Astrophysics Data System (ADS)

    Tanaka, M.; Katsuya, Y.; Matsushita, Y.

    2013-03-01

    The focused-beam flat-sample method (FFM), a method for high-resolution, rapid synchrotron X-ray powder diffraction that combines beam-focusing optics, a flat sample and an area detector, was applied to diffraction experiments exploiting the anomalous scattering effect. The advantages of the FFM for anomalous diffraction are absorption correction without approximation, rapid data collection with an area detector, and good signal-to-noise data from the focusing optics. In X-ray diffraction experiments on CoFe2O4 and Fe3O4 (by FFM) using X-rays near the Fe K absorption edge, the anomalous scattering effect between Fe/Co or Fe2+/Fe3+ was clearly detected through changes in diffraction intensity. The change in observed diffraction intensity with incident X-ray energy was consistent with calculation. The FFM is expected to be a useful method for anomalous powder diffraction.

  6. Rietveld analysis using powder diffraction data with anomalous scattering effect obtained by focused beam flat sample method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, Masahiko, E-mail: masahiko@spring8.or.jp; Katsuya, Yoshio, E-mail: katsuya@spring8.or.jp; Sakata, Osami, E-mail: SAKATA.Osami@nims.go.jp

    2016-07-27

    The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction that combines beam-focusing optics, a flat powder sample and area detectors. The method is advantageous for X-ray diffraction experiments exploiting the anomalous scattering effect (anomalous diffraction) because of (1) absorption correction without approximation, (2) the high intensity of the focused incident beam and the high signal-to-noise ratio of the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe2O4 (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe through the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe2O4 crystal structure.
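    The cation-discrimination mechanism rests on the anomalous terms f' and f'' entering the structure factor near an absorption edge. A schematic one-reflection calculation is sketched below; all scattering-factor values, site positions and the third (oxygen) atom are toy numbers for illustration, not tabulated data or the paper's actual structure:

```python
import numpy as np

def structure_factor(atoms, hkl):
    """F(hkl) = sum_j (f0 + f' + i*f'') * exp(2*pi*i * hkl . r_j)."""
    F = 0j
    for f0, fp, fpp, r in atoms:
        F += (f0 + fp + 1j * fpp) * np.exp(2j * np.pi * np.dot(hkl, r))
    return F

# Near the Fe K edge, f' for Fe is strongly negative while Co's barely moves,
# so swapping Fe and Co between sites changes |F|^2 and hence the intensity.
fe_at_origin = [(26.0, -6.0, 0.5, (0.0, 0.0, 0.0)),
                (27.0, -1.0, 0.4, (0.5, 0.5, 0.5)),
                (8.0, 0.0, 0.0, (0.25, 0.25, 0.25))]   # O breaks the symmetry
co_at_origin = [(27.0, -1.0, 0.4, (0.0, 0.0, 0.0)),
                (26.0, -6.0, 0.5, (0.5, 0.5, 0.5)),
                (8.0, 0.0, 0.0, (0.25, 0.25, 0.25))]
hkl = np.array([1, 1, 1])
i_fe = abs(structure_factor(fe_at_origin, hkl)) ** 2
i_co = abs(structure_factor(co_at_origin, hkl)) ** 2
```

    Away from the edge the two site assignments would give nearly identical intensities; the anomalous terms break that degeneracy, which is what the Rietveld refinement exploits.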

  7. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    NASA Astrophysics Data System (ADS)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  8. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that, as particle size increases, the scattering angle becomes smaller and the location of the main peak of the scattered-energy distribution in laser diffraction instruments shifts to smaller angles. This principle forms the foundation of the laser diffraction method. However, it is not entirely correct for non-absorbing particles in certain size ranges, called anomalous size ranges. Here, we derive analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel results in an indetermination of the particle size distribution when it is measured by laser diffraction instruments in the anomalous size ranges. Using the singular-value decomposition method, we interpret the mechanism of this indetermination in detail and then validate its existence with inversion simulations.
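    The indetermination mechanism, near-linear dependence among scattering-kernel columns for sizes inside an anomalous range, can be mimicked with a synthetic kernel and its SVD. The kernel below is random, not a Mie calculation; it only illustrates how a near-degenerate pair of size bins produces a tiny singular value and hence an ill-posed inversion:

```python
import numpy as np

rng = np.random.default_rng(4)
n_angles, n_sizes = 40, 6
K = rng.random((n_angles, n_sizes))    # rows: detector angles, cols: size bins

# make two size bins nearly indistinguishable to the instrument, as happens
# for non-absorbing particles inside an anomalous size range
K[:, 3] = K[:, 2] + 1e-6 * rng.standard_normal(n_angles)

u, s, vt = np.linalg.svd(K, full_matrices=False)

# any size-distribution component along the near-null direction is invisible:
# K @ null_dir is almost zero, so inversion cannot determine that component
null_dir = vt[-1]
```

    The ratio s[0]/s[-1] (the condition number) quantifies how badly noise is amplified when the measured scattered-energy vector is inverted for the size distribution.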

  9. Rayleigh, Compton and K-shell radiative resonant Raman scattering in 83Bi for 88.034 keV γ-rays

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sharma, Veena; Mehta, D.; Singh, Nirmal

    2007-11-01

    The Rayleigh, Compton and K-shell radiative resonant Raman scattering cross-sections for 88.034 keV γ-rays have been measured for 83Bi (K-shell binding energy = 90.526 keV). The measurements were performed at a 130° scattering angle using a reflection-mode geometrical arrangement with the 109Cd radioisotope as the photon source and an LEGe detector. Computer simulations were performed to determine the distributions of the incident and emission angles, which were then used to evaluate the absorption corrections for the incident and emitted photons in the target. The measured cross-sections for Rayleigh scattering are compared with the modified form-factors (MFs) corrected for the anomalous-scattering factors (ASFs) and with S-matrix calculations; those for Compton scattering are compared with the Klein-Nishina cross-sections corrected for the non-relativistic Hartree-Fock incoherent scattering function S(x, Z). The ratios of the measured KL2, KL3, KM and KN2,3 radiative resonant Raman scattering cross-sections are found to be in general agreement with those of the corresponding measured fluorescence transition probabilities.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, P; Timmins, R; Wells, R G

    Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In111 (emission energies of 171 keV and 245 keV) and Tc99m (140 keV), quantification of Tc99m is degraded by cross talk from In111 photons that scatter and are detected at an energy corresponding to Tc99m. The triple-energy-window (TEW) method uses counts recorded in two narrow windows surrounding the Tc99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate by unscattered counts detected in the scatter windows: the contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated on a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc99m/In111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
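    The iterative loop described above can be sketched in a few lines. The window widths and the leakage fractions f_lower/f_upper below are illustrative placeholders, not values from the study:

```python
def tew_scatter(c_lower, c_upper, w_scatter=4.0, w_peak=28.0):
    """Trapezoidal triple-energy-window estimate of scatter in the peak window."""
    return 0.5 * (c_lower / w_scatter + c_upper / w_scatter) * w_peak

def iterative_tew(c_peak, c_lower, c_upper, f_lower=0.05, f_upper=0.05,
                  n_iter=10):
    """Iteratively strip unscattered leakage from the narrow scatter windows.

    f_lower, f_upper: assumed fractions of the scatter-corrected primary
    counts that appear (unscattered) in each narrow window.
    """
    s_low, s_up = float(c_lower), float(c_upper)
    for _ in range(n_iter):
        primary = max(c_peak - tew_scatter(s_low, s_up), 0.0)
        # subtract the estimated unscattered leakage from each scatter window
        s_low = max(c_lower - f_lower * primary, 0.0)
        s_up = max(c_upper - f_upper * primary, 0.0)
    return max(c_peak - tew_scatter(s_low, s_up), 0.0)
```

    With zero leakage fractions this reduces to standard TEW; positive fractions raise the recovered primary counts, which matches the direction of the bias correction reported above.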

  11. Noninvasive OCT imaging of the retinal morphology and microvasculature based on the combination of the phase and amplitude method

    NASA Astrophysics Data System (ADS)

    Qin, Lin; Fan, Shanhui; Zhou, Chuanqing

    2017-04-01

    To implement optical coherence tomography (OCT) angiography on a low-scanning-speed OCT system, we developed a joint phase and amplitude method that generates 3-D angiograms by analysing the frequency distribution of signals from non-moving and moving scatterers and dynamically separating the tissue and blood-flow signals with a high-pass filter. The approach first compensates for sample motion between adjacent A-lines. Then, using the corrected phase information, a histogram method determines the bulk non-moving tissue phase dynamically; this is taken as the cut-off frequency of a high-pass filter that separates the moving from the non-moving scatterers. The reconstructed image visualizes the moving (flowing) scatterers and, combined with the corrected phase information, enables volumetric flow mapping. Furthermore, retinal and choroidal blood vessels can be obtained simultaneously by separating each B-scan into retinal and choroidal parts with a simple segmentation algorithm along the RPE. After compensation of axial displacements between neighbouring images, the three-dimensional vasculature of ocular vessels is visualized. Experiments demonstrate the effectiveness of the proposed method for 3-D vasculature imaging of the human retina and choroid. The results reveal depth-resolved vasculatures in the retina and choroid, suggesting that our approach can be used for noninvasive, three-dimensional angiography with a low-speed clinical OCT system and has great potential for clinical application.
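    The frequency-domain separation of moving and static scatterers can be illustrated with a toy A-line stack. The signal model, noise level and fixed cutoff below are assumptions for illustration; the paper chooses the cutoff dynamically from a phase histogram:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_z = 64, 32                          # repeated A-lines x depth pixels
static = np.ones((n_t, n_z), dtype=complex)          # non-moving tissue
omega = 2.0 * np.pi * 0.25                 # Doppler frequency of moving blood
flow = 0.5 * np.exp(1j * omega * np.arange(n_t))[:, None] * np.ones(n_z)
noise = 0.01 * (rng.standard_normal((n_t, n_z))
                + 1j * rng.standard_normal((n_t, n_z)))
signal = static + flow + noise

# high-pass filter along slow time: zero the low-frequency (static/bulk) bins
spec = np.fft.fft(signal, axis=0)
cutoff = 3
spec[:cutoff] = 0.0
spec[n_t - cutoff + 1:] = 0.0
flow_only = np.fft.ifft(spec, axis=0)

angiogram = np.abs(flow_only).mean(axis=0)   # amplitude map of moving scatterers
```

    The static component lives in the DC bin and is removed, while the Doppler-shifted flow component survives the filter, so the angiogram recovers the moving-scatterer amplitude.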

  12. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  13. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to restore remote sensing images. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and dark channel, and the original clear image is recovered by Wiener filtering via the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail and improves the quality evaluation indexes.
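    The final restoration step, Wiener filtering with an estimated kernel, can be sketched as follows. The noise-to-signal constant k is a tuning assumption, and the demo kernel is a toy stand-in for an estimated APSF:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter: F_hat = conj(H) * G / (|H|^2 + k)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# demo: circularly blur a test image with a known kernel, then restore it
rng = np.random.default_rng(2)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0] = 0.6, 0.2, 0.2   # toy blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-8)
```

    In the blind setting the kernel itself must first be estimated (here via the sparse priors described above); with a wrong kernel the same filter amplifies artifacts instead of removing them.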

  14. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author
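    The dark-object style correction tested here amounts to subtracting a HAZE DN before the standard radiance-to-reflectance conversion. A minimal sketch, in which all numeric demo values are placeholders rather than actual TM band calibration constants:

```python
import math

def dn_to_reflectance(dn, haze_dn, gain, esun, sun_elev_deg, d_au=1.0):
    """Convert an image DN to surface reflectance after subtracting a HAZE DN.

    gain: at-sensor radiance per DN; esun: exoatmospheric solar irradiance;
    d_au: Earth-Sun distance in astronomical units.
    """
    radiance = gain * max(dn - haze_dn, 0.0)     # haze removed in DN space
    cos_sun = math.cos(math.radians(90.0 - sun_elev_deg))
    return math.pi * radiance * d_au ** 2 / (esun * cos_sun)
```

    A pixel at the HAZE DN maps to zero reflectance, which is the defining property of the dark-object subtraction: the darkest objects in the scene are assumed to owe their signal entirely to atmospheric scattering.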

  15. WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Z; Swanson, T; O’Connor, M

    2015-06-15

    Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who had just undergone whole-body PET/CT exams were imaged prone with the breast pendent at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0%, respectively. System sensitivity was 2.3% on axis, and 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm, on axis and at 5.8 cm off axis, respectively. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder-to-background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter), each with an activity ratio of 4:1, showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with the 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.

  16. Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches

    NASA Astrophysics Data System (ADS)

    Hooper, A.

    2006-12-01

    InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and to reduce error terms present in single interferograms. There are currently two broad categories of multi-image methods: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels dominated by a single scatterer best meet these criteria; therefore, images are processed at full resolution both to increase the chance that only one dominant scatterer is present and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is suboptimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that forms interferograms only from pairs of images with small look-angle differences makes better sense, and resolution can be sacrificed, via band-pass filtering, to reduce the effects of the look-angle difference. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach, however, fails to take advantage of two benefits of processing multiple acquisitions that are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being time. We have therefore developed a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found.
After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.
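    Selecting pixels that "behave coherently in time" can be illustrated with a temporal-coherence statistic computed on residual phases. The signal model and the 0.7 threshold below are assumptions for illustration, not the StaMPS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ifg, n_pix = 50, 200

# residual phases after removing modeled terms: coherent pixels carry small
# noise around zero; incoherent pixels are uniformly random on (-pi, pi]
coherent = rng.normal(0.0, 0.1, (n_ifg, n_pix // 2))
incoherent = rng.uniform(-np.pi, np.pi, (n_ifg, n_pix // 2))
phases = np.hstack([coherent, incoherent])

# temporal coherence: magnitude of the mean phasor over all interferograms;
# near 1 for stable pixels, near 1/sqrt(n_ifg) for random ones
gamma = np.abs(np.exp(1j * phases).mean(axis=0))
selected = gamma > 0.7
```

    Because the statistic is computed per pixel, an isolated stable pixel surrounded by noisy neighbours is still found, which is the property the new method borrows from persistent scatterer processing.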

  17. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated steps in classical methods of translational motion compensation for Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the two reference points used for envelope alignment and Phase Difference (PD) estimation are generally not the same point, making it difficult to decouple the coupling term when correcting Migration Through Resolution Cells (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, using the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a scattering point is chosen. Envelope alignment and phase compensation are then conducted using the selected scattering point as the common reference. The keystone transform is then applied to further improve imaging quality. Both simulation experiments and real-data processing demonstrate the performance of the proposed method compared with the classical method.
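    Integer-shift envelope alignment against a common reference profile, the first half of the joint compensation, can be sketched as below. This is a simplified stand-in for the PPP-based approach (circular correlation, integer shifts only, toy Gaussian pulses):

```python
import numpy as np

def align_envelopes(profiles, ref_idx=0):
    """Circularly align each complex range profile to a reference profile by
    maximizing the cross-correlation of the envelopes (integer shifts only)."""
    ref_spec = np.fft.fft(np.abs(profiles[ref_idx]))
    aligned = np.empty_like(profiles)
    for i, p in enumerate(profiles):
        # correlation peak gives the shift that best overlays the envelopes
        xc = np.fft.ifft(ref_spec * np.conj(np.fft.fft(np.abs(p)))).real
        aligned[i] = np.roll(p, int(np.argmax(xc)))
    return aligned

# demo: misalign copies of a pulse, then realign them to the first profile
n = 64
base = np.exp(-0.5 * ((np.arange(n) - 20.0) / 2.0) ** 2).astype(complex)
profiles = np.array([np.roll(base, s) for s in (0, 5, -7)])
aligned = align_envelopes(profiles)
```

    Because the complex profiles (not just their envelopes) are shifted, the phase history at the reference scatterer is preserved for the subsequent phase-compensation step.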

  18. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions are derived for predicting the received radiant power, a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law. Numerical algorithms and results relating to multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
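    As a baseline for the corrections analyzed in this record, the Beer-Lambert transmission and a common effective-extinction heuristic for an open (wide field-of-view) detector can be written down. The acceptance fraction eta is a toy parameter, not the paper's transfer-equation solution:

```python
import math

def beer_lambert(tau):
    """Unscattered (Beer-Lambert) transmission through optical depth tau."""
    return math.exp(-tau)

def open_detector_transmission(tau, eta=0.3):
    """Heuristic stand-in for the multiple-scattering correction: if a
    fraction eta of scattered photons remains within an open detector's
    field of view, the effective optical depth is reduced to (1 - eta)*tau,
    so the received power exceeds the Beer-Lambert prediction."""
    return math.exp(-(1.0 - eta) * tau)
```

    The rigorous small-angle solution in the paper replaces the constant eta with an angle-resolved calculation, but the qualitative effect is the same: an open detector always reads a higher transmission than Beer-Lambert predicts.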

  19. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to serve as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.
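    A standard statement of the bilinear Schwinger variational functional for the transition amplitude, which makes the role of the Green's function explicit, is (general scattering notation; this is the textbook form, not transcribed from the chapter):

```latex
T_{fi}\left[\psi^{+},\psi^{-}\right]
  = \langle \phi_f \,|\, V \,|\, \psi^{+} \rangle
  + \langle \psi^{-} \,|\, V \,|\, \phi_i \rangle
  - \langle \psi^{-} \,|\, V - V G_0^{+} V \,|\, \psi^{+} \rangle
```

    The functional is stationary about the exact scattering states, and the outgoing-wave boundary condition is carried entirely by the free-particle Green's function G_0^{+}, which is why trial basis functions with arbitrary boundary conditions are admissible.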

  20. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they describe the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with either deterministic or stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of track-length estimation, potentially offering further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must be integrated over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques.
The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
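    At its core, tallying a scattering-moment entry means averaging Legendre polynomials of the sampled scattering cosine. A minimal analog-tally sketch with a linearly anisotropic toy scattering law (the coefficient a is an assumption, and this omits the pre-processed-distribution and track-length machinery described above):

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)
a = 0.6                 # linear anisotropy coefficient (toy value)
n = 400_000

# rejection-sample the scattering cosine mu from p(mu) = (1 + a*mu) / 2
mu = rng.uniform(-1.0, 1.0, n)
mu = mu[rng.uniform(0.0, 1.0, n) < (1.0 + a * mu) / (1.0 + a)]

def moment(l, mu):
    """Analog tally of <P_l(mu)> over the sampled scattering cosines."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legval(mu, c).mean()

# analytic moments of p(mu) are 1, a/3, 0 for l = 0, 1, 2
m = [moment(l, mu) for l in range(3)]
```

    The statistical noise in m is exactly the inefficiency the improved method attacks: by tallying the full known outgoing distribution instead of one sample per collision, each event contributes to every moment at once.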

  1. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on the look-up tables (LUT). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam where z-axis lies along the solar beam direction. In this case, the MSH solution for anisotropic part is nearly symmetric in azimuth, and is computed analytically. In scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for an analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205 - 247 (2010). 3. Budak, V.P., Korkin S.V. 
On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).

  2. On the Compton scattering redistribution function in plasma

    NASA Astrophysics Data System (ADS)

    Madej, J.; Różańska, A.; Majczyna, A.; Należyty, M.

    2017-08-01

    Compton scattering is the dominant opacity source in hot neutron stars, accretion discs around black holes and hot coronae. We collected here a set of numerical expressions of the Compton scattering redistribution functions (RFs) for unpolarized radiation, which are more exact than the widely used Kompaneets equation. The principal aim of this paper is the presentation of the RF by Guilbert, which is corrected for the computational errors in the original paper. This corrected RF was used in the series of papers on model atmosphere computations of hot neutron stars. We have also organized four existing algorithms for the RF computations into a unified form ready to use in radiative transfer and model atmosphere codes. The exact method by Nagirner & Poutanen was numerically compared to all other algorithms in a very wide spectral range from hard X-rays to radio waves. Sample computations of the Compton scattering RFs in thermal plasma were done for temperatures corresponding to the atmospheres of bursting neutron stars and hot intergalactic medium. Our formulae are also useful to study the Compton scattering of unpolarized microwave background radiation in hot intracluster gas and the Sunyaev-Zeldovich effect. We conclude that the formulae by Guilbert and the exact quantum mechanical formulae yield practically the same RFs for gas temperatures relevant to the atmospheres of X-ray bursting neutron stars, T ≤ 10^8 K.

  3. Monte Carlo generator ELRADGEN 2.0 for simulation of radiative events in elastic ep-scattering of polarized particles

    NASA Astrophysics Data System (ADS)

    Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.

    2012-07-01

    The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on Pentium(R) Dual-Core 2.00 GHz processor.

  4. Validation of a Multimodality Flow Phantom and Its Application for Assessment of Dynamic SPECT and PET Technologies.

    PubMed

    Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran

    2017-01-01

    Simple and robust techniques are lacking to assess performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and image analysis tool. We validate and demonstrate utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT and a cardiac dedicated SPECT (with and without attenuation and scatter corrections) systems. A two-compartment exchange model was fit to image derived time-activity curves to quantify flow rates. Flowmeter measured flow rates (20-300 mL/min) were set prior to imaging and were used as reference truth to which image derived flow rates were compared. Both PET cameras had excellent agreement with truth ( [Formula: see text]). High-end PET had no significant bias (p > 0.05) while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ( [Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
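    The wash-out analysis applied to the saturated SPECT data can be illustrated with a one-compartment simplification of the two-compartment exchange model. The rate constant, time grid and step input below are illustrative values, not the phantom protocol:

```python
import numpy as np

k_true = 0.8                 # exchange rate = flow / volume (toy value)
t = np.linspace(0.0, 10.0, 201)
cin = (t < 4.0).astype(float)          # tracer supplied for 4 time units
c = np.zeros_like(t)
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    # exact update of dC/dt = k*(Cin - C) over a step of constant input
    c[i] = cin[i - 1] + (c[i - 1] - cin[i - 1]) * np.exp(-k_true * dt)

# wash-out phase: C(t) decays as exp(-k*t), so a log-linear fit on the late
# (non-saturated) frames recovers the rate constant, as done for the SPECT data
mask = t > 4.5
slope, _ = np.polyfit(t[mask], np.log(c[mask]), 1)
k_est = -slope
```

    Restricting the fit to late time frames is exactly the trick used above to sidestep camera saturation: the wash-out rate is recoverable even when the early, high-count frames are unusable.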

  5. Inverse Compton Scattering in Mildly Relativistic Plasma

    NASA Technical Reports Server (NTRS)

    Molnar, S. M.; Birkinshaw, M.

    1998-01-01

    We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters (k_B T_e ≳ 15 keV) relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.

  6. Born iterative reconstruction using perturbed-phase field estimates

    PubMed Central

    Astheimer, Jeffrey P.; Waag, Robert C.

    2008-01-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873

  7. Polarized radiative transfer considering thermal emission in semitransparent media

    NASA Astrophysics Data System (ADS)

    Ben, Xun; Yi, Hong-Liang; Tan, He-Ping

    2014-09-01

    The characteristics of polarization must be considered for a complete and correct description of radiative transfer in a scattering medium. Observing and identifying the polarization characteristics of the thermal emission of a hot semitransparent medium is of major significance for analyzing the optical response of the medium at different temperatures. In this paper, a Monte Carlo method is developed for polarized radiative transfer in a semitransparent medium. There are mainly two mechanisms leading to polarization of light: specular reflection at the Fresnel boundary and scattering by particles. The determination of the scattering direction is the key to solving the polarized radiative transfer problem with the Monte Carlo method, and an optimized rejection method is used to calculate the scattering angles. The model also treats specular reflection, and in the process of tracing photons the Stokes vector must be normalized whenever scattering, reflection, or transmission occurs. The vector radiative transfer matrix (VRTM) is defined and solved using a Monte Carlo strategy, by which all four Stokes elements can be determined. Our results for Rayleigh scattering and Mie scattering compare well with published data, and the accuracy of the developed Monte Carlo method is shown to be good enough for the solution of vector radiative transfer. Polarization characteristics of thermal emission in a hot semitransparent medium are investigated; the results show that the U and V parameters of the Stokes vector are zero, that an obvious peak always appears in the Q curve but not in the I curve, and that the refractive index affects I and Q in completely different ways.
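
    The rejection sampling of scattering angles mentioned above can be illustrated generically; this sketch draws from the Rayleigh phase function with a crude constant envelope (it is not the authors' optimized rejection scheme):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_rayleigh_theta(n):
    """Rejection-sample polar scattering angles theta on [0, pi] from the
    Rayleigh phase function p(theta) proportional to (1 + cos^2 theta)*sin(theta)."""
    out = np.empty(0)
    while out.size < n:
        theta = rng.uniform(0.0, np.pi, n)
        # constant envelope 2.0 bounds (1 + cos^2)*sin (true max is about 1.09)
        accept = rng.uniform(0.0, 2.0, n) < (1.0 + np.cos(theta) ** 2) * np.sin(theta)
        out = np.concatenate([out, theta[accept]])
    return out[:n]

angles = sample_rayleigh_theta(100_000)
print(np.cos(angles).mean())   # ~0 by the symmetry of the phase function
```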

  8. [Experimental research of turbidity influence on water quality monitoring of COD in UV-visible spectroscopy].

    PubMed

    Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang

    2014-11-01

    Eliminating the influence of turbidity is a key technical problem in direct spectroscopic detection of COD. UV-visible spectroscopic detection of water quality parameters depends on an accurate and effective analytical model, and turbidity is an important parameter affecting that model. In this paper, formazine turbidity solutions and standard solutions of potassium hydrogen phthalate were selected to study the effect of turbidity on UV-visible absorption spectroscopic detection of COD. At the characteristic wavelengths of 245, 300, 360 and 560 nm, the variation of absorbance with turbidity was analyzed by least-squares curve fitting. The results show that in the ultraviolet range of 240 to 380 nm, because the particles causing turbidity form complexes with the organics, the effect of turbidity on the UV spectra of water is relatively complicated; in the visible region of 380 to 780 nm, the effect of turbidity on the spectrum weakens as the wavelength increases. On this basis, the multiplicative scatter correction method was studied for calibrating water-sample spectra affected by turbidity. After treatment, comparison with the spectra before correction shows that the baseline shifts caused by turbidity were effectively corrected while the spectral features in the ultraviolet region were preserved. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method improves the signal-to-noise ratio of spectroscopic COD detection and provides an efficient data-conditioning scheme for establishing accurate chemometric measurement methods.
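
    Multiplicative scatter correction, the method applied to the turbidity-affected spectra here, regresses each spectrum against a reference (commonly the mean spectrum) and removes the fitted offset and slope. A self-contained sketch with invented spectra (the wavelength grid and scatter coefficients are illustrative only):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: fit s ~= slope*ref + offset for each
    spectrum s, then return (s - offset) / slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected

# synthetic example: one underlying spectrum distorted by per-sample
# multiplicative/additive scatter effects
wav = np.linspace(400, 780, 200)
base = np.exp(-((wav - 560.0) / 60.0) ** 2)
spectra = np.array([1.3 * base + 0.2, 0.8 * base - 0.1, base])

corrected = msc(spectra)
# after correction all three rows collapse onto (nearly) the same curve
print(np.abs(corrected[0] - corrected[2]).max())
```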

  9. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of ¹⁸⁸Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize ¹⁸⁸Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in ¹⁸⁸Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for ¹⁸⁸Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial [Formula: see text]-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors  <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold.
However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors  >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that the TEW scatter correction applied to ¹⁸⁸Re, although practical, yields only approximate estimates of the true scatter.
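
    The triple-energy window (TEW) correction evaluated above estimates the scatter under the photopeak as a trapezoid spanned by two narrow windows flanking it. A sketch of the standard formula with hypothetical count and window values (not data from this study):

```python
def tew_primary(c_peak, c_low, c_high, w_peak, w_low, w_high):
    """Triple-energy-window scatter correction: estimate scatter in the
    photopeak window from the two flanking sub-windows, then subtract it."""
    scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
    return c_peak - scatter, scatter

# hypothetical: 10000 counts in a 31 keV photopeak window, 90 and 30 counts
# in 3 keV lower/upper sub-windows
primary, scatter = tew_primary(10000.0, 90.0, 30.0, 31.0, 3.0, 3.0)
print(primary, scatter)   # → 9380.0 620.0
```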

  10. Incorporation of a two metre long PET scanner in STIR

    NASA Astrophysics Data System (ADS)

    Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.

    2015-09-01

    The Explorer project aims to investigate the potential benefits of a total-body 2 metre long PET scanner. The following investigation incorporates this scanner into the STIR library and demonstrates the capabilities and weaknesses of the existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed, but at the expense of high memory and processing time demands: FBP requires 4 hours on one core; OSEM requires 2 hours per iteration when run in parallel on 15 cores of a high performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary if this scatter correction is the method of choice.

  11. Dead time corrections for in-beam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on ¹⁶O and ²⁸Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely eludes the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and those obtained using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
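
    For contrast with the statistical approach described above, the standard non-paralyzable (phenomenological) dead-time model it is compared against can be sketched in a few lines; the count rate and dead time below are illustrative, not values from the experiment:

```python
def deadtime_correct(observed_rate, tau):
    """Non-paralyzable dead-time model: an observed rate n with per-event
    dead time tau implies a true rate m = n / (1 - n*tau)."""
    loss_fraction = observed_rate * tau
    if loss_fraction >= 1.0:
        raise ValueError("observed rate inconsistent with non-paralyzable model")
    return observed_rate / (1.0 - loss_fraction)

# e.g. 50 kcps observed with 2 microsecond dead time -> 10% losses
m = deadtime_correct(50_000.0, 2e-6)
print(m)   # → 55555.55...
```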

  12. Direct perturbation theory for the dark soliton solution to the nonlinear Schrödinger equation with normal dispersion.

    PubMed

    Yu, Jia-Lu; Yang, Chun-Nuan; Cai, Hao; Huang, Nian-Ning

    2007-04-01

    After finding the basic solutions of the linearized nonlinear Schrödinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  13. SU-D-12A-07: Optimization of a Moving Blocker System for Cone-Beam Computed Tomography Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Yan, H; Jia, X

    2014-06-01

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved for a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
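
    The interpolation step above (estimating scatter in unblocked regions from samples under the strips) can be sketched in one dimension; the detector size, strip pitch, and scatter profile below are invented for illustration, and the paper's cubic B-spline interpolation operates on 2D projections:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# synthetic smooth scatter profile along one detector axis (arbitrary units)
u = np.arange(512)
scatter_true = 100.0 + 40.0 * np.exp(-((u - 256.0) / 150.0) ** 2)

# scatter is sampled only under the blocker strips; hypothetical strip
# centers roughly 50 pixels apart
centers = np.linspace(4, 508, 11).astype(int)
spline = CubicSpline(centers, scatter_true[centers])
scatter_est = spline(u)

rel_rms = np.sqrt(np.mean((scatter_est - scatter_true) ** 2)) / scatter_true.mean()
print(f"relative RMS interpolation error: {rel_rms:.5f}")
```

    Because scatter varies slowly across the detector, even widely spaced samples reconstruct it to well under 1% in this toy setup, which is the property the blocker design exploits.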

  14. Microwave analog experiments on optically soft spheroidal scatterers with weak electromagnetic signature

    NASA Astrophysics Data System (ADS)

    Saleh, H.; Charon, J.; Dauchet, J.; Tortel, H.; Geffrin, J.-M.

    2017-07-01

    Light scattering by optically soft particles is being theoretically investigated in many radiative studies. Interest is growing in developing approximate methods for cases where the resolution of Maxwell's equations is impractical due to time and/or memory constraints for objects of complex geometry. Experimental studies are important for assessing novel approximations when no reference solution is available. The microwave analogy represents an efficient way to perform such electromagnetic measurements under controlled conditions. In this paper, we take advantage of the particular features of our microwave device to present an extensive experimental study of the electromagnetic scattering by spheroidal particle analogs with low refractive indices, as a first step toward the assessment of micro-organisms with low refractive index and heterogeneities. The spheroidal analogs are machined from a low density material and mimic soft particles of interest to the light scattering community. The measurements are compared with simulations obtained with the Finite Element Method and the T-Matrix method. Good agreement is obtained even for a refractive index as low as 1.13. Scattered signals of low intensity are correctly measured and the position of the targets is precisely controlled. The forward scattering measurements show high sensitivity to noise and require careful extraction. The configuration of the measurement device reveals different technical requirements between the forward and backward scattering directions. The results open interesting perspectives on novel measurement procedures as well as on the use of prototyping technologies to manufacture analogs of precise refractive indices and shapes.

  15. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, X; Zhang, Z; Xie, Y

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement based algorithms using a beam blocker directly acquire the scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan, stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design result in primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction.
Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), the National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  16. A Q-Band Free-Space Characterization of Carbon Nanotube Composites

    PubMed Central

    Hassan, Ahmed M.; Garboczi, Edward J.

    2016-01-01

    We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959

  17. Reconstructive correction of aberrations in nuclear particle spectrographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berz, M.; Joh, K.; Nolen, J.A.

    A method is presented that allows the reconstruction of trajectories in particle spectrographs and the reconstructive correction of residual aberrations that otherwise limit the resolution. Using a computed or fitted high order transfer map that describes the uncorrected aberrations of the spectrograph, it is possible to calculate a map via an analytic recursion relation that allows the computation of the corrected data of interest such as reaction energy and scattering angle as well as the reconstructed trajectories in terms of position measurements in two planes near the focal plane. The technique is only limited by the accuracy of the position measurements, the incoherent spot sizes, and the accuracy of the transfer map. In practice the method can be expressed as an inversion of a nonlinear map and implemented in the differential algebraic framework. The method is applied to correct residual aberrations in the S800 spectrograph which is under construction at the National Superconducting Cyclotron Laboratory at Michigan State University and to two other high resolution spectrographs.

  18. Threshold e- p⟶ nνe scattering and the electron neutrino mass

    NASA Astrophysics Data System (ADS)

    Ciborowski, Jacek; Rembieliński, Jakub

    2017-12-01

    The most precise measurement of the electron neutrino mass has been obtained from the shape of the electron energy spectrum near the endpoint in tritium decay. The Mainz and Troitsk experiments indicated an excess instead of the expected depletion of counts in that region. Results derived from such measurements are subject to numerous atomic corrections which are absent in the scattering e⁻p ⟶ nνe. This new idea is presented in the article, with its advantages and difficulties, and is compared to the tritium decay method.

  19. Compact Polarimetry in a Low Frequency Spaceborne Context

    NASA Technical Reports Server (NTRS)

    Truong-Loi, M-L.; Freeman, A.; Dubois-Fernandez, P.; Pottier, E.

    2011-01-01

    Compact polarimetry has been shown to be an interesting alternative mode to full polarimetry when global coverage and revisit time are key issues. It consists of transmitting a single polarization while receiving on two. Several critical points have been identified, one being the Faraday rotation (FR) correction and the other the calibration. When a low frequency electromagnetic wave travels through the ionosphere, it undergoes a rotation of the polarization plane about the radar line of sight for a linearly polarized wave, and a simple phase shift for a circularly polarized wave. In a low frequency radar, the only possible choice of the transmit polarization is the circular one, in order to guarantee that the scattering element on the ground is illuminated with a constant polarization independent of the ionospheric state. This will allow meaningful time-series analysis and interferometry, as long as the Faraday rotation effect is corrected for on the return path. In full-polarimetric (FP) mode, two techniques allow the FR to be estimated: the Freeman method using linearly polarized data, and the Bickel and Bates theory based on the transformation of the measured scattering matrix to a circular basis. In CP mode, an alternative procedure is presented which relies on the scattering properties of bare surfaces. These bare surfaces are selected by the conformity coefficient, which is invariant under FR. This coefficient is compared to other published classifications to show its potential in distinguishing three different scattering types: surface, double-bounce and volume. The performance of the bare-surface selection and FR estimation is evaluated on PALSAR and airborne data. Once the bare surfaces are selected and the Faraday angle estimated over them, the correction can be applied over the whole scene. The algorithm is compared with both FP techniques.
In the last part of the paper, the calibration of a CP system is addressed from the point of view of classical matrix-transformation methods in polarimetry.

  20. DISSOLVED ORGANIC FLUOROPHORES IN SOUTHEASTERN US COASTAL WATERS: CORRECTION METHOD FOR ELIMINATING RAYLEIGH AND RAMAN SCATTERING PEAKS IN EXCITATION-EMISSION MATRICES

    EPA Science Inventory

    Fluorescence-based observations provide useful, sensitive information concerning the nature and distribution of colored dissolved organic matter (CDOM) in coastal and freshwater environments. The excitation-emission matrix (EEM) technique has become widely used for evaluating sou...

  1. Evaluation of various energy windows at different radionuclides for scatter and attenuation correction in nuclear medicine.

    PubMed

    Asgari, Afrouz; Ashoor, Mansour; Sohrabpour, Mostafa; Shokrani, Parvaneh; Rezaei, Ali

    2015-05-01

    Improving the signal-to-noise ratio (SNR) and image quality is very important for detecting abnormalities in body organs. Scatter and attenuation of photons by the organs lead to errors in radiopharmaceutical activity estimation as well as degradation of images. The choice of a suitable energy window and radionuclide plays a key role in nuclear medicine: the goal is the lowest scatter fraction together with a nearly constant linear attenuation coefficient as a function of phantom thickness. The symmetric window (SW), asymmetric window (ASW), high window (WH) and low window (WL) were compared for the Tc-99m and Sm-153 radionuclides using a solid-water slab phantom (RW3) and a Teflon bone phantom; Matlab software and the Monte Carlo N-Particle (MCNP4C) code were used to simulate these methods and to obtain the FWHM and full width at tenth maximum (FWTM) from line spread functions (LSFs). The experimental data were obtained with an Orbiter Scintron gamma camera. Both simulation and experiment showed that WH and ASW yield the lowest scatter fraction and a constant linear attenuation coefficient as a function of phantom thickness; WH and ASW were thus the optimal windows for nuclear medicine imaging of Tc-99m in the RW3 phantom and of Sm-153 in the Teflon bone phantom. Attenuation correction was performed for the WH and ASW optimal windows for these radionuclides using a filtered back projection algorithm. Simulated and experimental results agreed very well with each other and with theoretical values, with discrepancies nominally less than 7.07% for Tc-99m and less than 8.00% for Sm-153. Corrected counts were not affected by the thickness of the scattering material. The simulated line spread functions for Sm-153 and Tc-99m in the phantom, based on the four windows and the triple-energy-window (TEW) method, indicated that the FWHM and FWTM values were approximately the same for the TEW method as for WH and ASW, but the sensitivity at the optimal window was higher. Suitable determination of the energy window width on the energy spectrum can be useful in optimal design to improve efficiency and contrast. It is found that WH is preferred to ASW, and ASW is preferred to SW.
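
    The FWHM and FWTM figures of merit used above can be read off a sampled line spread function numerically. A sketch with a synthetic Gaussian LSF (grid and width arbitrary), for which the analytic widths are 2σ√(2 ln 2) and 2σ√(2 ln 10):

```python
import numpy as np

def width_at_fraction(x, lsf, frac):
    """Full width of a sampled line spread function at `frac` of its maximum,
    with linear interpolation of the two crossings."""
    level = frac * lsf.max()
    above = np.where(lsf >= level)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(level, [lsf[i0 - 1], lsf[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(level, [lsf[i1 + 1], lsf[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-50, 50, 2001)
sigma = 8.0
lsf = np.exp(-x**2 / (2.0 * sigma**2))

fwhm = width_at_fraction(x, lsf, 0.5)
fwtm = width_at_fraction(x, lsf, 0.1)
print(fwhm, fwtm)
```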

  2. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
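
    FLAASH's MODTRAN-based inversion is far more complete, but its core step, converting at-sensor radiance to surface reflectance, can be caricatured by the single-pixel Lambertian formula below, which ignores adjacency effects and spherical-albedo coupling; every radiometric value in the example is hypothetical:

```python
import numpy as np

def radiance_to_reflectance(L, L_path, E0, theta_s_deg, tau_up, tau_down, d=1.0):
    """Simplified Lambertian inversion:
        rho = pi*(L - L_path)*d^2 / (tau_up*tau_down*E0*cos(theta_s))
    L, L_path in W/(m^2 sr um); E0 in W/(m^2 um); d = Earth-Sun distance in AU."""
    mu_s = np.cos(np.radians(theta_s_deg))
    return np.pi * (L - L_path) * d**2 / (tau_up * tau_down * E0 * mu_s)

rho = radiance_to_reflectance(L=80.0, L_path=12.0, E0=1850.0,
                              theta_s_deg=30.0, tau_up=0.90, tau_down=0.85)
print(round(rho, 4))
```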

  3. SeaWinds Scatterometer Wind Vector Retrievals Within Hurricanes Using AMSR and NEXRAD to Perform Corrections for Precipitation Effects: Comparison of AMSR and NEXRAD Retrievals of Rain

    NASA Technical Reports Server (NTRS)

    Weissman, David E.; Hristova-Veleva, Svetla; Callahan, Philip

    2006-01-01

    The opportunity provided by satellite scatterometers to measure ocean surface winds in strong storms and hurricanes is diminished by the errors in the received backscatter (SIGMA-0) caused by the attenuation, scattering and surface roughening produced by heavy rain. Providing a good rain correction is a very challenging problem, particularly at Ku band (13.4 GHz) where rain effects are strong. Corrections to the scatterometer measurements of ocean surface winds can be pursued with either of two different methods: empirical or physical modeling. The latter method is employed in this study because of the availability of near-simultaneous and collocated measurements provided by the MIDORI-II suite of instruments. The AMSR was designed to measure atmospheric water-related parameters on a spatial scale comparable to the SeaWinds scatterometer. These quantities can be converted into volumetric attenuation and scattering at the Ku-band frequency of SeaWinds. Optimal estimates of the volume backscatter and attenuation require knowledge of the three dimensional distribution of reflectivity on a scale comparable to that of the precipitation. Study cases selected near the US coastline enable the much higher resolution NEXRAD reflectivity measurements to evaluate the AMSR estimates. We are also conducting research into the effects of different beam geometries and nonuniform beamfilling of precipitation within the field-of-view of the AMSR and the scatterometer. Furthermore, both AMSR and NEXRAD estimates of atmospheric correction can be used to produce corrected SIGMA-0s, which are then input to the JPL wind retrieval algorithm.
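
    As a much-simplified illustration of the attenuation part of such a physical correction (neglecting the additive rain volume backscatter that the full model also removes), a two-way path attenuation correction in dB is just:

```python
def correct_sigma0_dB(sigma0_meas_dB, one_way_attenuation_dB):
    """The radar signal crosses the rain layer twice (down and back up),
    so the one-way path attenuation is restored twice."""
    return sigma0_meas_dB + 2.0 * one_way_attenuation_dB

# hypothetical: -22 dB measured sigma-0 with 1.5 dB one-way rain attenuation
print(correct_sigma0_dB(-22.0, 1.5))   # → -19.0
```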

  4. [Rapid Identification of Epicarpium Citri Grandis via Infrared Spectroscopy and Fluorescence Spectrum Imaging Technology Combined with Neural Network].

    PubMed

    Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo

    2015-10-01

    To explore rapid, reliable methods for the detection of Epicarpium citri grandis (ECG), Fourier Transform Attenuated Total Reflection Infrared Spectroscopy (FTIR/ATR) and fluorescence spectrum imaging technology were each combined with Multilayer Perceptron (MLP) neural network pattern recognition for the identification of ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images of 118 samples, 81 ECG and 37 samples of other kinds, were collected. According to the differences in the spectra, the data in the 550-1 800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of discriminant analysis. Principal component analysis (PCA) was then applied to reduce the dimension of the spectroscopic data of ECG, and an MLP neural network was used in combination to classify them. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD), and Savitzky-Golay (SG) smoothing. The results showed that, after SG pretreatment of the infrared spectra, the MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG: the correct rates for both the training set and the testing set were 100%. For the fluorescence spectral imaging data, MSC was the most effective pretreatment; after preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved a 100% correct rate on the training set and 96.7% on the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging combined with an MLP neural network can be used for the identification of ECG and are rapid and reliable.
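    Multiplicative scatter correction, one of the preprocessing steps compared above, is straightforward to sketch: each spectrum is regressed against a reference spectrum (typically the set mean) and the fitted additive and multiplicative scatter terms are removed. A minimal NumPy version, not the authors' implementation:

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum x
    against a reference spectrum (x ~ m*ref + b), then return the
    corrected spectrum (x - b) / m.  Default reference is the mean
    spectrum of the input set."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        m, b = np.polyfit(ref, x, 1)   # slope, intercept
        corrected[i] = (x - b) / m
    return corrected
```

After MSC, spectra that differ from the reference only by baseline offset and scaling collapse onto the reference, which is what makes downstream PCA and classification less sensitive to scatter.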

  5. Interplay of threshold resummation and hadron mass corrections in deep inelastic processes

    DOE PAGES

    Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix

    2015-02-01

    We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering lN → l'X and semi-inclusive annihilation e+e− → hX processes, and provide a prescription for how to consistently combine these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken x B. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. Therefore, we conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken x B and fragmentation functions over the whole kinematic range.

  6. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

    Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot-scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  7. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. Correct accounting for the crystal lattice effects influences the estimated values of the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases, neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  8. Optical characterization of pancreatic normal and tumor tissues with double integrating sphere system

    NASA Astrophysics Data System (ADS)

    Kiris, Tugba; Akbulut, Saadet; Kiris, Aysenur; Gucin, Zuhal; Karatepe, Oguzhan; Bölükbasi Ates, Gamze; Tabakoǧlu, Haşim Özgür

    2015-03-01

    In order to develop minimally invasive, fast, and precise optical diagnostic and therapeutic methods in medicine, the first step is to examine how light propagates, scatters, and is transmitted through a medium. To identify appropriate wavelengths, the optical properties of tissues must be determined correctly. The aim of this study is to measure the optical properties of both cancerous and normal ex-vivo pancreatic tissues; the results are compared to detect how cancerous and normal tissues respond at different wavelengths. A double-integrating-sphere system and the inverse adding-doubling (IAD) computational method were used in the study. Absorption and reduced scattering coefficients of normal and cancerous pancreatic tissues were measured within the range of 500-650 nm. Statistically significant differences between cancerous and normal tissues were obtained at 550 nm and 630 nm for the absorption coefficients. On the other hand, no statistically significant difference was found for the scattering coefficients at any wavelength.

  9. Optical phase conjugation (OPC)-assisted isotropic focusing.

    PubMed

    Jang, Mooseok; Sentenac, Anne; Yang, Changhuei

    2013-04-08

    Isotropic optical focusing, the focusing of light with axial confinement that matches its lateral confinement, is important for a broad range of applications. Conventionally, such focusing is achieved by overlapping the focused beams from a pair of opposite-facing microscope objective lenses. However, the exacting requirements for the alignment of the objective lenses and the method's relative intolerance to sample turbidity have significantly limited its utility. In this paper, we present an optical phase conjugation (OPC)-assisted isotropic focusing method that can address both challenges. We exploit the time-reversal nature of OPC playback to naturally guarantee the overlap of the two focused beams even when the objective lenses are significantly misaligned (up to 140 microns transversely and 80 microns axially demonstrated). The scattering-correction capability of OPC also enabled us to accomplish isotropic focusing through thick scattering samples (demonstrated with samples of ~7 scattering mean free paths). This method can potentially improve 4Pi microscopy and 3D microstructure patterning.

  10. Application of modern radiative transfer tools to model laboratory quartz emissivity

    NASA Astrophysics Data System (ADS)

    Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.

    2005-08-01

    Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.

  11. Removal of atmospheric effects from satellite imagery of the oceans.

    PubMed

    Gordon, H R

    1978-05-15

    In attempting to observe the color of the ocean from satellites, it is necessary to remove the effects of atmospheric and sea surface scattering from the upward radiance at high altitude in order to observe only those photons which were backscattered out of the ocean and hence contain information about subsurface conditions. The observations that (1) the upward radiance from the unwanted photons can be divided into those resulting from Rayleigh scattering alone and those resulting from aerosol scattering alone, (2) the aerosol scattering phase function should be nearly independent of wavelength, and (3) the Rayleigh component can be computed without a knowledge of the sea surface roughness are combined to yield an algorithm for removing a large portion of this unwanted radiance from satellite imagery of the ocean. It is assumed that the ocean is totally absorbing in a band of wavelengths around 750 nm, and it is shown that application of the proposed algorithm to correct the radiance at a wavelength λ requires only the ratio of the aerosol optical thickness at λ to that at about 750 nm. The accuracy to which the correction can be made, as a function of the accuracy to which this ratio can be found, is examined in detail. A possible method of finding this ratio from satellite measurements alone is suggested.
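    The algorithm described above can be written compactly: the aerosol radiance at about 750 nm, where the ocean is assumed black, is extrapolated to shorter wavelengths by the optical-thickness ratio and subtracted along with the Rayleigh component. A minimal sketch with made-up radiance values (dictionary keys are wavelengths in nm; eps plays the role of the optical-thickness ratio):

```python
def correct_ocean_radiance(Lt, Lr, eps):
    """Gordon-style atmospheric correction sketch.  Lt and Lr map
    wavelength (nm) to total and Rayleigh radiance; eps maps
    wavelength to the ratio of aerosol optical thickness there to
    that at 750 nm.  Assumes a totally absorbing ("black") ocean
    near 750 nm, so the aerosol radiance at 750 nm is Lt - Lr."""
    La750 = Lt[750] - Lr[750]
    return {lam: Lt[lam] - Lr[lam] - eps[lam] * La750
            for lam in Lt if lam != 750}
```

The returned values approximate the water-leaving radiance, the quantity that actually carries subsurface information.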

  12. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    NASA Astrophysics Data System (ADS)

    Moteabbed, M.; Niroula, M.; Raue, B. A.; Weinstein, L. B.; Adikaram, D.; Arrington, J.; Brooks, W. K.; Lachniet, J.; Rimal, Dipak; Ungaro, M.; Afanasev, A.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Kim, A.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lewis, S.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Martinez, D.; Mayer, M.; McKinnon, B.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moriya, K.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S.; Strauch, S.; Tang, W.; Taylor, C. 
E.; Tian, Ye; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.

    2013-08-01

    Background: The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. Purpose: The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams have limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q2 and scattering angles. Methods: We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid-hydrogen target. The large-acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. Results: The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q2 and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. 
Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027±0.005±0.05 for ⟨Q2⟩ = 0.206 GeV2 and 0.830⩽ɛ⩽0.943. Conclusions: We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q2 and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.

  13. Fully relativistic form factor for Thomson scattering.

    PubMed

    Palastro, J P; Ross, J S; Pollock, B; Divol, L; Froula, D H; Glenzer, S H

    2010-03-01

    We derive a fully relativistic form factor for Thomson scattering in unmagnetized plasmas valid to all orders in the normalized electron velocity, β = v/c. The form factor is compared to a previously derived expression in which only the lowest-order electron velocity corrections in β are included [J. Sheffield (Academic Press, New York, 1975)]. The β expansion approach is sufficient for electrostatic waves with small phase velocities, such as ion-acoustic waves, but for electron-plasma waves the phase velocities can be near luminal. At high phase velocities, the electron motion acquires relativistic corrections including the effective electron mass, the relative motion of the electrons and the electromagnetic wave, and polarization rotation. These relativistic corrections alter the scattered emission of thermal plasma waves, which manifests as changes in both the peak power and the width of the observed Thomson-scattered spectra.

  14. Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.

    PubMed

    Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein

    2018-02-01

    A free-air ionization chamber, FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for the photon air kerma. For accurate air kerma measurements, the contribution from the scattered photons to the total energy released in the collecting volume must be eliminated. One of the sources of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k_dia, and the diaphragm transmission correction factor, k_tr, were introduced. These factors represent corrections to the measured charge (or current) for the photons scattered from the diaphragm surface and the photons penetrating through the diaphragm volume, respectively. The k_dia and k_tr values were estimated by Monte Carlo simulations. The simulations were performed for mono-energetic photons in the energy range of 20-300 keV. According to the simulation results, in this energy range, the k_dia values vary between 0.9997 and 0.9948, and the k_tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons. Copyright © 2017 Elsevier Ltd. All rights reserved.
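    Correction factors of this kind enter the air kerma determination multiplicatively, alongside the other chamber corrections. A deliberately simplified sketch (the function signature is hypothetical, the charge and mass values are illustrative, and real primary standards apply several further corrections such as humidity and recombination):

```python
def corrected_air_kerma(charge, mass_air, w_over_e, corrections):
    """Simplified free-air-chamber air kerma:
        K = (Q / m_air) * (W/e) * product of correction factors,
    where the corrections dict holds factors such as k_dia and k_tr.
    Illustrative sketch only; not a complete standards formulation."""
    k = 1.0
    for c in corrections.values():
        k *= c
    return (charge / mass_air) * w_over_e * k
```

Because the factors reported above are all close to 1, each shifts the kerma by well under one percent, yet primary standards must account for them.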

  15. Transmittance and scattering during wound healing after refractive surgery

    NASA Astrophysics Data System (ADS)

    Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.

    2004-10-01

    Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are techniques frequently performed to correct ametropia. The two methods have been compared with respect to their modes of healing, but there is no comparison of transmittance and light scattering during this process. Scattering during corneal wound healing is due to three parameters: cellular size, cellular density, and the size of the scar. An increase in the angular width of scattering implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells become fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and immediately afterwards their corneas were removed and placed carefully into a cornea camera support. All optical measurements were made with a scatterometer constructed in our laboratory. Scattering measurements are correlated with transmittance: the smaller the transmittance, the greater the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.

  16. High-fidelity artifact correction for cone-beam CT imaging of the brain

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. 
Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
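    Of the corrections in this framework, the detector lag step is the easiest to illustrate: the measured signal is deconvolved by the measured lag impulse response. A minimal regularized FFT deconvolution sketch (a generic Wiener-style stand-in, not the authors' exact implementation; the regularizer eps is an assumed parameter):

```python
import numpy as np

def deconvolve_lag(measured, lag_irf, eps=1e-6):
    """Correct detector lag by temporal deconvolution: divide out
    the lag impulse response in the frequency domain, with a small
    Wiener-style regularizer eps to keep noise from blowing up where
    the response is weak."""
    n = len(measured)
    H = np.fft.rfft(lag_irf, n)              # zero-padded IRF spectrum
    Y = np.fft.rfft(measured, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n)
```

With a noise-free signal and a known exponential-decay lag response, the deconvolution recovers the original frame sequence almost exactly; on real projection data, eps trades residual lag against noise amplification.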

  17. Radiative corrections to elastic proton-electron scattering measured in coincidence

    NASA Astrophysics Data System (ADS)

    Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.

    2017-05-01

    The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of interaction. These model-independent radiative corrections arise due to emission of the virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup when both the final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.

  18. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
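    The minimum error method reduces to a least-squares minimization that conjugate gradients handles well. A small dense-matrix sketch of that core idea (the Chebyshev basis, preconditioning, and asymptotic boundary matching of the paper are omitted; the right-hand side b is a stand-in for the boundary-condition terms):

```python
import numpy as np

def mem_solve(H, E, b, tol=1e-10, max_iter=500):
    """Minimum-error-method sketch: minimize ||(H - E*I)psi - b||^2
    by conjugate gradients on the normal equations.  H is a real
    symmetric Hamiltonian matrix, E the scattering energy."""
    A = H - E * np.eye(len(H))
    M = A.T @ A                   # symmetric positive definite
    rhs = A.T @ b
    psi = np.zeros_like(b)
    r = rhs - M @ psi
    p = r.copy()
    for _ in range(max_iter):
        rr = r @ r
        if rr < tol:              # squared residual norm
            break
        Mp = M @ p
        alpha = rr / (p @ Mp)
        psi = psi + alpha * p
        r = r - alpha * Mp
        p = r + ((r @ r) / rr) * p
    return psi
```

Squaring the operator worsens the conditioning, which is exactly why the paper preconditions the CG search; this unpreconditioned version is only meant to show the structure of the iteration.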

  19. Applications of the Hybrid Theory to the Scattering of Electrons from HE+ and Li++ and Resonances in these Systems

    NASA Technical Reports Server (NTRS)

    Bhatia, Anand K.

    2008-01-01

    The hybrid theory of electron-hydrogen elastic scattering [1] is applied to the S-wave scattering of electrons from He+ and Li++. In this method, both short-range and long-range correlations are included in the Schrodinger equation at the same time. Phase shifts obtained in this calculation have rigorous lower bounds to the exact phase shifts, and they are compared with those obtained using the Feshbach projection-operator formalism [2], the close-coupling approach [3], and the Harris-Nesbet method [4]. The agreement among all the calculations is very good. These systems have doubly excited, or Feshbach, resonances embedded in the continuum. The resonance parameters for the lowest 1S resonances in He and Li+ are calculated and compared with the results obtained using the Feshbach projection-operator formalism [5,6]. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances and the continuum in which these resonances are embedded.

  20. Metadata-assisted nonuniform atmospheric scattering model of image haze removal for medium-altitude unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun

    2017-09-01

    Haze removal is nontrivial for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the correction of image distortion. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue to the modeling, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
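    Most haze-removal methods, including metadata-assisted ones, start from the classic atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)); once a depth map is available, inverting it for the scene radiance J is direct. A minimal sketch assuming a uniform airlight A and scattering coefficient beta, unlike the nonuniform model proposed in the paper:

```python
import numpy as np

def dehaze(I, airlight, depth, beta=1.0, t_min=0.1):
    """Invert the atmospheric scattering model
        I(x) = J(x)*t(x) + A*(1 - t(x)),  t(x) = exp(-beta*d(x)),
    for scene radiance J given a per-pixel depth map d.  The
    transmission is clamped at t_min to avoid amplifying noise
    at large depths."""
    t = np.maximum(np.exp(-beta * np.asarray(depth, float)), t_min)
    return (np.asarray(I, float) - airlight) / t + airlight
```

The quality of the depth map dominates the result, which is why deriving depth from UAV metadata rather than from image priors is the paper's central point.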

  1. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  2. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer-vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
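    The non-robust baseline that the paper improves on can be sketched in a few lines: fit an exponential decay to the per-section mean intensities of a z-stack and rescale each section back to the top-section level. A minimal NumPy version (the paper's contribution is replacing exactly this least-squares fit with a robust, incremental estimator that ignores outlier pixels):

```python
import numpy as np

def depth_compensation(stack):
    """Baseline depth-attenuation correction for a (z, y, x) stack:
    fit mean(z) ~ I0 * exp(a*z) by a log-linear least-squares fit,
    then rescale every section to the fitted top-section level."""
    stack = np.asarray(stack, dtype=float)
    z = np.arange(stack.shape[0])
    means = stack.mean(axis=(1, 2))
    a, log_I0 = np.polyfit(z, np.log(means), 1)   # slope, intercept
    fitted = np.exp(log_I0 + a * z)
    factors = fitted[0] / fitted                  # per-section gain
    return stack * factors[:, None, None]
```

On a synthetic stack with a purely exponential decay the baseline is exact; real stacks contain bright or dark structures that bias the fit, which is the failure mode robust estimation addresses.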

  3. Neutron scattering from 208Pb at 30.4 and 40.0 MeV and isospin dependence of the nucleon optical potential

    NASA Astrophysics Data System (ADS)

    Devito, R. P.; Khoa, Dao T.; Austin, Sam M.; Berg, U. E. P.; Loc, Bui Minh

    2012-02-01

    Background: Analysis of data involving nuclei far from stability often requires the optical potential (OP) for neutron scattering. Because neutron data are seldom available, whereas proton scattering data are more abundant, it is useful to have estimates of the difference of the neutron and proton optical potentials. This information is contained in the isospin dependence of the nucleon OP. Here we attempt to provide it for the nucleon-208Pb system. Purpose: The goal of this paper is to obtain accurate n+208Pb scattering data and use it, together with existing p+208Pb and 208Pb(p,n)208Bi (IAS) data, to obtain an accurate estimate of the isospin dependence of the nucleon OP at energies in the 30-60-MeV range. Method: Cross sections for n+208Pb scattering were measured at 30.4 and 40.0 MeV, with a typical relative (normalization) accuracy of 2-4% (3%). An angular range of 15∘ to 130∘ was covered using the beam-swinger time-of-flight system at Michigan State University. These data were analyzed by a consistent optical-model study of the neutron data and of elastic p+208Pb scattering at 45 and 54 MeV. These results were combined with a coupled-channel analysis of the 208Pb(p,n) reaction at 45 MeV, exciting the 0+ isobaric analog state (IAS) in 208Bi. Results: The new data and analysis give an accurate estimate of the isospin impurity of the nucleon-208Pb OP at 30.4 MeV caused by the Coulomb correction to the proton OP. The corrections to the real proton OP given by the CH89 global systematics were found to be only a few percent, whereas for the imaginary potential it was greater than 20% at the nuclear surface. 
On the basis of the analysis of the measured elastic n+208Pb data at 40 MeV, a Coulomb correction of similar strength and shape was also predicted for the p+208Pb OP at energies around 54 MeV.Conclusions: Accurate neutron scattering data can be used in combination with proton scattering data and (p,n) charge exchange data leading to the IAS to obtain reliable estimates of the isospin impurity of the nucleon OP.

  4. Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging

    PubMed Central

    Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.

    2017-01-01

    SPECT and, in particular, PET are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e., radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values, several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification and methods to address the challenges for each multimodal combination. PMID:26576737
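For PET, the attenuation correction enabled by a CT- or MR-derived attenuation map reduces to a single line integral per line of response, since the combined path length of the two annihilation photons spans the whole line regardless of where the annihilation occurred. A minimal sketch (the function name and the water-like μ value are illustrative):

```python
import numpy as np

def attenuation_correction_factor(mu_values, step_cm):
    """Attenuation correction factor for one PET line of response (LOR).

    ACF = exp( integral of mu dl ) along the LOR. `mu_values` samples the
    linear attenuation coefficient (1/cm) along the line at spacing
    `step_cm`; in practice these come from a CT- or MR-derived mu-map.
    """
    return np.exp(np.sum(mu_values) * step_cm)

# example: a LOR crossing 20 cm of water (mu ~ 0.096 cm^-1 at 511 keV)
acf = attenuation_correction_factor(np.full(200, 0.096), 0.1)  # ~ 6.8
```

SPECT is harder: single-photon attenuation depends on the (unknown) emission depth along the ray, which is why its correction must be folded into the reconstruction rather than applied as one factor per line.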

  5. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.

  6. Three-dimensional dosimetry of small megavoltage radiation fields using radiochromic gels and optical CT scanning

    NASA Astrophysics Data System (ADS)

    Babic, Steven; McNiven, Andrea; Battista, Jerry; Jordan, Kevin

    2009-04-01

    The dosimetry of small fields as used in stereotactic radiotherapy, radiosurgery and intensity-modulated radiation therapy can be challenging and inaccurate due to partial volume averaging effects and possible disruption of charged particle equilibrium. Consequently, there exists a need for an integrating, tissue equivalent dosimeter with high spatial resolution to avoid perturbing the radiation beam and artificially broadening the measured beam penumbra. In this work, radiochromic ferrous xylenol-orange (FX) and leuco crystal violet (LCV) micelle gels were used to measure relative dose factors (RDFs), percent depth dose profiles and relative lateral beam profiles of 6 MV x-ray pencil beams of diameter 28.1, 9.8 and 4.9 mm. The pencil beams were produced via stereotactic collimators mounted on a Varian 2100 EX linear accelerator. The gels were read using optical computed tomography (CT). Data sets were compared quantitatively with dosimetric measurements made with radiographic (Kodak EDR2) and radiochromic (GAFChromic® EBT) film, respectively. Using a fast cone-beam optical CT scanner (Vista™), corrections for diffusion in the FX gel data yielded RDFs that were comparable to those obtained by minimally diffusing LCV gels. Considering EBT film-measured RDF data as reference, cone-beam CT-scanned LCV gel data, corrected for scattered stray light, were found to be in agreement within 0.5% and -0.6% for the 9.8 and 4.9 mm diameter fields, respectively. The validity of the scattered stray light correction was confirmed by general agreement with RDF data obtained from the same LCV gel read out with a laser CT scanner that is less prone to the acceptance of scattered stray light. Percent depth dose profiles and lateral beam profiles were found to agree within experimental error for the FX gel (corrected for diffusion), LCV gel (corrected for scattered stray light), and EBT and EDR2 films. 
The results from this study reveal that a three-dimensional dosimetry method utilizing optical CT-scanned radiochromic gels allows for the acquisition of a self-consistent volumetric data set in a single exposure, with sufficient spatial resolution to accurately characterize small fields.

  7. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    NASA Astrophysics Data System (ADS)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factor discrepancy while being consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.
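The connection between a TPE correction and the e± ratio can be made concrete with the standard leading-order relation, in which the TPE term flips sign with the lepton charge. This is a generic sketch of that textbook relation, not the paper's parametrization of δ(Q², ε):

```python
def positron_electron_ratio(delta_tpe):
    """e+/e- elastic cross-section ratio for a given TPE correction.

    To leading order sigma(e-) ~ sigma_Born * (1 + delta) and
    sigma(e+) ~ sigma_Born * (1 - delta), so
    R = (1 - delta) / (1 + delta) ~ 1 - 2 * delta.
    A sign change in delta at low Q^2 therefore moves R across 1.
    """
    return (1.0 - delta_tpe) / (1.0 + delta_tpe)
```

A positive δ pushes R below unity; a negative δ (as at low Q² above) pushes it above unity, which is what direct e± comparisons test.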

  8. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and a fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross sections. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improves accuracy in reactor parameter calculations with no compromise in computation time, compared to the equivalence theory.

  9. Methods for assessing forward and backward light scatter in patients with cataract.

    PubMed

    Crnej, Alja; Hirnschall, Nino; Petsoglou, Con; Findl, Oliver

    2017-08-01

    To compare objective methods for assessing backward and forward light scatter and psychophysical tests in patients with cataracts. Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom. Prospective case series. This study included patients scheduled for cataract surgery. Lens opacities were grouped into predominantly nuclear sclerotic, cortical, posterior subcapsular, and mixed cataracts. Backward light scatter was assessed using a rotating Scheimpflug imaging technique (Pentacam HR), forward light scatter using a straylight meter (C-Quant), and straylight using the double-pass method (Optical Quality Analysis System, point-spread function [PSF] meter). The results were correlated with visual acuity under photopic conditions as well as photopic and mesopic contrast sensitivity. The study comprised 56 eyes of 56 patients. The mean age of the 23 men and 33 women was 71 years (range 48 to 84 years). Two patients were excluded. Of the remaining, 15 patients had predominantly nuclear sclerotic cataracts, 13 had cortical cataracts, 11 had posterior subcapsular cataracts, and 15 had mixed cataracts. Correlations between devices were low. The highest correlation was between PSF meter measurements and Scheimpflug measurements (r = 0.32). The best correlation between corrected distance visual acuity was with the PSF meter (r = 0.45). Forward and backward light-scatter measurements cannot be used interchangeably. Scatter as an aspect of quality of vision was independent of acuity. Measuring forward light scatter with the straylight meter can be a useful additional tool in preoperative decision-making. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  10. REPLY: Reply to 'Comment on "Electron-phonon scattering in Sn-doped In2O3 FET nanowires probed by temperature-dependent measurements"'

    NASA Astrophysics Data System (ADS)

    Berengue, Olivia M.; Chiquito, Adenilson J.; Pozzi, Livia P.; Lanfredi, Alexandre J. C.; Leite, Edson R.

    2009-11-01

    In this reply we discuss the use of two- and four-probe methods in the resistivity measurements of ITO nanowires. We point out that the results obtained using two- or four-probe methods are indistinguishable in our case. Additionally, we present the correct values for the resistivity and, consequently, for the density of electrons.

  11. Geometrical correction of the e-beam proximity effect for raster scan systems

    NASA Astrophysics Data System (ADS)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-06-01

    Increasing demands on pattern fidelity and CD accuracy in e-beam lithography require a correction of the e-beam proximity effect. The new needs mainly come from OPC at mask level and from x-ray lithography. The e-beam proximity effect limits the achievable resolution and affects neighboring structures, causing under- or over-exposure depending on the local pattern densities and process settings. Methods to compensate for this unequilibrated dose distribution usually use a dose modulation or multiple passes. In general, raster scan systems are not able to apply variable doses in order to compensate for the proximity effect. For systems of this kind, a geometrical modulation of the original pattern offers a solution for compensating line edge deviations due to the proximity effect. In this paper a new method for the fast correction of the e-beam proximity effect via geometrical pattern optimization is described. The method consists of two steps. In a first step the pattern-dependent dose distribution caused by backscattering is calculated by convolution of the pattern with the long-range part of the proximity function. The restriction to the long-range part results in a quadratic speed gain in computing time for the transformation. The influence of the short-range part coming from forward scattering is not pattern dependent and can therefore be determined separately in a second step. The second calculation yields the dose curve at the border of a written structure. The finite gradient of this curve leads to an edge displacement that depends on the amount of background dose at the observed position, which was previously determined in the pattern-dependent step. This unintended edge displacement is corrected by splitting the line into segments and shifting them by multiples of the writer's address grid in the opposite direction.
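The first, pattern-dependent step described above amounts to an FFT convolution of the layout with the long-range (backscatter) Gaussian of a two-Gaussian proximity function. A minimal sketch assuming illustrative parameters (backscatter range β in pixels, backscatter-to-forward ratio η), not calibrated process values:

```python
import numpy as np

def backscatter_dose(pattern, beta_px, eta):
    """Pattern-dependent backscatter dose via FFT convolution.

    Convolves a binary exposure map with the normalized long-range
    Gaussian g(r) = exp(-r^2/beta^2) / (pi * beta^2) of a two-Gaussian
    proximity function, scaled by the backscatter ratio eta. The
    short-range (forward-scatter) part is pattern independent and is
    handled separately, as in the two-step method above.
    """
    n, m = pattern.shape
    y = np.fft.fftfreq(n) * n                # signed pixel offsets, wrapped
    x = np.fft.fftfreq(m) * m
    yy, xx = np.meshgrid(y, x, indexing="ij")
    g = np.exp(-(xx**2 + yy**2) / beta_px**2) / (np.pi * beta_px**2)
    return eta * np.fft.ifft2(np.fft.fft2(pattern) * np.fft.fft2(g)).real
```

Because only the slowly varying long-range kernel is convolved, the grid can be coarse, which is the source of the quadratic speed gain mentioned above.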

  12. Non-cancellation of electroweak logarithms in high-energy scattering

    DOE PAGES

    Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...

    2015-01-01

    We study electroweak Sudakov corrections in high-energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions involving the rates gg → tt̄, bb̄, tb̄W, tt̄Z, bb̄Z, tt̄H, bb̄H. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.

  13. Laser-plasma interactions in magnetized environment

    NASA Astrophysics Data System (ADS)

    Shi, Yuan; Qin, Hong; Fisch, Nathaniel J.

    2018-05-01

    Propagation and scattering of lasers present new phenomena and applications when the plasma medium becomes strongly magnetized. With mega-Gauss magnetic fields, scattering of optical lasers already becomes manifestly anisotropic. Special angles exist where coherent laser scattering is either enhanced or suppressed, as we demonstrate using a cold-fluid model. Consequently, by aiming laser beams at special angles, one may be able to optimize laser-plasma coupling in magnetized implosion experiments. In addition, magnetized scattering can be exploited to improve the performance of plasma-based laser pulse amplifiers. Using the magnetic field as an extra control variable, it is possible to produce optical pulses of higher intensity, as well as compress UV and soft x-ray pulses beyond the reach of other methods. In even stronger giga-Gauss magnetic fields, laser-plasma interaction enters a relativistic-quantum regime. Using quantum electrodynamics, we compute a modified wave dispersion relation, which enables correct interpretation of Faraday rotation measurements of strong magnetic fields.

  14. Quadratic electroweak corrections for polarized Moller scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov

    2012-01-01

    The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.

  15. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally corrected Nyström method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers.
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
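The Laplace-to-Z mapping at the heart of FDDM can be illustrated for a pure delay: substituting the first-order backward difference s → (1 − 1/z)/Δt into e^(−sτ) and expanding in powers of 1/z gives e^(−a)·e^(a/z) with a = τ/Δt, i.e. Poisson-distributed weights that smear the ideal delay over neighboring time steps. A sketch under that assumption (first-order rule only; the thesis also uses a second-order rule):

```python
import math

def fddm_delay_weights(tau, dt, n_terms):
    """First-order FDDM representation of a time delay exp(-s*tau).

    With s -> (1 - 1/z)/dt, exp(-s*tau) expands into weights
    w_k = exp(-a) * a**k / k! on the delayed samples z**(-k), a = tau/dt.
    The weights sum to 1 and are centered (in the mean) on k = a.
    """
    a = tau / dt
    w = [math.exp(-a)]                 # k = 0 term
    for k in range(1, n_terms):
        w.append(w[-1] * a / k)        # recurrence w_k = w_{k-1} * a / k
    return w
```

The temporal smearing these positive weights introduce is what trades a small loss of accuracy for the unconditional stability noted above.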

  16. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated regions change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounted for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions. The gate-edge correction factor gave estimates of scatterer properties within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and from measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates was not improved at small gate lengths over conventional windowing functions.
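The gating step that the correction factor addresses can be sketched directly: cut a time segment from the rf line, apply a taper (a Hanning window here), and take the magnitude squared of its Fourier transform. An illustrative sketch; the paper's gate-edge correction factor itself is not reproduced:

```python
import numpy as np

def gated_power_spectrum(rf, start, gate_len):
    """Power spectrum of a gated, Hanning-tapered rf segment.

    The taper reduces (but does not remove) the spectral leakage that
    gate edges introduce by truncating the waveform; for short gates
    the residual edge effect is what the correction factor above targets.
    """
    seg = rf[start:start + gate_len]
    w = np.hanning(gate_len)               # taper to soften gate edges
    return np.abs(np.fft.rfft(seg * w)) ** 2
```

For example, a 100 Hz tone sampled at 1 kHz analyzed with a 500-sample gate puts the spectral peak in bin 50 (100 × 500 / 1000).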

  17. Novel measuring strategies in neutron interferometry

    NASA Astrophysics Data System (ADS)

    Bonse, Ulrich; Wroblewski, Thomas

    1985-04-01

    Angular misalignment of a sample in a single crystal neutron interferometer leads to systematic errors in the effective sample thickness and in this way to errors in the determination of the coherent scattering length. The misalignment can be determined and the errors can be corrected by a second measurement at a different angular sample position. Furthermore, a method has been developed which allows the wavelength to be monitored during the measurements. These two techniques were tested by determining the scattering length of copper. A value of bc = 7.66(4) fm was obtained, which is in excellent agreement with previous measurements.

  18. Generalized model screening potentials for Fermi-Dirac plasmas

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2016-04-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, structure factor, and Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid-density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are seemingly apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectrum region between hard X-rays and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to maximally Thomson-scatter at slightly higher wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.

  19. Quark-hadron duality constraints on γZ box corrections to parity-violating elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Nathan L.; Blunden, Peter G.; Melnitchouk, Wally

    2015-12-08

    We examine the interference γZ box corrections to parity-violating elastic electron-proton scattering in the light of the recent observation of quark-hadron duality in parity-violating deep-inelastic scattering from the deuteron, and the approximate isospin independence of duality in the electromagnetic nucleon structure functions down to Q^2 ≈ 1 GeV^2. Assuming that a similar behavior also holds for the γZ proton structure functions, we find that duality constrains the γZ box correction to the proton's weak charge to be Re □_{γZ}^V = (5.4 ± 0.4) × 10^-3 at the kinematics of the Qweak experiment. Within the same model we also provide estimates of the γZ corrections for future parity-violating experiments, such as MOLLER at Jefferson Lab and MESA at Mainz.

  20. Forward multiple scattering corrections as function of detector field of view

    NASA Astrophysics Data System (ADS)

    Zardecki, A.; Deepak, A.

    1983-06-01

    The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small-angle approximation. The method is approximate in the sense that a further approximation is made in addition to the small-angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as of the size of the detector's aperture, for three different values of the optical depth τ (1.0, 4.0, and 10.0). Three cases of aperture size were considered, namely, aperture equal to, smaller than, or larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three cases is clearly evident.

  1. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    PubMed

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  2. Ambient dose equivalent and effective dose from scattered x-ray spectra in mammography for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations.

    PubMed

    Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S

    2006-04-21

    In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd0.9Zn0.1Te (CZT) detector for scattering angles between 30 degrees and 165 degrees. Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.
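The two spectrum summary quantities used above, the HVL and the mean photon energy, can be computed directly from a binned spectrum. A sketch that, for brevity, attenuates photon fluence rather than air kerma (a kerma-weighted version would additionally weight each bin by E·(μen/ρ)(E)); the per-bin attenuation coefficients of the absorber are inputs:

```python
import numpy as np

def mean_energy(energies_keV, fluence):
    """Fluence-weighted mean photon energy of a binned spectrum."""
    return np.sum(energies_keV * fluence) / np.sum(fluence)

def hvl_mm(fluence, mu_per_mm):
    """First half-value layer (mm of absorber) of a binned spectrum.

    Finds, by bisection, the thickness t at which the transmitted
    fluence sum(f_i * exp(-mu_i * t)) drops to half its initial value.
    `mu_per_mm` gives the absorber's attenuation coefficient at each
    spectrum energy (e.g. Al values for a mammographic beam).
    """
    total = fluence.sum()
    lo, hi = 0.0, 1000.0
    for _ in range(60):                       # bisection: interval -> ~1e-16 mm
        mid = 0.5 * (lo + hi)
        t = np.sum(fluence * np.exp(-mu_per_mm * mid)) / total
        lo, hi = (mid, hi) if t > 0.5 else (lo, mid)
    return 0.5 * (lo + hi)
```

For a monoenergetic beam this reduces to the familiar HVL = ln 2 / μ; for a real spectrum the HVL exceeds that of the mean energy because beam hardening removes the soft component first.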

  3. A novel scatter separation method for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-06-01

    X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor of >10 was observed for most inspected volumes of interest when comparing the corrected and uncorrected total volumes.

  4. Electron kinetic effects on optical diagnostics in fusion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V.; Den Hartog, D. J.; Duff, J.

    At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. We calculate electron thermal corrections to the interferometric phase and polarization state of an EM wave propagating along tangential and poloidal chords (Faraday and Cotton-Mouton polarimetry) and perform analysis of the degree of polarization for incoherent TS. The precision of the previous lowest-order model, linear in τ = T_e/(m_e c^2), may be insufficient; we present a more precise model with τ^2-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherently Thomson-scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson-scattered light as a method of T_e measurement relevant to ITER operational scenarios.

  5. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
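The weighting scheme described above reduces to a spectrum-weighted average of the mono-energetic Monte Carlo factors; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def weighted_correction(k_mono, kerma_spectrum):
    """Beam-quality correction factor from mono-energetic factors.

    `k_mono` holds Monte Carlo correction factors k(E) for mono-energetic
    photons at the spectrum's energy bins; `kerma_spectrum` holds the
    measured air kerma spectrum on the same bins. The factor for the real
    beam is the kerma-weighted average of k(E), so any x-ray quality with
    a measured spectrum can be handled without new experimental factors.
    """
    return np.sum(k_mono * kerma_spectrum) / np.sum(kerma_spectrum)
```

Separate factors for electron loss, in-chamber scatter, and diaphragm/front-wall transmission can each be averaged this way and then multiplied.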

  6. Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.

    NASA Technical Reports Server (NTRS)

    Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.

    1972-01-01

    The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.

  7. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross sections were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
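The internal-standard activation technique amounts to a ratio measurement: the unknown cross section follows from the standard's cross section scaled by ratios of measured activities, atom numbers, and correction factors. The function and numbers below are an illustrative sketch, not the paper's actual data.

```python
def cross_section(sigma_std, activity_x, activity_std,
                  atoms_x, atoms_std, factor_x=1.0, factor_std=1.0):
    """Internal-standard ratio:
    sigma_x = sigma_std * (A_x/A_std) * (N_std/N_x) * (F_std/F_x),
    where F bundles decay, efficiency, and Monte Carlo corrections."""
    return sigma_std * (activity_x / activity_std) \
                     * (atoms_std / atoms_x) * (factor_std / factor_x)

# Illustrative numbers: half the activity from half the atoms
# gives the same cross section as the standard.
print(cross_section(0.1, 500.0, 1000.0, 1e20, 2e20))
```

Multiple-scattering and flux-attenuation corrections from a Monte Carlo code such as GEANT4 would enter through the F factors.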

  8. Observations Regarding Scatter Fraction and NEC Measurements for Small Animal PET

    NASA Astrophysics Data System (ADS)

    Yang, Yongfeng; Cherry, S. R.

    2006-02-01

    The goal of this study was to evaluate the magnitude and origin of scattered radiation in a small-animal PET scanner and to assess the impact of these findings on noise equivalent count rate (NECR) measurements, a metric often used to optimize scanner acquisition parameters and to compare one scanner with another. The scatter fraction (SF) was measured for line sources in air and line sources placed within a mouse-sized phantom (25 mm diameter × 70 mm) and a rat-sized phantom (60 mm diameter × 150 mm) on the microPET II small-animal PET scanner. Measurements were performed for lower energy thresholds ranging from 150 to 450 keV and a fixed upper energy threshold of 750 keV. Four different methods were compared for estimating the SF. Significant scatter fractions were measured with just the line source in the field of view, with the spatial distribution of these events consistent with scatter from the gantry and room environment. For mouse imaging, this component dominates over object scatter, and the measured SF is strongly method dependent. The environmental SF rapidly increases as the lower energy threshold decreases and can be more than 30% for an open energy window of 150-750 keV. The object SF originating from the mouse phantom is about 3-4% and does not change significantly as the lower energy threshold increases. The object SF for the rat phantom ranges from 10 to 35% for different energy windows and increases as the lower energy threshold decreases. Because the measured SF is highly dependent on the method, and there is as yet no agreed-upon standard for animal PET, care must be exercised when comparing NECR for small objects between different scanners. Differences may be methodological rather than reflecting any relevant difference in the performance of the scanner. Furthermore, these results have implications for scatter correction methods when the majority of the detected scatter does not arise from the object itself.
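The two metrics discussed above have standard definitions: SF = S/(S+T) and NECR = T²/(T+S+R) for trues T, scattered S, and randoms R. A minimal sketch with invented count values:

```python
def scatter_fraction(scattered, trues):
    """SF = S / (S + T)."""
    return scattered / (scattered + trues)

def necr(trues, scattered, randoms):
    """Noise equivalent count rate: NECR = T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scattered + randoms)

# Illustrative count rates (counts/s), not measured data
T, S, R = 100e3, 30e3, 20e3
print(scatter_fraction(S, T))   # ~0.23
print(necr(T, S, R))
```

The abstract's point is that S (and hence both metrics) depends strongly on how scatter is estimated, so the same formulas can yield non-comparable numbers across scanners.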

  9. Effects of atmospheric aerosols on scattering reflected visible light from earth resource features

    NASA Technical Reports Server (NTRS)

    Noll, K. E.; Tschantz, B. A.; Davis, W. T.

    1972-01-01

    The vertical variations in atmospheric light attenuation under ambient conditions were identified, and a method through which aerial photographs of earth features might be corrected to yield quantitative information about the actual features was provided. A theoretical equation was developed based on the Bouguer-Lambert extinction law and basic photographic theory.

  10. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low-contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
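The core SKS idea is that detected scatter can be modeled as the primary signal convolved with a scatter kernel, and the primary recovered by iterating between the two. The following 1-D toy sketch illustrates that fixed-point structure only; the kernel shape, data, and iteration count are invented and not the paper's method.

```python
import numpy as np

def sks_estimate(measured, kernel, n_iter=10):
    """Iterative scatter-kernel deconvolution (1-D toy):
    model measured = primary + kernel * primary, solve for primary."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")  # scatter estimate
        primary = np.clip(measured - scatter, 0.0, None)     # keep non-negative
    return primary, scatter

measured = np.array([1.0, 1.2, 1.5, 1.2, 1.0])
kernel = 0.05 * np.array([1.0, 2.0, 1.0])   # broad, low-amplitude scatter kernel
primary, scatter = sks_estimate(measured, kernel)
print(primary)
```

In the paper's refinement, the scatter estimates from such a loop are further perturbed to match reference data from the deterministic Boltzmann solver rather than being used as-is.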

  11. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J; Qi, H; Wu, S

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) Forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) Performing a Fourier transform on the projections; 3) Restoring the incomplete frequency data using the consistency condition equation; 4) Performing an inverse Fourier transform; 5) Repeating steps 2)-4) until a stopping criterion is met. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics of the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method.
Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition could effectively restore the incomplete projections, especially for their high frequency component.
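The alternating-domain iteration in steps 1)-5) has the same overall shape as classical Gerchberg-Papoulis band-limited extrapolation: transform, enforce a frequency-domain constraint, transform back, and re-impose the known samples. As a hedged illustration only, the 1-D sketch below substitutes a known frequency band for the John's-equation consistency condition; all data are invented.

```python
import numpy as np

def gp_restore(measured, missing, band, n_iter=50):
    """Fill samples flagged `missing` by repeatedly enforcing a known band."""
    est = measured.copy()
    est[missing] = 0.0                      # initial guess for the lost samples
    for _ in range(n_iter):
        spec = np.fft.fft(est)              # step 2: to frequency domain
        spec[~band] = 0.0                   # step 3: consistency-condition proxy
        back = np.real(np.fft.ifft(spec))   # step 4: back to projection domain
        est[missing] = back[missing]        # step 5: keep measured samples fixed
    return est

n = 32
t = np.arange(n)
truth = np.cos(2 * np.pi * 2 * t / n)        # band-limited test "projection"
band = np.abs(np.fft.fftfreq(n) * n) <= 4    # allowed frequency band
missing = np.zeros(n, dtype=bool)
missing[[0, 7, 16]] = True                   # pretend these samples are lost
restored = gp_restore(truth, missing, band)
print(np.max(np.abs(restored - truth)))      # residual shrinks with iterations
```

The actual method replaces the band-limit constraint with the derived consistency equation relating projection derivatives across geometric parameters.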

  12. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
The use of singular-value decomposition representations is not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  13. Spectral peculiarities of electromagnetic wave scattering by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2006-03-01

    The results are presented of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material. The inadequacy of previously developed models of scattering is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  14. Spectral peculiarities of electromagnetic wave scattered by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2005-09-01

    The results are presented of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material. The inadequacy of previously developed models of scattering is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  15. Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.

    PubMed

    Pourmoghaddas, Amir; Wells, R Glenn

    2016-11-01

    Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE Discovery NM530c and its performance was compared to a dual-energy-window (DEW) SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for 99mTc SPECT (140 ± 14 keV) were validated with point-source measurements and 15 anthropomorphic cardiac-torso phantom experiments with varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC.
For comparison, DEW scatter projections (120 ± 6 keV) were also extracted from the acquired list-mode SPECT data. Either APD or DEW scatter projections were subtracted from corresponding 140 keV measured projections and then reconstructed with AC (APD-SC and DEW-SC). Quantitative accuracy of the activity measured in the heart for the APD-SC and DEW-SC images was assessed against dose calibrator measurements. The difference between modeled and acquired projections was measured as the root-mean-squared error (RMSE). APD-modeled projections for a clinical cardiac study were also evaluated. APD-modeled projections showed good agreement with SPECT measurements and had reduced noise compared to DEW scatter estimates. APD-SC reduced mean error in activity measurement compared to DEW-SC in images and the reduction was statistically significant where the scatter fraction (SF) was large (mean SF = 28.5%, T-test p = 0.007). APD-SC reduced measurement uncertainties as well; however, the difference was not found to be statistically significant (F-test p > 0.5). RMSE comparisons showed that elevated levels of scatter did not significantly contribute to a change in RMSE (p > 0.2). Model-based APD scatter estimation is feasible for dedicated cardiac SPECT scanners with pinhole collimators. APD-SC images performed better than DEW-SC images and improved the accuracy of activity measurement in high-scatter scenarios.
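A generic dual-energy-window correction, as contrasted with APD above, scales the counts acquired in a lower scatter window by a calibration factor k and subtracts them from the photopeak projection. The sketch below is a minimal illustration with an invented k and invented projection values, not the study's calibration.

```python
import numpy as np

def dew_correct(photopeak, scatter_window, k=0.5):
    """DEW scatter correction: P_corrected = P - k * S_window, clipped at zero.
    k is a camera-specific calibration factor (0.5 here is illustrative)."""
    corrected = photopeak - k * scatter_window
    return np.clip(corrected, 0.0, None)

peak = np.array([100.0, 120.0, 90.0])   # photopeak-window counts per pixel
scat = np.array([30.0, 40.0, 20.0])     # scatter-window counts per pixel
print(dew_correct(peak, scat))          # [85. 100. 80.]
```

Because the subtraction propagates the noise of both windows, DEW estimates are noisier than model-based ones, which is the trade-off the abstract quantifies.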

  16. Influence of local-field corrections on Thomson scattering in collision-dominated two-component plasmas.

    PubMed

    Fortmann, Carsten; Wierling, August; Röpke, Gerd

    2010-02-01

    The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This makes it possible to study systematically the impact of electron-ion collisions as well as electron-electron correlations due to degeneracy and short-range interaction on the characteristics of the Thomson scattering signal. As such, the plasmon dispersion and damping width are calculated for a two-component plasma, where the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to the electron-electron correlations are observed at increasing Brueckner parameter rₛ. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications due to different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.

  17. Atmospheric scattering corrections to solar radiometry

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
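Bouguer's law gives I = I₀ exp(−mτ) for airmass m, so τ = −ln(I/I₀)/m; diffuse light entering the field of view inflates I and biases τ low. A minimal sketch with invented numbers, using the ~1% diffuse contribution quoted above:

```python
import math

def optical_depth(I, I0, airmass=1.0):
    """tau = -ln(I / I0) / m, from Bouguer's law I = I0 * exp(-m * tau)."""
    return -math.log(I / I0) / airmass

I0 = 1000.0                               # illustrative exo-atmospheric signal
tau_true = 0.2
I_direct = I0 * math.exp(-tau_true)       # pure direct-beam signal
I_with_diffuse = I_direct * 1.01          # ~1% diffuse light in the FOV
print(optical_depth(I_direct, I0))        # recovers 0.2
print(optical_depth(I_with_diffuse, I0))  # biased low by about ln(1.01)
```

The paper's correction factors quantify exactly this bias as a function of field of view and sky condition.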

  18. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 5: Propagation of propeller tone noise through a fuselage boundary layer

    NASA Technical Reports Server (NTRS)

    Magliozzi, B.; Hanson, D. B.

    1991-01-01

    An analysis of tone noise propagation through a boundary layer and fuselage scattering effects was derived. The analysis is three-dimensional, and the complete wave field is solved by matching analytical expressions for the incident and scattered waves in the outer flow to a numerical solution in the boundary layer flow. The outer wave field is constructed analytically from an incident wave appropriate to the source and a scattered wave in the standard Hankel function form. For the incident wave, an existing frequency-domain propeller noise radiation theory is used. In the boundary layer region, the wave equation is solved by numerical methods. The theoretical analysis is embodied in a computer program which allows the calculation of correction factors for the fuselage scattering and boundary layer refraction effects. The effects are dependent on boundary layer profile, flight speed, and frequency. Corrections can be derived for any point on the fuselage, including those on the opposite side from the source. The theory was verified using limited cases and by comparing calculations with available measurements from JetStar tests of model prop-fans. For the JetStar model scale, the boundary layer refraction effects produce moderate fuselage pressure reinforcements aft of and near the plane of rotation and significant attenuation forward of the plane of rotation at high flight speeds. At lower flight speeds, the calculated boundary layer effects result in moderate amplification over the fuselage area of interest. Apparent amplification forward of the plane of rotation is a result of effective changes in the source directivity due to boundary layer refraction effects. Full-scale effects are calculated to be moderate, providing fuselage pressure amplification of about 5 dB at the peak noise location. Evaluation using available noise measurements was made under high-speed, high-altitude flight conditions.
Comparisons of calculations made of free field noise, using a current frequency-domain propeller noise prediction method, and fuselage effects using this new procedure show good agreement with fuselage measurements over a wide range of flight speeds and frequencies. Correction factors for the JetStar measurements made on the fuselage are provided in an Appendix.

  19. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in the estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
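The core of (unweighted) Wiener estimation is a linear estimator W = C_xy C_yy⁻¹ built from training covariances, mapping measurements y to parameters x. The sketch below shows only that core step on a synthetic training set; the sequential weighting that the paper adds is not reproduced, and all data here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # "tissue parameters" (training)
A = np.array([[1.0, 0.3], [0.2, 1.5], [0.7, 0.4]])  # toy parameter->color map
Y = X @ A.T + 0.01 * rng.normal(size=(200, 3))   # simulated color measurements

# Wiener estimator from training covariances: W = C_xy * C_yy^-1
Cxy = X.T @ Y / len(X)
Cyy = Y.T @ Y / len(X)
W = Cxy @ np.linalg.inv(Cyy)

x_est = (W @ Y.T).T                              # recover parameters from colors
rmse = np.sqrt(np.mean((x_est - X) ** 2))
print(rmse)                                      # small when noise is low
```

The sequential weighted variant would instead emphasize, per parameter, the color channels most informative for that parameter before forming the estimator.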

  20. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  1. Mask patterning process using the negative tone chemically amplified resist TOK OEBR-CAN024

    NASA Astrophysics Data System (ADS)

    Irmscher, Mathias; Beyer, Dirk; Butschke, Joerg; Hudek, Peter; Koepernik, Corinna; Plumhoff, Jason; Rausa, Emmanuel; Sato, Mitsuru; Voehringer, Peter

    2004-08-01

    Optimized process parameters using the TOK OEBR-CAN024 resist for high chrome load patterning have been determined. A tight linearity tolerance for opaque and clear features, independent of the local pattern density, was the goal of our process integration work. For this purpose we evaluated a new correction method taking into account electron scattering and process influences. The method is based on matching the measured pattern geometry by iterative back-simulation using multiple Gauss and/or exponential functions. The obtained control function acts as input for the proximity correction software PROXECCO. Approaches with different pattern oversizing and two Cr thicknesses were investigated and the results are reported. Isolated opaque and clear lines could be realized within a very tight linearity range. The line-width increase of small dense lines, induced by the etching process, could be corrected only partially.

  2. Determination of morphological parameters of biological cells by analysis of scattered-light distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, D.E.

    1979-11-01

    The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data yield the sought-after morphological cell parameters. These results compared favorably to shape parameters of the fabricated cell models, confirming the mathematical model procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)

  3. Methods for modeling non-equilibrium degenerate statistics and quantum-confined scattering in 3D ensemble Monte Carlo transport simulations

    NASA Astrophysics Data System (ADS)

    Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.

    2016-12-01

    Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs.
Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs, despite higher thermal velocities in In0.53Ga0.47As. It also may be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.
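Pauli blocking in a Monte Carlo transport loop reduces to a rejection step: a scattering event into a final state with sampled occupation f is accepted with probability (1 − f). The occupation table, seed, and counts below are invented for illustration.

```python
import random

def pauli_blocked(f_occupation, rng):
    """Return True if the event is Pauli-blocked (rejected),
    which happens with probability equal to the final-state occupation f."""
    return rng.random() < f_occupation

# Toy occupation probabilities per final-state energy bin (invented)
occupation = {0: 0.95, 1: 0.60, 2: 0.10}

rng = random.Random(42)
trials = 10000
accepted = sum(not pauli_blocked(occupation[2], rng) for _ in range(trials))
print(accepted / trials)   # ≈ 1 - 0.10 = 0.9
```

In the paper's scheme, f is not taken from a Fermi distribution but sampled from the simulated local carrier population per energy and valley, so the same rejection step remains valid far from equilibrium.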

  4. A Scattered Light Correction to Color Images Taken of Europa by the Galileo Spacecraft: Initial Results

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Valenti, M.

    2009-12-01

    Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. 
Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process involves using a point-spread function for each filter that helps determine the amount of scattered light expected for a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light for a particular image taken through a particular filter, the Fourier transform of the attenuation function, which is the point spread function for that filter, is convolved with the Fourier transform of the image at the same wavelength. The result is then filtered for noise in the frequency domain, and then transformed back to the spatial domain. This results in a version of the original image that would have been taken without the scattered light contribution. We will report on our initial results from this calibration.
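The frequency-domain removal step described above can be illustrated with a regularized inverse filter: divide the image spectrum by the PSF spectrum, with a small Wiener-style term standing in for the noise filtering. This 1-D toy is a hedged sketch only; the PSF, data, and regularization are invented and do not reflect the SSI calibration.

```python
import numpy as np

def wiener_deconvolve(image, psf, eps=1e-3):
    """Regularized frequency-domain deconvolution (1-D toy)."""
    F_img = np.fft.fft(image)
    F_psf = np.fft.fft(psf, n=len(image))
    H = np.conj(F_psf) / (np.abs(F_psf) ** 2 + eps)  # regularized inverse filter
    return np.real(np.fft.ifft(F_img * H))

# A point source blurred by a PSF with a low-level "scattered light" tail
signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
psf = np.array([0.6, 0.25, 0.15])
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 8)))
restored = wiener_deconvolve(blurred, psf)
print(np.round(restored, 3))   # point source recovered near index 2
```

In the real pipeline the PSF is measured per filter, and the wavelength dependence of the scattered-light halo is what makes a per-filter correction necessary before color analysis.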

  5. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent-field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.

  6. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: A medical linear accelerator-mounted cone beam CT (CBCT) scanner provides useful soft-tissue contrast for image guidance in radiotherapy. The presence of extensive scattered radiation degrades the soft-tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in diagnostic radiography to mitigate scatter. They usually increase the contrast of the scan, but simultaneously increase the noise. For this reason, and considering the other scatter mitigation mechanisms present in a CBCT scanner, previous studies of ASGs with aluminum interspacing have been inconclusive across the range of imaging conditions. In recent years, grids using fiber interspacers have appeared, offering higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat-panel detector of an Elekta Synergy linear accelerator and tested in a phantom study and a clinical study. Because of linac flex and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house-developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 phantom (The Phantom Laboratory) was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head-and-neck region and the other the pelvic region. 
    Phantoms were scanned with and without the grid and reconstructed with and without software correction adapted for the different acquisition scenarios. The parameters used in the phantom study were t{sub cup} for nonuniformity and contrast-to-noise ratio (CNR) for soft-tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated the soft-tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved, compared to no corrections, on average by a factor of 2.8 for the head-sized phantom and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both the soft-tissue visibility and the uniformity of scans when the grid was used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show a significant improvement in soft-tissue visibility and uniformity without the need to increase the imaging dose.
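The angle-dependent gain correction is described only at a high level. A minimal sketch of one plausible realization, assuming flat-field gain maps measured at a few gantry angles with linear interpolation between the two nearest (the function name and data layout are hypothetical), might look like:

```python
def angle_dependent_gain(projection, angle, gain_maps):
    """Divide a projection by a gain image interpolated to the gantry angle.

    gain_maps: dict mapping calibration gantry angle (deg) -> gain image
    (list of rows). Assumes `angle` lies within the calibrated range.
    Interpolating the gain per angle compensates for linac flex, which
    would otherwise leave gridline/ring artifacts after reconstruction.
    """
    angles = sorted(gain_maps)
    lo = max(a for a in angles if a <= angle)   # nearest calibration below
    hi = min(a for a in angles if a >= angle)   # nearest calibration above
    w = 0.0 if hi == lo else (angle - lo) / (hi - lo)
    return [
        [p / ((1 - w) * g0 + w * g1)
         for p, g0, g1 in zip(prow, g0row, g1row)]
        for prow, g0row, g1row in zip(projection, gain_maps[lo], gain_maps[hi])
    ]
```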

  7. On the far-field computation of acoustic radiation forces.

    PubMed

    Martin, P A

    2017-10-01

    It is known that the steady acoustic radiation force on a scatterer due to incident time-harmonic waves can be calculated by evaluating certain integrals of velocity potentials over a sphere surrounding the scatterer. The goal is to evaluate these integrals using far-field approximations and appropriate limits. Previous derivations are corrected, clarified, and generalized. Similar corrections are made to textbook derivations of optical theorems.

  8. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and by scattered neutrons are major problems in fast neutron radiography and must be addressed to improve image quality. This paper focuses on removing these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and used to correct for the influence of the neutron spectrum, and the Point Scattered Function (PScF), simulated with the Monte Carlo program MCNPX, is used to evaluate scattering effects from the object and improve image quality. The analysis results demonstrate the effectiveness of these two corrections.

  9. Correction of the spectral calibration of the Joint European Torus core light detecting and ranging Thomson scattering diagnostic using ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawke, J.; Scannell, R.; Maslov, M.

    2013-10-15

    This work isolated the cause of the observed discrepancy between the electron temperature (T{sub e}) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray-light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray-light filters' transmission functions by variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray-tracing models of the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T{sub e}, partially if not completely removing the observed discrepancy in the measured T{sub e} between the JET core LIDAR TS diagnostic, the High Resolution Thomson Scattering diagnostic, and the Electron Cyclotron Emission diagnostics.

  10. A comparison of observed and analytically derived remote sensing penetration depths for turbid water

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Guraus, E. A.

    1981-01-01

    The depth to which sunlight will penetrate in turbid waters was investigated. The tests were conducted in water spanning a range of single-scattering albedos and over a range of solar elevation angles. Two different techniques were used to determine the depth of light penetration. The measurements showed little change in the depth of sunlight penetration with changing solar elevation angle. A comparison of the penetration depths indicates that the best agreement between the two methods was achieved when the quasi-single scattering relationship was not corrected for solar angle. It is concluded that sunlight penetration depends on inherent water properties only.
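The attenuation relation underlying such penetration-depth estimates can be illustrated with a one-line Beer-Lambert calculation. This is a generic sketch, not the quasi-single-scattering formulation the paper evaluates, and the function name is ours:

```python
import math

def light_depth(k_d, remaining_fraction):
    """Depth (m) at which downwelling irradiance falls to the given
    fraction of its surface value, from Beer-Lambert attenuation
    E(z) = E(0) * exp(-k_d * z), with k_d the diffuse attenuation
    coefficient (1/m). Inherent water properties enter through k_d,
    consistent with the paper's conclusion that penetration depends
    on inherent properties."""
    return -math.log(remaining_fraction) / k_d
```

For example, with k_d = 0.5 m^-1, only 10% of the surface irradiance remains below about 4.6 m.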

  11. Absolute cross section measurements for the scattering of low- and intermediate-energy electrons from PF3. I. Elastic scattering

    NASA Astrophysics Data System (ADS)

    Hishiyama, N.; Hoshino, M.; Blanco, F.; García, G.; Tanaka, H.

    2017-12-01

    We report absolute elastic differential cross sections (DCSs) for electron collisions with phosphorus trifluoride, PF3, molecules (e- + PF3) in the impact energy range of 2.0-200 eV and over a scattering angle range of 10°-150°. Measured angular distributions of scattered electron intensities were normalized by reference to the elastic DCSs of He. Corresponding integral and momentum-transfer cross sections were derived by extrapolating the angular range from 0° to 180° with the help of a modified phase-shift analysis. In addition, due to the large dipole moment of the considered molecule, the dipole-Born correction for the forward scattering angles has also been applied. As a part of this study, independent atom model calculations in combination with screening corrected additivity rule were also performed for elastic and inelastic (electronic excitation plus ionization) scattering using a complex optical potential method. Rotational excitation cross sections have been estimated with a dipole-Born approximation procedure. Vibrational excitations are not considered in this calculation. Theoretical data, at the differential and integral levels, were found to reasonably agree with the present experimental results. Furthermore, we explore the systematics of the elastic DCSs for the four-atomic trifluoride molecules of XF3 (X = B, N, and P) and central P-atom in PF3, showing that, owing to the comparatively small effect of the F-atoms, the present angular distributions of elastic DCSs are essentially dominated by the characteristic of the central P-atom at lower impact energies. Finally, these quantitative results for e- - PF3 collisions were compiled together with the previous data available in the literature in order to obtain a cross section dataset for modeling purposes. 
To comprehensively describe such a considerable amount of data, we proceed by first discussing, in this paper, the vibrationally elastic scattering processes whereas vibrational and electronic excitation shall be the subject of our following paper devoted to inelastic collisions.
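The extrapolate-and-integrate step that turns a measured DCS into integral and momentum-transfer cross sections is standard and can be sketched as follows (trapezoidal integration and the function name are our choices; the input is assumed to already cover 0°-180° after the extrapolation described above):

```python
import math

def integral_cross_sections(angles_deg, dcs):
    """Integral (ICS) and momentum-transfer (MTCS) cross sections from
    a differential cross section tabulated over 0-180 degrees:
        ICS  = 2*pi * integral of dcs(theta) * sin(theta) dtheta
        MTCS = 2*pi * integral of dcs(theta) * (1 - cos(theta)) * sin(theta) dtheta
    Units of the result follow the units of `dcs`."""
    th = [math.radians(a) for a in angles_deg]
    f_ics = [d * math.sin(t) for d, t in zip(dcs, th)]
    f_mtcs = [d * (1 - math.cos(t)) * math.sin(t) for d, t in zip(dcs, th)]

    def trapz(y, x):
        # simple trapezoidal rule over the tabulated grid
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
                   for i in range(len(x) - 1))

    return 2 * math.pi * trapz(f_ics, th), 2 * math.pi * trapz(f_mtcs, th)
```

A quick check: an isotropic DCS of 1 gives ICS = MTCS = 4π, since both angular integrals evaluate to 2.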

  12. Relativistic Corrections to the Sunyaev-Zeldovich Effect for Clusters of Galaxies. III. Polarization Effect

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu

    2000-04-01

    We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.

  13. Automated detection of esophageal dysplasia in in vivo optical coherence tomography images of the human esophagus

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Dong, Jing; Tearney, Guillermo J.; Pitris, Costas

    2018-02-01

    Catheter-based Optical Coherence Tomography (OCT) devices allow real-time and comprehensive imaging of the human esophagus. Hence, they provide the potential to overcome some of the limitations of endoscopy and biopsy, allowing earlier diagnosis and better prognosis for esophageal adenocarcinoma patients. However, the large number of images produced during every scan makes manual evaluation of the data exceedingly difficult. In this study, we propose a fully automated tissue characterization algorithm, capable of discriminating normal tissue from Barrett's Esophagus (BE) and dysplasia through entire three-dimensional (3D) data sets acquired in vivo. The method is based on both the estimation of the scatterer size of the esophageal epithelial cells, using the bandwidth of the correlation of the derivative (COD) method, and intensity-based characteristics. The COD method can effectively estimate the scatterer size of the esophageal epithelial cells, in good agreement with the literature. As expected, both the mean scatterer size and its standard deviation increase with increasing severity of disease (i.e., from normal to BE to dysplasia). The differences in the distribution of scatterer size for each tissue type are statistically significant (p < 0.0001). However, the scatterer size by itself cannot be used to accurately classify the various tissues. With the addition of intensity-based statistics, the correct classification rates for all three tissue types range from 83% to 100%, depending on the lesion size.

  14. Surface areas of fractally rough particles studied by scattering

    NASA Astrophysics Data System (ADS)

    Hurd, Alan J.; Schaefer, Dale W.; Smith, Douglas M.; Ross, Steven B.; Le Méhauté, Alain; Spooner, Steven

    1989-05-01

    The small-angle scattering from fractally rough surfaces has the potential to give information on the surface area at a given resolution. By use of quantitative neutron and x-ray scattering, a direct comparison of surface areas of fractally rough powders was made between scattering and adsorption techniques. This study supports a recently proposed correction to the theory for scattering from fractal surfaces. In addition, the scattering data provide an independent calibration of molecular adsorbate areas.

  15. Correction of complex nonlinear signal response from a pixel array detector

    PubMed Central

    van Driel, Tim Brandt; Herrmann, Sven; Carini, Gabriella; Nielsen, Martin Meedom; Lemke, Henrik Till

    2015-01-01

    The pulsed free-electron laser light sources represent a new challenge to photon area detectors due to the intrinsic spontaneous X-ray photon generation process that makes single-pulse detection necessary. Intensity fluctuations up to 100% between individual pulses lead to high linearity requirements in order to distinguish small signal changes. In real detectors, signal distortions as a function of the intensity distribution on the entire detector can occur. Here a robust method to correct this nonlinear response in an area detector is presented for the case of exposures to similar signals. The method is tested for the case of diffuse scattering from liquids where relevant sub-1% signal changes appear on the same order as artifacts induced by the detector electronics. PMID:25931072

  16. Correction of complex nonlinear signal response from a pixel array detector.

    PubMed

    van Driel, Tim Brandt; Herrmann, Sven; Carini, Gabriella; Nielsen, Martin Meedom; Lemke, Henrik Till

    2015-05-01

    The pulsed free-electron laser light sources represent a new challenge to photon area detectors due to the intrinsic spontaneous X-ray photon generation process that makes single-pulse detection necessary. Intensity fluctuations up to 100% between individual pulses lead to high linearity requirements in order to distinguish small signal changes. In real detectors, signal distortions as a function of the intensity distribution on the entire detector can occur. Here a robust method to correct this nonlinear response in an area detector is presented for the case of exposures to similar signals. The method is tested for the case of diffuse scattering from liquids where relevant sub-1% signal changes appear on the same order as artifacts induced by the detector electronics.

  17. Atmospheric correction of ocean color sensors: analysis of the effects of residual instrument polarization sensitivity.

    PubMed

    Gordon, H R; Du, T; Zhang, T

    1997-09-20

    We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity, in which the linear polarization properties of the top-of-atmosphere reflectance are replaced with those of a Rayleigh-scattering atmosphere, is provided and its efficacy evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.
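The partial correction can be sketched under a commonly assumed sensor model: a polarization-sensitive radiometer responds as L_meas ≈ L_true · (1 + m12·q + m13·u), where q = Q/I and u = U/I are the normalized Stokes components of the incident light and m12, m13 are instrument sensitivity coefficients. The names and the model form here are illustrative assumptions, with the Rayleigh atmosphere supplying q and u as the abstract describes:

```python
def polarization_corrected_radiance(l_measured, m12, m13, q_rayleigh, u_rayleigh):
    """Partially correct a measured radiance for instrument polarization
    sensitivity by assuming the incident light has the normalized Stokes
    components (q, u) of a pure Rayleigh-scattering atmosphere, since the
    true top-of-atmosphere polarization state is unknown."""
    return l_measured / (1.0 + m12 * q_rayleigh + m13 * u_rayleigh)
```

An unpolarized-insensitive instrument (m12 = m13 = 0) leaves the radiance unchanged, as expected.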

  18. Precision measurement of longitudinal and transverse response functions of quasi-elastic electron scattering in the momentum transfer range 0.55 GeV/c ≤ |q| ≤ 0.9 GeV/c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan Yao, Jefferson Lab Hall A Collaboration, E05-110 Collaboration

    2012-04-01

    In order to test the Coulomb sum rule in nuclei, a precision measurement of inclusive electron scattering cross sections in the quasi-elastic region was performed at Jefferson Lab. Incident electrons with energies ranging from 0.4 GeV to 4 GeV scattered off {sup 4}He, {sup 12}C, {sup 56}Fe, and {sup 208}Pb nuclei at four scattering angles (15°, 60°, 90°, 120°), with scattered energies ranging from 0.1 GeV to 4 GeV. The Rosenbluth method with proper Coulomb corrections is used to extract the transverse and longitudinal response functions at three-momentum transfers 0.55 GeV/c {le} |q| {le} 1.0 GeV/c. The Coulomb sum is determined in the same |q| range and will be compared to predictions. Analysis progress and preliminary results will be presented.

  19. A New Clinical Instrument for The Early Detection of Cataract Using Dynamic Light Scattering and Corneal Topography

    NASA Technical Reports Server (NTRS)

    Ansari, Rafat R.; Datiles, Manuel B., III; King, James F.

    2000-01-01

    A growing cataract can be detected at the molecular level using the technique of dynamic light scattering (DLS). However, the success of this method in clinical use depends upon the precise control of the scattering volume inside a patient's eye and especially during patient's repeat visits. This is important because the scattering volume (cross-over region between the scattered light and incident light) inside the eye in a high-quality DLS set-up is very small (few microns in dimension). This precise control holds the key for success in the longitudinal studies of cataract and during anti-cataract drug screening. We have circumvented these problems by fabricating a new DLS fiber optic probe with a working distance of 40 mm and by mounting it inside a cone of a corneal analyzer. This analyzer is frequently used in mapping the corneal topography during PRK (photorefractive keratectomy) and LASIK (laser in situ keratomileusis) procedures in shaping of the cornea to correct myopia. This new instrument and some preliminary clinical tests on one of us (RRA) showing the data reproducibility are described.

  20. New clinical instrument for the early detection of cataract using dynamic light scattering and corneal topography

    NASA Astrophysics Data System (ADS)

    Ansari, Rafat R.; Datiles, Manuel B., III; King, James F.

    2000-06-01

    A growing cataract can be detected at the molecular level using the technique of dynamic light scattering (DLS). However, the success of this method in clinical use depends upon the precise control of the scattering volume inside a patient's eye and especially during patient's repeat visits. This is important because the scattering volume (cross-over region between the scattered light and incident light) inside the eye in a high-quality DLS set-up is very small (few microns in dimension). This precise control holds the key for success in the longitudinal studies of cataract and during anti-cataract drug screening. We have circumvented these problems by fabricating a new DLS fiber optic probe with a working distance of 40 mm and by mounting it inside a cone of a corneal analyzer. This analyzer is frequently used in mapping the corneal topography during PRK (photorefractive keratectomy) and LASIK (laser in situ keratomileusis) procedures in shaping of the cornea to correct myopia. This new instrument and some preliminary clinical tests on one of us (RRA) showing the data reproducibility are described.

  1. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited).

    PubMed

    Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E

    2014-11-01

    At the high electron temperatures anticipated in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/mec², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions, which allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.

  2. Variability of adjacency effects in sky reflectance measurements.

    PubMed

    Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M

    2017-09-01

    Sky reflectance Rsky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in Rsky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous Rsky(λ) observations recorded in two azimuth directions. Adjacency effects contributed to the dependence of Rsky(λ) on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. In the NIR, the reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and by 21.0±19.8% for Rayleigh- and aerosol-scattered Rsky(λ). We discuss under which conditions in situ water reflectance observations require dedicated correction for adjacency effects, and we provide an open source implementation of our method to aid identification of such conditions.
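For context, the standard above-water correction that such sky reflectance measurements feed into can be sketched as below. The effective surface reflectance factor rho = 0.028 is a commonly used default for moderate wind speeds, assumed here rather than taken from the paper:

```python
def remote_sensing_reflectance(l_total, l_sky, e_d, rho=0.028):
    """Estimate the water-leaving (remote-sensing) reflectance from
    above-water measurements by subtracting the surface-reflected sky
    radiance:  R_rs = (L_t - rho * L_sky) / E_d,
    where L_t is total upwelling radiance, L_sky is sky radiance, and
    E_d is downwelling irradiance. Adjacency effects perturb L_sky and
    hence propagate into R_rs through this subtraction."""
    return (l_total - rho * l_sky) / e_d
```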

  3. Data pre-processing: Stratospheric aerosol perturbing effect on the remote sensing of vegetation: Correction method for the composite NDVI after the Pinatubo eruption

    NASA Technical Reports Server (NTRS)

    Vermote, E.; Elsaleous, N.; Kaufman, Y. J.; Dutton, E.

    1994-01-01

    An operational stratospheric correction scheme used after the Mount Pinatubo (Philippines) eruption (June 1991) is presented. The stratospheric aerosol distribution is assumed to vary only with latitude. Every 9 days, the latitudinal distribution of the optical thickness is computed by inverting radiances observed in NOAA AVHRR channel 1 (0.63 micrometers) and channel 2 (0.83 micrometers) over the Pacific Ocean. This radiance data set is used to check the validity of the model used for inversion, by checking the consistency of the optical thickness deduced from each channel as well as from different scattering angles. Using the optical thickness profile previously computed and a radiative transfer code assuming a Lambertian boundary condition, each pixel of channels 1 and 2 is corrected prior to computation of the NDVI (Normalized Difference Vegetation Index). Comparison between corrected, uncorrected, and pre-eruption (1989-1990) NDVI composites shows both the necessity and the accuracy of the operational correction scheme.
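The per-pixel NDVI computed from the two corrected channels follows the standard definition:

```python
def ndvi(ch1_red, ch2_nir):
    """Normalized Difference Vegetation Index from AVHRR channel 1
    (red, 0.63 um) and channel 2 (NIR, 0.83 um) reflectances, applied
    here after the per-pixel stratospheric-aerosol correction:
        NDVI = (NIR - red) / (NIR + red)"""
    return (ch2_nir - ch1_red) / (ch2_nir + ch1_red)
```

Uncorrected aerosol scattering raises channel 1 more than channel 2, biasing NDVI low; that bias is what the operational scheme removes.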

  4. Simulation of inverse Compton scattering and its implications on the scattered linewidth

    NASA Astrophysics Data System (ADS)

    Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.

    2018-03-01

    Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
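For orientation, the standard kinematic estimate of the scattered photon energy in a head-on inverse Compton collision shows how the observation angle broadens the line, which is the linewidth effect analyzed above. This is a textbook sketch, not the paper's scaling law, and the parameter names are ours:

```python
def scattered_photon_energy(gamma, e_laser_ev, theta_rad, m_e_ev=511e3):
    """Scattered photon energy (eV) for a head-on inverse Compton
    collision between an electron of Lorentz factor `gamma` and a laser
    photon of energy `e_laser_ev`, observed at angle `theta_rad` from
    the beam axis:
        E' = 4 * gamma**2 * E_L / (1 + gamma**2 * theta**2 + 4*gamma*E_L/(m_e c^2))
    The gamma**2 * theta**2 term is the angular red-shift that widens
    the observed linewidth for finite collection angles."""
    recoil = 4.0 * gamma * e_laser_ev / m_e_ev
    return 4.0 * gamma**2 * e_laser_ev / (1.0 + gamma**2 * theta_rad**2 + recoil)
```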

  5. Simulation of inverse Compton scattering and its implications on the scattered linewidth

    DOE PAGES

    Ranjan, N.; Terzić, B.; Krafft, G. A.; ...

    2018-03-06

    Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. Here, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016)], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.

  6. Studies of porous anodic alumina using spin echo scattering angle measurement

    NASA Astrophysics Data System (ADS)

    Stonaha, Paul

    The properties of the neutron make it a useful probe for scattering experiments. We have developed a method, dubbed SESAME, in which specially designed magnetic fields encode the scattering signal of a neutron beam into the beam's average Larmor phase. A geometry is presented that delivers the correct Larmor phase (to first order), and it is shown that reasonable variations of the geometry do not significantly affect the net Larmor phase. The solenoids are designed using an analytic approximation. Comparison of this approximate function with finite element calculations and Hall probe measurements confirms its validity, allowing for fast computation of the magnetic fields. The coils were built and tested in-house on the NBL-4 instrument, a polarized neutron reflectometer whose construction is another major portion of this work. Neutron scattering experiments using the solenoids are presented, and the scattering signal from porous anodic alumina is investigated in detail. A model using the Born approximation is developed and compared against the scattering measurements. Using the model, we define the necessary degree of alignment of such samples in a SESAME measurement, and we show how the signal retrieved using SESAME is sensitive to the range of detectable momentum transfer.

  7. Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.

    PubMed

    Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P

    2016-04-15

    We propose four methods for finding local subspaces in large spectral libraries: (a) cosine-angle spectral matching; (b) hit-quality-index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon, and clay soil properties were developed for the whole library and for each subspace. Root mean square error of prediction, computed using a one-third-holdout validation set, was used to evaluate the predictive performance of subspace and global models. The effect of different spectral pretreatments was also tested: first- and second-derivative Savitzky-Golay, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models in all but a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative scatter-corrected spectra, which were 12% better. The subspace approach nevertheless provides novel methods for discovering patterns that may exist in large spectral libraries.
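Multiplicative scatter correction, one of the pretreatments compared above, has a simple standard form: each spectrum is regressed against the mean spectrum of the library and rescaled. A self-contained sketch (pure Python, list-of-lists spectra; the function name is ours):

```python
def multiplicative_scatter_correction(spectra):
    """MSC pretreatment: fit each spectrum x to the mean spectrum m by
    ordinary least squares, x ~ a + b*m, then return (x - a) / b so
    that additive and multiplicative scatter effects are removed."""
    n = len(spectra)
    p = len(spectra[0])
    mean = [sum(s[j] for s in spectra) / n for j in range(p)]
    m_bar = sum(mean) / p
    denom = sum((mj - m_bar) ** 2 for mj in mean)
    corrected = []
    for x in spectra:
        x_bar = sum(x) / p
        b = sum((mj - m_bar) * (xj - x_bar)
                for mj, xj in zip(mean, x)) / denom
        a = x_bar - b * m_bar
        corrected.append([(xj - a) / b for xj in x])
    return corrected
```

Spectra that differ only by an offset and a scale factor collapse onto the mean spectrum after correction.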

  8. Characterization of scatter in digital mammography from use of Monte Carlo simulations and comparison to physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-11-01

    Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01.
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
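
    The two characterization parameters can be written down directly. In the sketch below, the single-exponential PSF form is an assumption for illustration (the abstract itself notes that a biexponential may fit nongrid data better):

```python
import numpy as np

def scatter_fraction(scatter_counts, primary_counts):
    """SF = scatter / (scatter + primary): ratio of measured scatter to
    the total number of detected events."""
    return scatter_counts / (scatter_counts + primary_counts)

def scatter_psf(r, mre):
    """Radially symmetric single-exponential scatter kernel; `mre` is the
    mean radial extent, in the same units as the radius r."""
    return np.exp(-np.asarray(r, float) / mre)
```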

  9. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This correction-based model multiplies a series of correction factors (d/MU = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center position, off-axis position, field size, and off-isocenter position. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M × 100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03 ± 0.98% (mean ± SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system.
However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
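
    A hedged sketch of the correction-factor product follows, using a reduced subset of factors and simple 1D commissioning tables; the table values and factor subset here are hypothetical, not the commissioned data:

```python
import numpy as np

def output_cgy_per_mu(rof, tables, modulation, field_size, ssd, ssd_ref=100.0):
    """Output as a product of correction factors (reduced to
    ROF x SOBPF x FSF x ISF for illustration)."""
    sobpf = np.interp(modulation, tables["M"], tables["SOBPF"])  # 1D table lookup
    fsf = np.interp(field_size, tables["FS"], tables["FSF"])     # 1D table lookup
    isf = (ssd_ref / ssd) ** 2                                   # inverse-square factor
    return rof * sobpf * fsf * isf
```

    The full model would add the 2D-interpolated factors (RSF, OCR) and the gantry angle correction in the same multiplicative fashion.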

  10. Contrast enhanced imaging with a stationary digital breast tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Puett, Connor; Calliste, Jabari; Wu, Gongting; Inscoe, Christina R.; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2017-03-01

    Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions, compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, are important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers a higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically-appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.
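
    The dual-energy subtraction (DES) step described above can be sketched as a weighted log subtraction; the weight w, which in practice would be tuned to cancel background tissue contrast, is left as a free parameter in this illustration:

```python
import numpy as np

def dual_energy_subtract(high, low, w):
    """Weighted log subtraction: suppresses tissue background so that the
    iodine signal dominates the result."""
    return np.log(high) - w * np.log(low)
```

    If the high-energy image factors as high = low**w * iodine, the result is exactly log(iodine), i.e. the background cancels.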

  11. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If using only the information of one single energy scan, there are two types of correction. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could include the x-ray spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed such that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image.
This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes used for correction. The correction is performed by adding a weighted sum of these basis images, with weights chosen to maximize image flatness. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro-CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam hardening-reduced images and is capable of dealing with images affected by high noise and strong artifacts. The algorithm can be used to recover structures that are hardly visible inside beam hardening-affected regions.
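
    The final combination step might look like the following sketch. As an assumption for illustration, "flatness" is measured by the image gradient energy, and a single basis image is weighted via a brute-force search; the paper's actual flatness criterion and multi-basis optimization are not reproduced here.

```python
import numpy as np

def flatness_cost(image):
    """Gradient energy: zero for a perfectly flat image."""
    gy, gx = np.gradient(image)
    return float(np.sum(gx ** 2 + gy ** 2))

def sfebhc_combine(original, basis, weight_grid):
    """Add w * basis to the original image, picking the weight that
    minimizes the gradient energy (i.e. maximizes flatness)."""
    best_w, best_img, best_cost = None, None, np.inf
    for w in weight_grid:                       # brute-force 1D weight search
        candidate = original + w * basis
        cost = flatness_cost(candidate)
        if cost < best_cost:
            best_w, best_img, best_cost = w, candidate, cost
    return best_w, best_img
```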

  12. A model of primary and scattered photon fluence for mammographic x-ray image quantification

    NASA Astrophysics Data System (ADS)

    Tromans, Christopher E.; Cocker, Mary R.; Brady, Michael, Sir

    2012-10-01

    We present an efficient method to calculate the primary and scattered x-ray photon fluence components of a mammographic image. This can be used for a range of clinically important purposes, including estimation of breast density, personalized image display, and quantitative mammogram analysis. The method is based on models of the x-ray tube, the digital detector, and a novel ray tracer which models the diverging beam emanating from the focal spot. The tube model includes consideration of the anode heel effect, and empirical corrections for wear and manufacturing tolerances. The detector model is empirical, being based on a family of transfer functions that cover the range of beam qualities and compressed breast thicknesses encountered clinically. The scatter estimation utilizes optimal information sampling and interpolation (to yield a clinically usable computation time) of scatter calculated using fundamental physics relations. A scatter kernel arising around each primary ray is calculated, and these are summed by superposition to form the scatter image. Beam quality, spatial position in the field (in particular, the depletion of the scatter contribution from the surroundings at the air boundary), and the possible presence of a grid are considered, as is tissue composition via an iterative refinement procedure. We present numerous validation results that use a purpose-designed tissue-equivalent step-wedge phantom. The average differences between actual acquisitions and modelled pixel intensities observed across the adipose-to-fibroglandular attenuation range vary between 5% and 7%, depending on beam quality, and for a single beam quality are 2.09% and 3.36% with and without a grid, respectively.
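
    Summing a scatter kernel around each primary ray by superposition amounts to a 2D convolution of the primary image with the kernel. A minimal sketch (kernel shape and normalization are assumed for illustration, and a spatially invariant kernel is used, unlike the position-dependent kernels described above):

```python
import numpy as np

def scatter_image(primary, kernel):
    """Superpose a small scatter kernel around every primary pixel
    (a direct 2D convolution with zero padding at the borders)."""
    py, px = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(primary.astype(float), ((py, py), (px, px)))
    out = np.zeros(primary.shape, dtype=float)
    for (i, j), k in np.ndenumerate(kernel):
        out += k * padded[i:i + primary.shape[0], j:j + primary.shape[1]]
    return out
```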

  13. Experimental evaluation of effective atomic number of composite materials using back-scattering of gamma photons

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.

    2017-04-01

    A method is presented for calculating the effective atomic number (Zeff) of composite materials using back-scattering of 662 keV gamma photons obtained from a 137Cs mono-energetic radioactive source. The present technique is a non-destructive approach, and is employed to evaluate Zeff of different composite materials by interacting gamma photons with a semi-infinite material in a back-scattering geometry, using a 3″ × 3″ NaI(Tl) scintillation detector. The present work is undertaken to study the effect of target thickness on the intensity distribution of gamma photons which are multiply back-scattered from targets (pure elements) and composites (mixtures of different elements). The intensity of multiply back-scattered events increases with increasing target thickness and finally saturates. The saturation thickness for multiply back-scattered events is used to assign a number (Zeff) to multi-element materials. The response function of the 3″ × 3″ NaI(Tl) scintillation detector is applied to the observed pulse-height distribution to include the contribution of partially absorbed photons. The reduced signal-to-noise ratio reflects the increase in multiply back-scattered events in the response-corrected spectrum. Data obtained from Monte Carlo simulations and the literature also support the present experimental results.
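
    The thickness dependence can be sketched with a common empirical saturation model, I(t) = I_sat(1 − exp(−σt)); this functional form is our assumption for illustration, not necessarily the authors' fit:

```python
import numpy as np

def backscatter_intensity(t, i_sat, sigma):
    """Multiply back-scattered intensity grows with target thickness t and
    saturates at i_sat; sigma sets how quickly saturation is reached."""
    return i_sat * (1.0 - np.exp(-sigma * t))

def saturation_thickness(sigma, fraction=0.99):
    """Thickness at which intensity reaches `fraction` of saturation."""
    return -np.log(1.0 - fraction) / sigma
```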

  14. Acquisition setting optimization and quantitative imaging for 124I studies with the Inveon microPET-CT system.

    PubMed

    Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel

    2012-02-13

    Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography [PET] isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small animal imaging. The noise equivalent count rate [NECR], the scatter fraction [SF], and the gamma-prompt fraction [GF] were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization [2DOSEM], and 3DOSEM with maximum a posteriori [3DOSEM/MAP] algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact on the NECR curve variation. The activity concentration of 124I measured in the uniform region of the image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation.
Nevertheless, an image quality and an activity quantification assessment were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.
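
    The figures of merit combined above follow standard count-rate definitions; a minimal sketch (the k = 2 variant corresponds to the delayed-window randoms estimate in the usual NECR formulation):

```python
def necr(trues, scatters, randoms, k=1.0):
    """Noise equivalent count rate: NECR = T^2 / (T + S + k*R)."""
    return trues ** 2 / (trues + scatters + k * randoms)

def scatter_fraction(scatters, trues):
    """SF = S / (S + T)."""
    return scatters / (scatters + trues)
```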

  15. Polarizability tensor retrieval for magnetic and plasmonic antenna design

    NASA Astrophysics Data System (ADS)

    Bernal Arango, Felipe; Femius Koenderink, A.

    2013-07-01

    A key quantity in the design of plasmonic antennas and metasurfaces, as well as metamaterials, is the electrodynamic polarizability of a single scattering building block. In particular, in the current merging of plasmonics and metamaterials, subwavelength scatterers are judged by their ability to present a large, generally anisotropic electric and magnetic polarizability, as well as a bi-anisotropic magnetoelectric polarizability. This bi-anisotropic response, whereby a magnetic dipole is induced through electric driving, and vice versa, is strongly linked to the optical activity and chiral response of plasmonic metamolecules. We present two distinct methods to retrieve the polarizability tensor from electrodynamic simulations. As a basis for both, we use the surface integral equation (SIE) method to solve for the scattering response of arbitrary objects exactly. In the first retrieval method, we project scattered fields onto vector spherical harmonics with the aid of an exact discrete spherical harmonic Fourier transform on the unit sphere. In the second, we take the effective current distributions generated by SIE as a basis to calculate dipole moments. We verify that the first approach holds for scatterers of any size, while the second is only approximately correct for small scatterers. We present benchmark calculations, revisiting the zero-forward scattering paradox of Kerker et al (1983 J. Opt. Soc. Am. 73 765-7) and Alù and Engheta (2010 J. Nanophoton. 4 041590), relevant in dielectric scattering cancelation and sensor cloaking designs. Finally, we report the polarizability tensor of split rings, and show that split rings will strongly influence the emission of dipolar single emitters. In the context of plasmon-enhanced emission, split rings can imbue their large magnetic dipole moment on the emission of simple electric dipole emitters.
We present a split-ring antenna array design that is capable of converting the emission of a single linear dipole emitter into forward and backward beams of directional emission of opposite handedness. This design can, for instance, find application in the spin angular momentum encoding of quantum information.

  16. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  17. Scatter and cross-talk corrections in simultaneous Tc-99m/I-123 brain SPECT using constrained factor analysis and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Maksud, P.; Kijewski, M. F.; Habert, M. O.; Todd-Pokropek, A.; Aurengo, A.; Moore, S. C.

    2000-08-01

    Simultaneous imaging of Tc-99m and I-123 would have a high clinical potential in the assessment of brain perfusion (Tc-99m) and neurotransmission (I-123) but is hindered by cross-talk between the two radionuclides. Monte Carlo simulations of 15 different dual-isotope studies were performed using a digital brain phantom. Several physiologic Tc-99m and I-123 uptake patterns were modeled in the brain structures. Two methods were considered to correct for cross-talk from both scattered and unscattered photons: constrained spectral factor analysis (SFA) and artificial neural networks (ANN). The accuracy and precision of reconstructed pixel values within several brain structures were compared to those obtained with an energy windowing method (WSA). In I-123 images, mean bias was close to 10% in all structures for SFA and ANN and between 14% (in the caudate nucleus) and 25% (in the cerebellum) for WSA. Tc-99m activity was overestimated by 35% in the cortex and 53% in the caudate nucleus with WSA, but by less than 9% in all structures with SFA and ANN. SFA and ANN performed well even in the presence of high-energy I-123 photons. The accuracy was greatly improved by incorporating the contamination into the SFA model or in the learning phase for ANN. SFA and ANN are promising approaches to correct for cross-talk in simultaneous Tc-99m/I-123 SPECT.

  18. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
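
    The traditional extrapolation procedure that this work calls into question can be sketched as a linear fit of chamber current versus wall thickness, extrapolated to zero thickness; Kwall is then the ratio of the extrapolated current to the current at the nominal wall thickness. The data values in the example are illustrative only, not from the paper.

```python
import numpy as np

def kwall_from_extrapolation(thicknesses, currents, nominal_thickness):
    """Linear extrapolation of chamber current I(t) to zero wall thickness."""
    slope, intercept = np.polyfit(thicknesses, currents, 1)  # I(t) = slope*t + intercept
    i_zero = intercept                                       # extrapolated I at t = 0
    i_nominal = slope * nominal_thickness + intercept        # I at the nominal wall
    return i_zero / i_nominal
```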

  19. Scattering of Femtosecond Laser Pulses on the Negative Hydrogen Ion

    NASA Astrophysics Data System (ADS)

    Astapenko, V. A.; Moroz, N. N.

    2018-05-01

    Elastic scattering of ultrashort laser pulses (USLPs) on the negative hydrogen ion is considered. Results of calculations of the USLP scattering probability are presented and analyzed for pulses of two types: the corrected Gaussian pulse and wavelet pulse without carrier frequency depending on the problem parameters.

  20. Charm-Quark Production in Deep-Inelastic Neutrino Scattering at Next-to-Next-to-Leading Order in QCD.

    PubMed

    Berger, Edmond L; Gao, Jun; Li, Chong Sheng; Liu, Ze Long; Zhu, Hua Xing

    2016-05-27

    We present a fully differential next-to-next-to-leading order calculation of charm-quark production in charged-current deep-inelastic scattering, with full charm-quark mass dependence. The next-to-next-to-leading order corrections in perturbative quantum chromodynamics are found to be comparable in size to the next-to-leading order corrections in certain kinematic regions. We compare our predictions with data on dimuon production in (anti)neutrino scattering from a heavy nucleus. Our results can be used to improve the extraction of the parton distribution function of a strange quark in the nucleon.

  1. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction

    NASA Astrophysics Data System (ADS)

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-06-01

    Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, the scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose using a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction at 1/3 imaging dose could produce satisfactory images, achieving 37% improvement in structural similarity (SSIM) index and 55% improvement in root mean square error (RMSE) compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinically relevant metrics.
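
    The scatter-estimation step of the moving-blocker scheme can be sketched in 1D: the signal in blocked detector rows is taken as (almost) pure scatter and interpolated across the unblocked rows before subtraction. Blocker positions and values below are hypothetical:

```python
import numpy as np

def blocker_scatter_correct(projection, blocked_idx):
    """Interpolate the blocked-row signal (pure scatter) across all rows
    and subtract it from the projection."""
    rows = np.arange(projection.size)
    scatter = np.interp(rows, blocked_idx, projection[blocked_idx])
    return projection - scatter
```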

  2. Validity of using a 3-dimensional PET scanner during inhalation of 15O-labeled oxygen for quantitative assessment of regional metabolic rate of oxygen in man

    NASA Astrophysics Data System (ADS)

    Hori, Yuki; Hirano, Yoshiyuki; Koshino, Kazuhiro; Moriguchi, Tetsuaki; Iguchi, Satoshi; Yamamoto, Akihide; Enmi, Junichiro; Kawashima, Hidekazu; Zeniya, Tsutomu; Morita, Naomi; Nakagawara, Jyoji; Casey, Michael E.; Iida, Hidehiro

    2014-09-01

    Use of 15O labeled oxygen (15O2) and positron emission tomography (PET) allows quantitative assessment of the regional metabolic rate of oxygen (CMRO2) in vivo, which is essential to understanding the pathological status of patients with cerebral vascular and neurological disorders. The method has, however, been challenging, when a 3D PET scanner is employed, largely attributed to the presence of gaseous radioactivity in the trachea and the inhalation system, which results in a large amount of scatter and random events in the PET assessment. The present study was intended to evaluate the adequacy of using a recently available commercial 3D PET scanner in the assessment of regional cerebral radioactivity distribution during an inhalation of 15O2. Systematic experiments were carried out on a brain phantom. Experiments were also performed on a healthy volunteer following a recently developed protocol for simultaneous assessment of CMRO2 and cerebral blood flow, which involves sequential administration of 15O2 and C15O2. A particular intention was to evaluate the adequacy of the scatter-correction procedures. The phantom experiment demonstrated that errors were within 3% at the practically maximum radioactivity in the face mask, with the greatest radioactivity in the lung. The volunteer experiment demonstrated that the counting rate was at peak during the 15O gas inhalation period, within a verified range. Tomographic images represented good quality over the entire FOV, including the lower part of the cerebral structures and the carotid artery regions. The scatter-correction procedures appeared to be important, particularly in the process to compensate for the scatter originating outside the FOV. Reconstructed images dramatically changed if the correction was carried out using inappropriate procedures. This study demonstrated that accurate reconstruction could be obtained when the scatter compensation was appropriately carried out. 
This study also suggested the feasibility of using a state-of-the-art 3D PET scanner in the quantitative PET imaging during inhalation of 15O labeled oxygen.

  3. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction.

    PubMed

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-05-03

    Four-dimensional (4D) X-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, the scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose using a moving blocker (MB) during the 4D CBCT acquisition ("4D MB"), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the X-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction at 1/3 imaging dose could produce satisfactory images, achieving 37% improvement in structural similarity (SSIM) index and 55% improvement in root mean square error (RMSE) compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinically relevant metrics.
© 2018 Institute of Physics and Engineering in Medicine.
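    The scatter-estimation step described above relies on scatter varying slowly across the detector, so samples measured behind the blocker strips can be interpolated into the unblocked regions. The following is a minimal 1D sketch with made-up numbers, not the authors' implementation (which additionally involves motion-compensated 4D reconstruction):

```python
import numpy as np

def estimate_scatter(projection, blocked):
    """Interpolate the scatter field from samples measured behind the
    blocker strips; scatter varies slowly, so linear interpolation of
    the blocked-region samples approximates it across the detector."""
    cols = np.arange(projection.size)
    return np.interp(cols, cols[blocked], projection[blocked])

def correct_projection(projection, blocked):
    scatter = estimate_scatter(projection, blocked)
    return np.clip(projection - scatter, 0.0, None)

# toy 1D projection: smooth scatter everywhere, primary only where unblocked
n = 200
cols = np.arange(n)
blocked = (cols % 40) < 8                  # periodic blocker strips
scatter_true = 10.0 + 5.0 * cols / n       # slowly varying scatter background
projection = scatter_true + np.where(blocked, 0.0, 50.0)  # primary signal = 50
primary = correct_projection(projection, blocked)
```

    In the unblocked region the recovered primary signal is close to its true value, while behind the blocker the corrected signal is (by construction) zero.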

  4. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global corrections that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to better than 99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. 
The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime loss correction while keeping the study duration reasonable.
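    The two strategies can be contrasted with a toy sketch (illustrative count rates and loss fractions, not the phantom data): each projection is either divided by its own measured fractional loss, or all projections share one factor estimated from a sparse subset of 5 projections per detector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_proj = 64
counts = rng.uniform(900.0, 1100.0, size=(n_proj, 128))            # measured projections
frac_loss = 0.10 + 0.01 * np.sin(np.linspace(0.0, np.pi, n_proj))  # ~10% deadtime loss

# (1) projection-based: correct every projection by its own fractional loss
proj_corrected = counts / (1.0 - frac_loss[:, None])

# (2) global: scale all projections by one factor estimated from a
# sparse subset (here 5 evenly spaced projections per detector)
idx = np.linspace(0, n_proj - 1, 5).astype(int)
global_factor = 1.0 / (1.0 - frac_loss[idx].mean())
glob_corrected = counts * global_factor
```

    With slowly varying losses of this magnitude, the total counts recovered by the two approaches agree to well within 1%, consistent with the abstract's finding that sparse sampling suffices.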

  5. Alterations to the relativistic Love-Franey model and their application to inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeile, J.R.

    The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the {pi} and {rho} mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.

  6. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction in hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at the organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.

  7. Holographic method for site-resolved detection of a 2D array of ultracold atoms

    NASA Astrophysics Data System (ADS)

    Hoffmann, Daniel Kai; Deissler, Benjamin; Limmer, Wolfgang; Hecker Denschlag, Johannes

    2016-08-01

    We propose a novel approach to site-resolved detection of a 2D gas of ultracold atoms in an optical lattice. A near-resonant laser beam is coherently scattered by the atomic array, and after passing a lens its interference pattern is holographically recorded by superimposing it with a reference laser beam on a CCD chip. Fourier transformation of the recorded intensity pattern reconstructs the atomic distribution in the lattice with single-site resolution. The holographic detection method requires only about two hundred scattered photons per atom in order to achieve a high reconstruction fidelity of 99.9%. Therefore, additional cooling during detection might not be necessary even for light atomic elements such as lithium. Furthermore, first investigations suggest that small aberrations of the lens can be post-corrected in image processing.
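    The reconstruction step (Fourier transforming the recorded interference intensity) can be illustrated in 1D. The sketch below uses a toy two-atom object and an assumed tilted reference beam; the cross term between reference and object waves reproduces the atom distribution in a sideband of the spectrum, shifted by the reference carrier frequency:

```python
import numpy as np

n = 512
x = np.arange(n)
obj = np.zeros(n, dtype=complex)
obj[200] = 1.0                       # two "atoms" (point scatterers)
obj[260] = 0.7
field = np.fft.ifft(obj)             # object wave arriving at the camera
ref = 0.5 * np.exp(2j * np.pi * 100.0 * x / n)   # tilted reference beam
intensity = np.abs(field + ref) ** 2             # what the CCD records

spec = np.abs(np.fft.fft(intensity))
# the conj(ref)*field cross term reproduces the atom amplitudes at the
# original positions minus the carrier (100), scaled by the reference
# amplitude 0.5: peaks of 0.5 at index 100 and 0.35 at index 160
```

    The autocorrelation and twin-image terms land at other, well-separated frequencies, which is why the off-axis (tilted-reference) geometry allows a clean extraction of the object field.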

  8. Method of measuring blood oxygenation based on spectroscopy of diffusely scattered light

    NASA Astrophysics Data System (ADS)

    Kleshnin, M. S.; Orlova, A. G.; Kirillin, M. Yu.; Golubyatnikov, G. Yu.; Turchin, I. V.

    2017-05-01

    A new approach to the measurement of blood oxygenation is developed and implemented, based on an original two-step algorithm that reconstructs the relative concentrations of biological chromophores (haemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the radiation source. Numerical experiments and validation of the proposed approach on a biological phantom have shown high accuracy in the reconstruction of the optical properties of the object in question, as well as the possibility of correctly calculating haemoglobin oxygenation in the presence of additive noise without calibration of the measuring device. The results of experimental studies in animals agree with previously published results obtained by other research groups and demonstrate the applicability of the developed method to the monitoring of blood oxygenation in tumour tissues.

  9. A noninvasive method of examination of the hemostasis system.

    PubMed

    Kuznik, B I; Fine, I W; Kaminsky, A V

    2011-09-01

    We propose a noninvasive method of in vivo examination of the hemostasis system based on speckle pattern analysis of coherent light scattering from the skin. We compared the results of measuring basic blood coagulation parameters by conventional invasive and noninvasive methods. A strict correlation was found between the results of measurement of soluble fibrin monomer complexes, international normalized ratio (INR), prothrombin index, and protein C content. The noninvasive method of examination of the hemostatic system enables rough evaluation of the intensity of intravascular coagulation and correction of the dose of indirect anticoagulants to maintain desired values of INR or prothrombin index.

  10. Including Delbrück scattering in GEANT4

    NASA Astrophysics Data System (ADS)

    Omer, Mohamed; Hajima, Ryoichi

    2017-08-01

    Elastic scattering is a significant component of γ-ray interactions with matter. Therefore, the planning of experiments involving measurements of γ-rays using Monte Carlo simulations usually includes elastic scattering. However, current simulation tools do not provide a complete picture of elastic scattering. The majority of these tools assume Rayleigh scattering is the primary contributor to elastic scattering and neglect other elastic scattering processes, such as nuclear Thomson and Delbrück scattering. Here, we develop a tabulation-based method to simulate elastic scattering in one of the most common open-source Monte Carlo simulation toolkits, GEANT4. We collectively include three processes: Rayleigh scattering, nuclear Thomson scattering, and Delbrück scattering. Our simulation more appropriately uses differential cross sections based on the second-order scattering matrix instead of current data, which are based on the form factor approximation. Moreover, the superposition of these processes is carefully taken into account, emphasizing the complex nature of the scattering amplitudes. The simulation covers an energy range of 0.01 MeV ≤ E ≤ 3 MeV and all elements with atomic numbers of 1 ≤ Z ≤ 99. In addition, we validated our simulation by comparing the differential cross sections measured in earlier experiments with those extracted from the simulations. We find that the simulations are in good agreement with the experimental measurements. Differences between the experiments and the simulations are 21% for uranium, 24% for lead, 3% for tantalum, and 8% for cerium at 2.754 MeV. Coulomb corrections to the Delbrück amplitudes may account for the relatively large differences that appear at higher Z values.
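    A tabulation-based Monte Carlo model of this kind samples scattering angles from stored differential cross sections. Below is a minimal sketch of inverse-CDF sampling using an illustrative Thomson-like angular shape rather than the actual second-order S-matrix tables:

```python
import numpy as np

def sample_angles(theta, dsigma_domega, n, rng):
    """Draw polar scattering angles from a tabulated differential cross
    section by inverting its cumulative distribution; the solid-angle
    weight sin(theta) is folded into the density."""
    pdf = dsigma_domega * np.sin(theta)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                      # normalize to a proper CDF
    return np.interp(rng.random(n), cdf, theta)

theta = np.linspace(0.0, np.pi, 2000)   # tabulation grid
dsdo = 1.0 + np.cos(theta) ** 2         # illustrative Thomson-like shape
rng = np.random.default_rng(42)
samples = sample_angles(theta, dsdo, 200_000, rng)
```

    For this symmetric angular distribution the sampled cos(theta) averages to zero and its second moment matches the analytic value of 0.4, a quick sanity check that the tabulated sampling is consistent.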

  11. Quantitative assessment of scatter correction techniques incorporated in next generation dual-source computed tomography

    NASA Astrophysics Data System (ADS)

    Mobberley, Sean David

    Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6) and swine (N=13; more human-like rib cage shape) subjects, a lung phantom, and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. 
    When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In vivo data demonstrated considerable variability in tracheal air HU values, influenced by local anatomy, with SS mode scanning, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction serves to provide the more accurate CT lung density measures sought to quantitatively assess the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images acquired without adequate scatter correction cannot be corrected by linear algorithms, given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.

  12. Emerging From Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Guo, Chunle

    2018-03-01

    Underwater vision suffers from severe degradation due to selective attenuation and scattering as light propagates through water. Such degradation not only lowers the quality of underwater images but also limits the performance of vision tasks. Different from existing methods, which either ignore the wavelength dependency of the attenuation or assume a specific spectral profile, we tackle the color distortion problem of underwater images from a new view. In this letter, we propose a weakly supervised color transfer method to correct color distortion, which relaxes the need for paired underwater images for training and can be applied to underwater images taken in unknown locations. Inspired by Cycle-Consistent Adversarial Networks, we design a multi-term loss function including adversarial loss, cycle consistency loss, and SSIM (Structural Similarity Index Measure) loss, which keeps the content and structure of the corrected result the same as the input while the color appears as if the image were taken without the water. Experiments on underwater images captured in diverse scenes show that our method produces visually pleasing results and even outperforms state-of-the-art methods. Besides, our method can improve the performance of vision tasks.

  13. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
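    For orientation, the ideal (spillover-free, complete-saturation) two-site limit that such correction formulae refine can be checked numerically: with the saturated pool held at zero, the Bloch-McConnell equation for the observed pool reduces to a single ODE whose steady state yields the forward rate. This sketch deliberately omits the paper's quadratic correction term:

```python
# two-site saturation transfer with pool B held saturated (Mb = 0):
#   dMa/dt = (M0 - Ma)/T1 - kf * Ma
# the steady state gives kf = (M0/Mss - 1)/T1 (ideal saturation, no spillover)
m0, t1, kf_true, dt = 1.0, 1.2, 0.4, 0.001
m = m0
for _ in range(20000):                  # forward-Euler integration to steady state
    m += dt * ((m0 - m) / t1 - kf_true * m)

kf_est = (m0 / m - 1.0) / t1            # recover the forward rate from Mss/M0
```

    The T1 and kf values here are illustrative placeholders; in vivo, spillover irradiation biases the measured Mss/M0, which is precisely what the correction formulae of the paper compensate for.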

  14. Electroweak radiative corrections to neutrino scattering at NuTeV

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Baur, Ulrich; Wackeroth, Doreen

    2007-04-01

    The W boson mass extracted by the NuTeV collaboration from the ratios of neutral- and charged-current neutrino and anti-neutrino cross sections differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3 σ. Several possible sources for the observed difference have been discussed in the literature, including new physics beyond the Standard Model (SM). However, in order to be able to pin down the cause of this discrepancy and to interpret this result as a deviation from the SM, it is important to include the complete electroweak one-loop corrections when extracting the W boson mass from neutrino scattering cross sections. We will present results of a Monte Carlo program for νN (ν̄N) scattering including the complete electroweak O(α) corrections, which will be used to study the effects of these corrections on the extracted values for the electroweak parameters. We will briefly introduce some of the newly developed computational tools for generating Feynman diagrams and corresponding analytic expressions for one-loop matrix elements.

  15. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that serves to remove the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between ground-measured surface reflectance and atmospherically corrected reflectance for all spectral bands. 
    Further, we aggregated all datasets and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (green, red, NIR and SWIR). To quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the method was evaluated. The analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values in all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and these data sets provide an effective means of carrying out atmospheric corrections in an operational way. 
    Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer
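    As a sketch of the LUT-based inversion, the standard single-layer model relating top-of-atmosphere and surface reflectance can be inverted in closed form once the path reflectance, total transmittance, and spherical albedo are interpolated from 6S-generated tables for the observed aerosol optical depth and geometry. The coefficient values below are illustrative placeholders, not from the study:

```python
def surface_reflectance(rho_toa, rho_path, t_total, s_alb):
    """Invert the standard single-layer atmospheric model
        rho_toa = rho_path + t_total * rho_s / (1 - s_alb * rho_s),
    whose coefficients (path reflectance, total transmittance, spherical
    albedo) would in practice be read from 6S look-up tables indexed by
    aerosol optical depth and viewing geometry."""
    y = (rho_toa - rho_path) / t_total
    return y / (1.0 + s_alb * y)

# round trip with illustrative coefficients: forward model, then inversion
rho_s = 0.30
rho_toa = 0.08 + 0.70 * rho_s / (1.0 - 0.15 * rho_s)
recovered = surface_reflectance(rho_toa, 0.08, 0.70, 0.15)
```

    The inversion is exact for the single-layer model; in an operational chain the per-pixel work reduces to coefficient interpolation plus this closed-form step, which is what makes the approach fast.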

  16. Magnetically confined electron beam system for high resolution electron transmission-beam experiments

    NASA Astrophysics Data System (ADS)

    Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.

    2018-06-01

    A novel experimental setup has been implemented to provide accurate electron scattering cross sections for molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam by a molecular target. High electron energy resolution is achieved through confinement in a magnetic gas trap where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.

  17. A note on extracting electronic stopping from energy spectra of backscattered slow ions applying Bragg's rule

    NASA Astrophysics Data System (ADS)

    Bruckner, B.; Roth, D.; Goebl, D.; Bauer, P.; Primetzhofer, D.

    2018-05-01

    Electronic stopping measurements in chemically reactive targets, e.g., transition and rare earth metals, are challenging. These metals often contain low-Z impurities, which contribute to electronic stopping. In this article, we present two ways to correct for the presence of impurities in the evaluation of proton and He stopping in Ni for primary energies between 1 and 100 keV, either considering or ignoring the contribution of the low-Z impurities to multiple scattering. We find that for protons either method leads to concordant results, but for heavier projectiles, e.g., He ions, the influence of multiple scattering must not be neglected.

  18. Second-order Born calculation of coplanar symmetric (e, 2e) process on Mg

    NASA Astrophysics Data System (ADS)

    Zhang, Yong-Zhi; Wang, Yang; Zhou, Ya-Jun

    2014-06-01

    The second-order distorted wave Born approximation (DWBA) method is employed to investigate the triple differential cross sections (TDCS) of coplanar doubly symmetric (e, 2e) collisions for magnesium at excess energies of 6 eV-20 eV. Compared with the standard first-order DWBA calculations, the inclusion of the second-order Born term in the scattering amplitude improves the agreement with experiments, especially in the backward scattering region of the TDCS. This indicates that the present second-order Born term is capable of giving a reasonable correction to the DWBA model in studying coplanar symmetric (e, 2e) problems of two-valence-electron targets in the low-energy range.

  19. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitive, to some extent, to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicality.

  20. Correction of complex nonlinear signal response from a pixel array detector

    DOE PAGES

    van Driel, Tim Brandt; Herrmann, Sven; Carini, Gabriella; ...

    2015-04-22

    The pulsed free-electron laser light sources represent a new challenge to photon area detectors due to the intrinsic spontaneous X-ray photon generation process that makes single-pulse detection necessary. Intensity fluctuations up to 100% between individual pulses lead to high linearity requirements in order to distinguish small signal changes. In real detectors, signal distortions as a function of the intensity distribution on the entire detector can occur. Here a robust method to correct this nonlinear response in an area detector is presented for the case of exposures to similar signals. The method is tested for the case of diffuse scattering from liquids where relevant sub-1% signal changes appear on the same order as artifacts induced by the detector electronics.

  1. Wavefront shaping to correct intraocular scattering

    NASA Astrophysics Data System (ADS)

    Artal, Pablo; Arias, Augusto; Fernández, Enrique

    2018-02-01

    Cataract is a common ocular pathology that increases the amount of intraocular scattering. It degrades the quality of vision through both blur and contrast reduction of the retinal images. In this work, we propose a non-invasive method, based on wavefront shaping (WS), to minimize cataract effects. For the experimental demonstration of the method, a liquid crystal on silicon (LCoS) spatial light modulator was used for both reproduction and reduction of realistic cataract effects. The LCoS area was separated into two halves conjugated with the eye's pupil by a telescope with unitary magnification. Thus, while phase maps inducing programmable amounts of intraocular scattering (related to cataract severity) were displayed on one half of the LCoS, test wavefronts were sequentially displayed on the other. The imaging improvements were visually evaluated by subjects with no known ocular pathology viewing through the instrument. The diffracted intensity at the exit pupil is analyzed as feedback for the implemented algorithms in the search for the optimum wavefront. Numerical and experimental results of the imaging improvements are presented and discussed.
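    The feedback search for an optimum wavefront can be sketched with the classic sequential-segment algorithm (an assumption for illustration; the abstract does not specify which algorithm was implemented). Each modulator segment's phase is stepped through a set of levels, and the value maximizing the simulated feedback intensity is kept:

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg = 32
aberration = rng.uniform(0.0, 2.0 * np.pi, n_seg)  # simulated "cataract" phase screen

def focus_intensity(slm_phases):
    """Normalized intensity at the focus: coherent sum over the SLM
    segments after the light passes the scattering phase screen."""
    field = np.exp(1j * (slm_phases + aberration)).mean()
    return np.abs(field) ** 2

phases = np.zeros(n_seg)
levels = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
before = focus_intensity(phases)
for _ in range(2):                      # two passes for convergence
    for k in range(n_seg):
        trial = phases.copy()
        scores = []
        for p in levels:
            trial[k] = p
            scores.append(focus_intensity(trial))
        phases[k] = levels[int(np.argmax(scores))]
after = focus_intensity(phases)
```

    Because each update keeps the best of the tested levels (including the current one), the feedback intensity never decreases, and with 16 phase levels the final focus approaches the aberration-free value.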

  2. Application of the Feynman-tree theorem together with BCFW recursion relations

    NASA Astrophysics Data System (ADS)

    Maniatis, M.

    2018-03-01

    Recently, it has been shown that on-shell scattering amplitudes can be constructed by the Feynman-tree theorem combined with the BCFW recursion relations. Since the BCFW relations are restricted to tree diagrams, the preceding application of the Feynman-tree theorem is essential. In this way, amplitudes can be constructed by on-shell and gauge-invariant tree amplitudes. Here, we want to apply this method to the electron-photon vertex correction. We present all the single, double, and triple phase-space tensor integrals explicitly and show that the sum of amplitudes coincides with the result of the conventional calculation of a virtual loop correction.

  3. Testing the Two-Layer Model for Correcting Near Cloud Reflectance Enhancement Using LES SHDOM Simulated Radiances

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert

    2016-01-01

    A transition zone exists between cloudy and clear skies: clouds scatter solar radiation into clear-sky regions, so from a satellite perspective it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed with the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement, and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of enhancements. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.

  4. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  5. Forwardscattering corrections for optical extinction measurements in aerosol media. II - Polydispersions

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Box, M. A.

    1978-01-01

    The paper presents a parametric study of the forwardscattering corrections for experimentally measured optical extinction coefficients in polydisperse particulate media, since some forward-scattered light invariably enters the finite aperture of the detector along with the direct beam. Forwardscattering corrections are computed by two methods: (1) the exact Mie theory, and (2) the approximate Rayleigh diffraction formula for spherical particles. A parametric study of the dependence of the corrections on mode radii, the real and imaginary parts of the complex refractive index, and the half-angle of the detector's view cone has been carried out for three different size distribution functions of the modified gamma type. In addition, a study has been carried out to investigate the range of these parameters in which the approximate formulation is valid. The agreement is especially good for small view-cone angles and large particles, and it improves significantly for slightly absorbing aerosol particles. Also discussed is the dependence of these corrections on the experimental design of the transmissometer systems.

  6. Lidar inelastic multiple-scattering parameters of cirrus particle ensembles determined with geometrical-optics crystal phase functions.

    PubMed

    Reichardt, J; Hess, M; Macke, A

    2000-04-20

    Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on the cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters (laser wavelength and receiver field of view), are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.

  7. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
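    A hedged sketch of the light-obscuration rescaling idea: the instrument, calibrated with polystyrene (PSL) beads, reports the diameter of the bead that would give the same extinction signal, so a lower-contrast particle such as a protein aggregate in water is undersized. The sketch below uses the van de Hulst anomalous-diffraction approximation in place of the paper's full light-scattering model, with illustrative values (protein index 1.41, PSL 1.59, water 1.33, wavelength 0.8 μm) that are assumptions, not the authors' parameters. It remaps a reported diameter by matching extinction cross-sections.

```python
import math

def q_ext(d_um, m_rel, wavelength_um=0.8, n_medium=1.33):
    # van de Hulst anomalous-diffraction approximation to the extinction
    # efficiency of a sphere (valid for m_rel near 1 and large spheres).
    x = math.pi * d_um * n_medium / wavelength_um   # size parameter
    rho = 2.0 * x * (m_rel - 1.0)                   # phase-shift parameter
    if rho < 1e-6:
        return 0.5 * rho * rho                      # small-rho limit
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

def c_ext(d_um, m_rel):
    # Extinction cross-section (um^2).
    return q_ext(d_um, m_rel) * math.pi * d_um ** 2 / 4.0

def rescale_lo_diameter(d_reported_um, m_particle, m_calibration=1.59 / 1.33):
    # Diameter of a particle of relative index m_particle producing the same
    # extinction signal as the calibration sphere the instrument reported;
    # solved by bisection over the diameter.
    target = c_ext(d_reported_um, m_calibration)
    lo, hi = d_reported_um, 50.0 * d_reported_um
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if c_ext(mid, m_particle) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Bisection assumes the cross-section grows monotonically with diameter over the search range, which holds in this example but can fail near the ripples of the extinction efficiency curve.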

  8. Dispersive approach to two-photon exchange in elastic electron-proton scattering

    DOE PAGES

    Blunden, P. G.; Melnitchouk, W.

    2017-06-14

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e+p to e−p cross section ratios from the CLAS, VEPP-3, and OLYMPUS experiments.

  9. Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidars.

    PubMed

    She, C Y

    2001-09-20

    It is well known that scattering lidars, i.e., Mie, aerosol-wind, Rayleigh, high-spectral-resolution, molecular-wind, rotational Raman, and vibrational Raman lidars, are workhorses for probing atmospheric properties, including the backscatter ratio, aerosol extinction coefficient, temperature, pressure, density, and winds. The spectral structure of molecular scattering (strength and bandwidth) and its constituent spectra associated with Rayleigh and vibrational Raman scattering are reviewed. By revisiting the correct nomenclature, distinguishing Cabannes scattering from Rayleigh scattering, and by sharpening the definition of each scattering component in the Rayleigh scattering spectrum, the review allows a systematic, logical, and useful comparison in strength and bandwidth between the scattering components, and in receiver bandwidths (for both nighttime and daytime operation) between the various scattering lidars for atmospheric sensing.

  10. Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.

    2016-01-01

    Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.

  11. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.
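    The statistical principle behind DREAM's ensemble averaging can be illustrated independently of any DSMC code: averaging N independent short re-runs of the same unsteady output reduces the standard deviation of the estimate by roughly 1/sqrt(N). The toy sketch below uses a Gaussian cell average as a stand-in for PDSC output; nothing here reproduces the particle-regeneration machinery of DREAM itself.

```python
import random
import statistics

def cell_average(rng, true_value=1.0, n_particles=200):
    # One DSMC-like instantaneous cell average: the mean of a finite number of
    # particle samples, so it carries statistical scatter ~ 1/sqrt(n_particles).
    return statistics.fmean(rng.gauss(true_value, 1.0) for _ in range(n_particles))

def ensemble_average(rng, n_runs=16):
    # Average the same output over n_runs independent short re-runs, as DREAM
    # does by regenerating particle data shortly before the time of interest.
    return statistics.fmean(cell_average(rng) for _ in range(n_runs))

rng = random.Random(42)
single = [cell_average(rng) for _ in range(300)]
ensembles = [ensemble_average(rng) for _ in range(300)]
scatter_single = statistics.stdev(single)
scatter_ensemble = statistics.stdev(ensembles)
```

    With 16 re-runs the scatter of the averaged output drops by about a factor of four, which is the trade-off DREAM exploits: extra short re-runs in post-processing instead of one long, noisy unsteady run.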

  12. Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1985-01-01

    Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on ground-based measurements of the surface spectral reflectance, compared with scanner data to fix, in a least-squares sense, the parameters of a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.

  13. Elastic scattering and vibrational excitation for electron impact on para-benzoquinone

    NASA Astrophysics Data System (ADS)

    Jones, D. B.; Blanco, F.; García, G.; da Costa, R. F.; Kossoski, F.; Varella, M. T. do N.; Bettega, M. H. F.; Lima, M. A. P.; White, R. D.; Brunger, M. J.

    2017-12-01

    We report on theoretical elastic and experimental vibrational-excitation differential cross sections (DCSs) for electron scattering from para-benzoquinone (C6H4O2), in the intermediate energy range 15-50 eV. The calculations were conducted with two different theoretical methodologies, the Schwinger multichannel method with pseudopotentials (SMCPP) and the independent atom method with screening corrected additivity rule (IAM-SCAR) that also now incorporates a further interference (I) term. The SMCPP with N energetically open electronic states (Nopen) at the static-exchange-plus-polarisation (Nopen ch-SEP) level was used to calculate the scattering amplitudes using a channel coupling scheme that ranges from 1ch-SE up to the 89ch-SEP level of approximation. We found that in going from the 38ch-SEP to the 89ch-SEP, at all energies considered here, the elastic DCSs did not change significantly in terms of both their shapes and magnitudes. This is a good indication that our SMCPP 89ch-SEP elastic DCSs are converged with respect to the multichannel coupling effect for the investigated intermediate energies. While agreement between our IAM-SCAR+I and SMCPP 89ch-SEP computations improves as the incident electron energy increases from 15 eV, overall the level of accord is only marginal. This is particularly true at middle scattering angles, suggesting that our SCAR and interference corrections are failing somewhat for this molecule below 50 eV. We also report experimental DCS results, using a crossed-beam apparatus, for excitation of some of the unresolved ("hybrid") vibrational quanta (bands I-III) of para-benzoquinone. Those data were derived from electron energy loss spectra that were measured over a scattered electron angular range of 10°-90° and put on an absolute scale using our elastic SMCPP 89ch-SEP DCS results. The energy resolution of our measurements was ~80 meV, which is why, at least in part, the observed vibrational features were only partially resolved. 
To the best of our knowledge, there are no other experimental or theoretical vibrational excitation results against which we might compare the present measurements.

  14. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. 
Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
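    For orientation, the MLEM baseline that the joint methods extend can be written in a few lines. The sketch below is a generic toy MLEM activity update on a 3-voxel, 3-LOR system with an invented system matrix into which the (known) attenuation factors are assumed folded; MLRR and MLAA additionally update the attenuation, by registration or by reconstruction, which is not shown here.

```python
# Toy MLEM: A[i][j] is the (invented) probability that a decay in voxel j is
# detected along LOR i, with known attenuation factors already folded in.
A = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.9]]
truth = [2.0, 5.0, 3.0]                        # true activity (arbitrary units)
y = [sum(A[i][j] * truth[j] for j in range(3)) for i in range(3)]  # noiseless data

lam = [1.0, 1.0, 1.0]                          # initial activity estimate
sens = [sum(A[i][j] for i in range(3)) for j in range(3)]  # voxel sensitivities
for _ in range(200):
    # forward-project the current estimate, then apply the multiplicative update
    fp = [sum(A[i][j] * lam[j] for j in range(3)) for i in range(3)]
    lam = [lam[j] / sens[j] * sum(A[i][j] * y[i] / fp[i] for i in range(3))
           for j in range(3)]

fp_final = [sum(A[i][j] * lam[j] for j in range(3)) for i in range(3)]
```

    On noiseless, consistent data the multiplicative update preserves non-negativity and drives the forward projection toward the measured data; the scatter estimate in the paper enters as an additive term in the denominator of such updates.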

  15. A curvature-corrected Kirchhoff formulation for radar sea-return from the near vertical

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.

    1974-01-01

    A new theoretical treatment of the problem of electromagnetic wave scattering from a randomly rough surface is given. A high frequency correction to the Kirchhoff approximation is derived from a field integral equation for a perfectly conducting surface. The correction, which accounts for the effect of local surface curvature, is seen to be identical with an asymptotic form found by Fock (1945) for diffraction by a paraboloid. The corrected boundary values are substituted into the far-field Stratton-Chu integral, and average backscattered powers are computed assuming the scattering surface is a homogeneous Gaussian process. Preliminary calculations for a k^-4 ocean wave spectrum indicate a reasonable modelling of polarization effects near the vertical (theta up to about 45 deg). Correspondence with the results of small perturbation theory is shown.

  16. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios, for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. These include simulated yields obtained with a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that uses the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the large inter-GCM scatter. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.
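    CDF-t belongs to the family of CDF-transform (quantile-mapping) corrections. The sketch below shows only the basic empirical quantile-mapping step x → F_ref^{-1}(F_model(x)) on synthetic data with a constant 2-unit warm bias; the actual CDF-t method adds a further transform so the calibration-period mapping can be carried into the scenario period, which is not reproduced here.

```python
def empirical_quantile_map(model_clim, ref_clim, x):
    # One CDF-transform step: x -> F_ref^{-1}(F_model(x)), with both CDFs
    # taken as empirical step functions of the calibration-period samples.
    m = sorted(model_clim)
    r = sorted(ref_clim)
    p = sum(1 for v in m if v <= x) / len(m)   # F_model(x)
    idx = min(int(p * len(r)), len(r) - 1)     # F_ref^{-1}(p)
    return r[idx]

# Synthetic calibration data: the "model" runs a constant 2 units too warm.
reference = [10.0 + 0.01 * k for k in range(500)]
model_hist = [12.0 + 0.01 * k for k in range(500)]
corrected = [empirical_quantile_map(model_hist, reference, v) for v in model_hist]
```

    After the mapping, the corrected series reproduces the reference distribution almost exactly; because the correction is quantile-wise rather than a single offset, it also removes biases that differ between, say, dry and wet days.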

  17. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V.; Hartog, D. J. Den; Duff, J.

    2014-11-15

    At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(me c²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.

  18. Filling the gap: Calibration of the low molar-mass range of cellulose in size exclusion chromatography with cello-oligomers.

    PubMed

    Oberlerchner, J T; Vejdovszky, P; Zweckmair, T; Kindler, A; Koch, S; Rosenau, T; Potthast, A

    2016-11-04

    Degraded celluloses are becoming increasingly important as part of product streams coming from various biorefinery scenarios. Analysis of the molar mass distribution of such fractions is a challenge, since neither established methods for mono- or disaccharides nor common methods for polysaccharide characterization cover the intermediate oligomer range appropriately. Size exclusion chromatography (SEC) with multi-angle laser light scattering (MALLS), the standard approach for celluloses, suffers from decreased scattering intensities in the lower molar-mass range. This limitation can, in principle, be overcome by calibration, but calibration standards for such "short" celluloses are either not readily available or structurally remote and thus questionable. In this paper, we present the calibration of an SEC system, for the first time, with monodisperse cello-oligomer standards up to about 3400 g mol−1. These cello-oligomers are "short-chain celluloses" and can be seen as the "true" standard compounds, in contrast to commonly used standards that are chemically different from cellulose, such as pullulan, dextran, polystyrene, or poly(methyl methacrylate). The calibration is compared against those commercial standards and correction factors are calculated. Calibrations with non-cellulose standards can now be adjusted to yield better-fitting results, and data already available can be corrected retrospectively. Copyright © 2016 Elsevier B.V. All rights reserved.
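    The calibration itself is the conventional SEC construction: fit log10(M) as a (locally) linear function of elution volume through the oligomer standards, then read unknown masses off the fit. The sketch below uses invented elution volumes, since the paper's values are not given here; the molar masses follow from 162.14·DP + 18.02 g/mol for a cello-oligomer of degree of polymerization DP.

```python
import math

# Hypothetical elution volumes (mL) for cello-oligomer standards DP2..DP7.
masses = [162.14 * n + 18.02 for n in range(2, 8)]   # g/mol
volumes = [16.8, 16.2, 15.7, 15.3, 15.0, 14.7]       # invented, monotone decreasing

# Least-squares fit of log10(M) = a + b * V_e (conventional SEC calibration).
ys = [math.log10(m) for m in masses]
npts = len(volumes)
xbar = sum(volumes) / npts
ybar = sum(ys) / npts
b = sum((x - xbar) * (y - ybar) for x, y in zip(volumes, ys)) / \
    sum((x - xbar) ** 2 for x in volumes)
a = ybar - b * xbar

def molar_mass(v_elu):
    # Read an unknown's molar mass off the oligomer calibration line.
    return 10.0 ** (a + b * v_elu)
```

    A correction factor relative to, e.g., a pullulan calibration would then be the ratio of the masses the two calibration lines assign at the same elution volume.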

  19. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator), and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus avoiding time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can, however, also be combined with other cupping correction algorithms or used in a calibration manner.
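    The driving idea of HDCC is that a homogeneous object should produce a compact intensity histogram, so cupping can be removed by minimizing an entropy measure. The one-dimensional sketch below is a deliberate simplification: it grid-searches a single quadratic coefficient and uses the entropy of the image histogram alone, whereas HDCC optimizes a polynomial applied to forward-projected raw data with a simplex search over the joint entropy of the image and its gradient.

```python
import math

def shannon_entropy(values, bins=16):
    # Entropy of the intensity histogram; a flat (well-corrected) image
    # concentrates in few bins and so has low entropy.
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# Phantom: a homogeneous 1-D profile (value 100) with a parabolic cupping dip.
radii = [-1.0 + 2.0 * k / 200 for k in range(201)]
profile = [100.0 - 30.0 * (1 - r * r) for r in radii]

def corrected(profile, radii, c):
    # Candidate correction: add back a quadratic-in-radius term with weight c
    # (a stand-in for HDCC's polynomial on forward-projected raw data).
    return [p + c * (1 - r * r) for p, r in zip(profile, radii)]

# Grid search over the single coefficient replaces the simplex minimization.
best_c = min((float(c) for c in range(0, 61, 5)),
             key=lambda c: shannon_entropy(corrected(profile, radii, c)))
```

    The entropy objective correctly picks out the coefficient that flattens the cupped profile, without any knowledge of spectra or detector characteristics, which is the appeal of the histogram-driven formulation.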

  20. Variations in Near-Infrared Emissivity of Venus Surface Observed by the Galileo Near-Infrared Mapping Spectrometer

    NASA Astrophysics Data System (ADS)

    Hashimoto, G. L.; Roos-Serote, M.; Sugita, S.

    2004-11-01

    We evaluate the spatial variation of venusian surface emissivity at a near-infrared wavelength using multispectral images obtained by the Near-Infrared Mapping Spectrometer (NIMS) on board the Galileo spacecraft. Galileo made a close flyby of Venus in February 1990. During this flyby, NIMS observed the nightside of Venus with 17 spectral channels, which include the well-known spectral windows at 1.18, 1.74, and 2.3 μm. The surface emissivity is evaluated at 1.18 μm, at which thermal radiation emitted from the planetary surface could be detected. To analyze the NIMS observations, synthetic spectra have been generated by means of a line-by-line radiative transfer program which includes both scattering and absorption. We used the discrete ordinate method to calculate the spectra of a vertically inhomogeneous plane-parallel atmosphere. Gas opacity is calculated based on the method of Pollack et al. (1993), though binary absorption coefficients for continuum opacity are adjusted to achieve an acceptable fit to the NIMS data. We used Mie scattering theory and a cloud model developed by Pollack et al. (1993) to determine the single scattering albedo and scattering phase function of the cloud particles. The vertical temperature profile of the Venus International Reference Atmosphere (VIRA) is used in all our calculations. The analysis procedure is as follows. We first made a correction for emission angle. Then, a modulation of emission by the cloud opacities is removed using simultaneously measured 1.74 and 2.3 μm radiances. The resulting images are correlated with the topographic map of Magellan. To search for variations in surface emissivity, these cloud-corrected images are divided by synthetic radiance maps that were created from the Magellan data. This work has been supported by The 21st Century COE Program of Origin and Evolution of Planetary Systems of the Ministry of Education, Culture, Sports, Science and Technology (MEXT).

  1. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. 
The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  2. MEASUREMENTS OF THE ABSORPTION AND SCATTERING CROSS SECTIONS FOR THE INTERACTION OF SOLAR ACOUSTIC WAVES WITH SUNSPOTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Hui; Chou, Dean-Yi, E-mail: chou@phys.nthu.edu.tw

    The solar acoustic waves are modified by the interaction with sunspots. The interaction can be treated as a scattering problem: an incident wave propagating toward a sunspot is scattered by the sunspot into different modes. The absorption cross section and scattering cross section are two important parameters in the scattering problem. In this study, we use the wavefunction of the scattered wave, measured with a deconvolution method, to compute the absorption cross section σ_ab and the scattering cross section σ_sc for the radial order n = 0-5 for two sunspots, NOAA 11084 and NOAA 11092. In the computation of the cross sections, the random noise and dissipation in the measured acoustic power are corrected. For both σ_ab and σ_sc, the value of NOAA 11092 is greater than that of NOAA 11084, but their overall n dependence is similar: decreasing with n. The ratio of σ_ab of NOAA 11092 to that of NOAA 11084 approximately equals the ratio of sunspot radii for all n, while the ratio of σ_sc of the two sunspots is greater than the ratio of sunspot radii and increases with n. This suggests that σ_ab is approximately proportional to the sunspot radius, while the dependence of σ_sc on radius is faster than linear.

  3. Three-dimensional generalization of the Van Cittert-Zernike theorem to wave and particle scattering

    NASA Astrophysics Data System (ADS)

    Zarubin, Alexander M.

    1993-07-01

    Coherence properties of primary partially coherent radiations (light, X-rays, and particles) elastically scattered from a 3D object consisting of a collection of electrons and nuclei are analyzed in the Fresnel diffraction region and in the far field. The behaviour of the cross-spectral density of the scattered radiation, transverse to and along the local direction of propagation, is shown to be described by the 3D Fourier and Fresnel transforms, respectively, of the generalized radiance function of a scattering secondary source associated with the object. A relativistically correct expression is derived for the mutual coherence function of radiation which takes account of the dispersive propagation of particle beams in vacuum. An effect of the spatial coherence of radiation on the temporal one is found; in the Fresnel diffraction region, in distinction to the far field, both the longitudinal spatial coherence and the spectral width of radiation affect the longitudinal coherence. A solution of the 3D inverse scattering problem for partially coherent radiation is presented. It is shown that the squared modulus of the scattering potential and its 2D projections can be reconstructed from measurements of the modulus and phase of the degree of transverse spatial coherence of the scattered radiation. The results provide a theoretical basis for new methods of image formation and structure analysis in X-ray, electron, ion, and neutron optics.

  4. Development of online automatic detector of hydrocarbons and suspended organic matter by simultaneously acquisition of fluorescence and scattering.

    PubMed

    Mbaye, Moussa; Diaw, Pape Abdoulaye; Gaye-Saye, Diabou; Le Jeune, Bernard; Cavalin, Goulven; Denis, Lydie; Aaron, Jean-Jacques; Delmas, Roger; Giamarchi, Philippe

    2018-03-05

    Permanent online monitoring of water supply pollution by hydrocarbons is needed for various industrial plants, to serve as an alert when thresholds are exceeded. Fluorescence spectroscopy is a suitable technique for this purpose due to its sensitivity and moderate cost. However, fluorescence measurements can be disturbed by the presence of suspended organic matter, which induces beam scattering and absorption, leading to an underestimation of hydrocarbon content. To overcome this problem, we propose an original technique of fluorescence spectra correction, based on a measure of the excitation beam scattering caused by suspended organic matter on the left side of the Rayleigh scattering spectral line. This correction allowed us to obtain a statistically validated estimate of the naphthalene content (used as representative of the polyaromatic hydrocarbon contamination), regardless of the amount of suspended organic matter in the sample. Moreover, it thus becomes possible, based on this correction, to estimate the amount of suspended organic matter. By this approach, the online warning system remains operational even when suspended organic matter is present in the water supply. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Reciprocal space mapping and single-crystal scattering rods.

    PubMed

    Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao

    2005-11-01

    Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.

  6. Quasi-elastic nuclear scattering at high energies

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.

    1992-01-01

    The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.

  7. Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance

    NASA Astrophysics Data System (ADS)

    Hugtenburg, Richard P.; Bradley, David A.

    2004-01-01

The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20%, according to the Thomas-Reiche-Kuhn sum rule, and an energy resolution of 20 eV. Such data are useful for the analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to ASF are given, and new total cross-section data, including that of the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several 100 eV of the edge. Measurements confirm the presence of below-edge bound-bound structure and variations in the structure due to the ionic state that are not currently included in tabulations.

  8. Chiral symmetry constraints on resonant amplitudes

    NASA Astrophysics Data System (ADS)

    Bruns, Peter C.; Mai, Maxim

    2018-03-01

    We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.

  9. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtained in many such examples, in four-point and also higher-point amplitudes.

  10. Spatial coherence effect on layer thickness determination in narrowband full-field optical coherence tomography.

    PubMed

    Safrani, Avner; Abdulhalim, Ibrahim

    2011-06-20

    Longitudinal spatial coherence (LSC) is determined by the spatial frequency content of an optical beam. The use of lenses with a high numerical aperture (NA) in full-field optical coherence tomography and a narrowband light source makes the LSC length much shorter than the temporal coherence length, hence suggesting that high-resolution 3D images of biological and multilayered samples can be obtained based on the low LSC. A simplified model is derived, supported by experimental results, which describes the expected interference output signal of multilayered samples when high-NA lenses are used together with a narrowband light source. An expression for the correction factor for the layer thickness determination is found valid for high-NA objectives. Additionally, the method was applied to a strongly scattering layer, demonstrating the potential of this method for high-resolution imaging of scattering media.

  11. Segmentation-free empirical beam hardening correction for CT.

    PubMed

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. More complex artifacts, like streaks between dense objects, need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector; these assumptions concern the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation, and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. To avoid segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. 
This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing image flatness through the addition of a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired with a dual-source spiral CT scanner, a digital volume tomograph, and a dual-source micro-CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam-hardening-affected regions.
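The final step, adding a weighted sum of basis images so as to maximize image flatness, can be posed as a linear least-squares problem. The sketch below is our illustration under that assumption; it treats the basis images as already reconstructed (in sfEBHC they come from monomial combinations of the monochromatic forward projections) and minimizes the variance of the corrected image, which is one concrete reading of "flatness".

```python
import numpy as np

def flatness_weights(image, basis):
    """Find weights w_k so that image + sum_k w_k * basis_k has minimum
    variance (maximum flatness).  Solved as a linear least-squares fit of
    the mean-subtracted basis images to the negated, mean-subtracted
    artifact-bearing image.  `basis` is a list of arrays shaped like
    `image`."""
    A = np.stack([b.ravel() - b.mean() for b in basis], axis=1)
    y = -(image.ravel() - image.mean())
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w
```

If one basis image reproduces a cupping-like ramp that contaminates an otherwise flat region, the fitted weight cancels it exactly.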

  12. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
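The core retrieval step, estimating the phase from the measured amplitude via the Kramers-Kronig relation, can be sketched as the Hilbert transform of the log-amplitude ratio. This bare-bones illustration is ours: the function name is invented, and the windowing, sign conventions, and the phase detrending and scaling corrections that are the actual subject of the paper are all omitted.

```python
import numpy as np
from scipy.signal import hilbert

def kk_raman_retrieval(i_cars, i_nrb):
    """Retrieve a Raman-like spectrum from a CARS spectrum and a
    nonresonant-background (NRB) reference.  The phase is estimated as
    the Hilbert transform of the log of the amplitude ratio
    (Kramers-Kronig); the result is Im{amp * exp(i*phase)}."""
    amp = np.sqrt(i_cars / i_nrb)
    phase = np.imag(hilbert(np.log(amp)))
    return amp * np.sin(phase)
```

When a reference NRB from glass or water is substituted for the true NRB, the retrieved amplitude and phase both acquire errors; the paper's phase detrending and scaling address exactly that mismatch.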

  13. Imaging strategies using focusing functions with applications to a North Sea field

    NASA Astrophysics Data System (ADS)

    da Costa Filho, C. A.; Meles, G. A.; Curtis, A.; Ravasi, M.; Kritski, A.

    2018-04-01

Seismic methods are used in a wide variety of contexts to investigate subsurface Earth structures, and to explore and monitor resources and waste-storage reservoirs in the upper ˜100 km of the Earth's subsurface. Reverse-time migration (RTM) is one widely used seismic method which constructs high-frequency images of subsurface structures. Unfortunately, RTM has certain disadvantages shared with other conventional single-scattering-based methods, such as not being able to correctly migrate multiply scattered arrivals. In principle, the recently developed Marchenko methods can be used to migrate all orders of multiples correctly. In practice, however, Marchenko methods are costlier to compute than RTM: for a single imaging location, the cost of performing the Marchenko method is several times that of standard RTM, and performing RTM itself requires dedicated use of some of the largest computers in the world for individual data sets. A different imaging strategy is therefore required. We propose a new set of imaging methods which use so-called focusing functions to obtain images with few artifacts from multiply scattered waves, while greatly reducing the number of points across the image at which the Marchenko method need be applied. Focusing functions are outputs of the Marchenko scheme: they are solutions of wave equations that focus in time and space at particular surface or subsurface locations. However, they are mathematical rather than physical entities, being defined only in reference media that are equal to the true Earth above their focusing depths but homogeneous below. Here, we use these focusing functions as virtual source/receiver surface seismic surveys, the upgoing focusing function being the virtual received wavefield that is created when the downgoing focusing function acts as a spatially distributed source. These source/receiver wavefields are used in three imaging schemes: one allows specific individual reflectors to be selected and imaged. 
The other two schemes provide either targeted or complete images with distinct advantages over current RTM methods, such as fewer artifacts and artifacts that occur in different locations. The latter property allows the recently published `combined imaging' method to remove almost all artifacts. We show several examples to demonstrate the methods: acoustic 1-D and 2-D synthetic examples, and a 2-D line from an ocean bottom cable field data set. We discuss an extension to elastic media, which is illustrated by a 1.5-D elastic synthetic example.

  14. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained with the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong forms of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
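The basic construction behind such operators, a least-squares fit of a local Taylor expansion on scattered neighbours, can be sketched in 2D as follows. This is our illustrative implementation of the plain Taylor/least-squares Laplacian, without the renormalisation correction the paper introduces.

```python
import numpy as np

def mls_laplacian(center, pts, vals, f_center):
    """Approximate the Laplacian of f at `center` from scattered
    neighbours `pts` with values `vals`, by least-squares fitting the
    local quadratic Taylor expansion
        f(x) ~ f_center + g . d + 0.5 * d^T H d,   d = x - center.
    The Laplacian estimate is the trace of the fitted Hessian H."""
    d = pts - center
    A = np.column_stack([
        d[:, 0], d[:, 1],                                       # gradient
        0.5 * d[:, 0]**2, d[:, 0] * d[:, 1], 0.5 * d[:, 1]**2,  # Hessian
    ])
    coef, *_ = np.linalg.lstsq(A, vals - f_center, rcond=None)
    return coef[2] + coef[4]  # H_xx + H_yy
```

Because the fit reproduces quadratics exactly, the operator is exact for any quadratic field regardless of how irregular the neighbour arrangement is, which is the property the paper's convergence tests probe on disordered point clouds.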

  15. Perturbative Quantum Gravity and its Relation to Gauge Theory.

    PubMed

    Bern, Zvi

    2002-01-01

    In this review we describe a non-trivial relationship between perturbative gauge theory and gravity scattering amplitudes. At the semi-classical or tree-level, the scattering amplitudes of gravity theories in flat space can be expressed as a sum of products of well defined pieces of gauge theory amplitudes. These relationships were first discovered by Kawai, Lewellen, and Tye in the context of string theory, but hold more generally. In particular, they hold for standard Einstein gravity. A method based on D -dimensional unitarity can then be used to systematically construct all quantum loop corrections order-by-order in perturbation theory using as input the gravity tree amplitudes expressed in terms of gauge theory ones. More generally, the unitarity method provides a means for perturbatively quantizing massless gravity theories without the usual formal apparatus associated with the quantization of constrained systems. As one application, this method was used to demonstrate that maximally supersymmetric gravity is less divergent in the ultraviolet than previously thought.
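At four points, for example, the Kawai-Lewellen-Tye relation takes the compact form (conventions for factors of $i$ and the choice of Mandelstam invariant vary across the literature):

$$
M_4^{\text{tree}}(1,2,3,4) = -i\, s_{12}\, A_4^{\text{tree}}(1,2,3,4)\, \tilde{A}_4^{\text{tree}}(1,2,4,3),
$$

where $M_4$ is the gravity amplitude, $A_4$ and $\tilde{A}_4$ are color-ordered gauge-theory partial amplitudes, and $s_{12} = (k_1 + k_2)^2$.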

  16. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    PubMed

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for the suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for the estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomography (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA cm(-3) and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those of osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on the assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  17. Shear and bulk viscosity of high-temperature gluon plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Hou, De-Fu

    2018-05-01

We calculate the shear viscosity (η) and bulk viscosity (ζ) to entropy density (s) ratios η/s and ζ/s of a gluon plasma system in kinetic theory, including both the elastic gg ↔ gg forward scattering and the inelastic soft gluon bremsstrahlung gg ↔ ggg processes. Owing to the suppressed contribution to η and ζ of the gg ↔ gg forward scattering and the effective g ↔ gg gluon splitting, Arnold, Moore and Yaffe (AMY) and Arnold, Dogan and Moore (ADM) obtained the leading-order computations of η and ζ in high-temperature QCD matter. In this paper, we calculate the correction to η and ζ from the soft gluon bremsstrahlung gg ↔ ggg process with an analytic method. We find that the contribution of the collision term from the gg ↔ ggg soft gluon bremsstrahlung process is just a small perturbation to the gg ↔ gg scattering process and that the correction is at the ∼5% level. We then obtain the bulk viscosity of the gluon plasma for the number-changing process. Furthermore, our leading-order result for the bulk viscosity in high-temperature gluon plasma is ζ ∝ α_s² T³ / ln(α_s⁻¹). Supported by the Ministry of Science and Technology of China (MSTC) under the "973" Project (2015CB856904(4)) and the National Natural Science Foundation of China (11735007, 11521064).

  18. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    NASA Astrophysics Data System (ADS)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget, plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
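In the standard linear-Gaussian setting (Rodgers' formulation), the OEM retrieval combines the measurement $y$ with the a priori state $x_a$ as

$$
\hat{x} = x_a + \left(K^{T} S_\epsilon^{-1} K + S_a^{-1}\right)^{-1} K^{T} S_\epsilon^{-1} \left(y - K x_a\right),
$$

where $K$ is the forward-model Jacobian and $S_\epsilon$, $S_a$ are the measurement-noise and a priori covariance matrices. The inverted term is also the posterior covariance of $\hat{x}$, which is what supplies the full random-uncertainty budget the abstract refers to.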

  19. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

The primary focus of this proposed research is atmospheric correction algorithm evaluation and development and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts appearing in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H; Kong, V; Jin, J

Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring two complementary projections at each position, which increases scanning time. This study reports our first result using an inter-projection sensor fusion (IPSF) method to estimate the missing projection data in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3-mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude and up to a 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the two neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without the SMOG system. Results: The SMOG-IPSF method may reduce imaging dose by half because of the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that the spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
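The neighbour-based estimation of grid-blocked pixels can be sketched as follows. This is a toy stand-in for the authors' IPSF algorithm, not the algorithm itself: it simply averages the two neighbouring projections at blocked pixel positions (which the complementary grid patterns leave unblocked), ignoring the view-angle difference between projections that a real implementation must account for.

```python
import numpy as np

def ipsf_fill(projs, masks):
    """Fill grid-blocked pixels of each projection using its two
    neighbouring projections, whose grid patterns are complementary.
    projs: (n, H, W) float array of projections.
    masks: (n, H, W) bool array, True where the grid blocks the signal."""
    filled = projs.copy()
    n = len(projs)
    for i in range(n):
        # Blocked pixels in projection i are measured in projections
        # i-1 and i+1; estimate them as the mean of those neighbours.
        neighbour_mean = 0.5 * (projs[(i - 1) % n] + projs[(i + 1) % n])
        filled[i][masks[i]] = neighbour_mean[masks[i]]
    return filled
```

The circular indexing assumes a full rotation, so the first and last projections also have two neighbours.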

  1. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but sometimes we could not define the ROI of the non-specific binding concentration (the reference region) or calculate the SBR appropriately with a 50% threshold. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBRs calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
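A simplified 2D sketch of the threshold-based reference-region construction and the resulting ratio might look like the following. The threshold fraction, the pixel-count erosion standing in for the 20-mm inward margin, and the simple mean-based ratio are all illustrative simplifications of the Tossici-Bolt volume-based SBR, and the function name is ours.

```python
import numpy as np
from scipy import ndimage

def simple_sbr(image, striatum_mask, threshold_frac=0.3, margin_px=2):
    """Toy SBR: the outer contour is the set of pixels above
    threshold_frac * max; the reference region is that contour eroded by
    margin_px (a stand-in for the 20-mm margin) with the striatum
    excluded.  Returns (C_striatum - C_ref) / C_ref."""
    contour = image > threshold_frac * image.max()
    inner = ndimage.binary_erosion(contour, iterations=margin_px)
    ref = inner & ~striatum_mask
    c_ref = image[ref].mean()
    c_str = image[striatum_mask].mean()
    return (c_str - c_ref) / c_ref
```

Lowering `threshold_frac` from 0.5 to 0.3 enlarges the outer contour, which is the effect the fixing-method comparison in the study explores.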

  2. Density-functional calculations of transport properties in the nondegenerate limit and the role of electron-electron scattering

    DOE PAGES

    Desjarlais, Michael P.; Scullard, Christian R.; Benedict, Lorin X.; ...

    2017-03-13

We compute electrical and thermal conductivities of hydrogen plasmas in the non-degenerate regime using Kohn-Sham Density Functional Theory (DFT) and an application of the Kubo-Greenwood response formula, and demonstrate that for thermal conductivity, the mean-field treatment of the electron-electron (e-e) interaction therein is insufficient to reproduce the weak-coupling limit obtained by plasma kinetic theories. An explicit e-e scattering correction to the DFT is posited by appealing to Matthiessen's Rule and the results of our computations of conductivities with the quantum Lenard-Balescu (QLB) equation. Further motivation of our correction is provided by an argument arising from the Zubarev quantum kinetic theory approach. Significant emphasis is placed on our efforts to produce properly converged results for plasma transport using Kohn-Sham DFT, so that an accurate assessment of the importance and efficacy of our e-e scattering corrections to the thermal conductivity can be made.

  3. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Eva E.; Martin, William R.

Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. 
These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on the fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using methods developed in this dissertation showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme.

  4. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE PAGES

    Davidson, Eva E.; Martin, William R.

    2017-05-26

Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. 
    These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on the fly during the random walk of the neutron. Results of criticality studies on fuel pin and fuel assembly calculations using the methods developed in this dissertation agreed closely with results using the reference Doppler-broadened rejection correction (DBRC) scheme.
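    The even-ordered polynomial representation of the energy-dependent scattering cross section can be sketched as a least-squares fit restricted to even powers of energy. This is an illustrative reconstruction, not the dissertation's code; the synthetic cross-section data and function names are assumptions:

```python
import numpy as np

def fit_even_polynomial(energy, sigma, order):
    """Least-squares fit of sigma_s(E) using only even powers of E up to
    `order`; returns c with sigma ~ sum_k c[k] * E**(2k)."""
    # Vandermonde-style matrix with columns E^0, E^2, ..., E^order
    A = np.column_stack([energy ** (2 * k) for k in range(order // 2 + 1)])
    c, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    return c

def eval_even_polynomial(c, energy):
    return sum(ck * energy ** (2 * k) for k, ck in enumerate(c))

# Synthetic cross section that is exactly a 4th-order even polynomial
energy = np.linspace(0.5, 2.0, 50)              # arbitrary energy units
sigma = 3.0 + 0.8 * energy**2 - 0.1 * energy**4
coeffs = fit_even_polynomial(energy, sigma, order=4)
```

    In the actual method these coefficients would feed Blackshaw's moment equations; here the fit simply recovers the generating polynomial.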

  5. One-loop corrections to light cone wave functions: The dipole picture DIS cross section

    NASA Astrophysics Data System (ADS)

    Hänninen, H.; Lappi, T.; Paatelainen, R.

    2018-06-01

    We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular, this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with the d-dimensional tensors arising from loop integrals, in a way that can be fully automated. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading-order accuracy. Our results, obtained using the four-dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.

  6. Quantum vacuum interaction between two cosmic strings revisited

    NASA Astrophysics Data System (ADS)

    Muñoz-Castañeda, J. M.; Bordag, M.

    2014-03-01

    We reconsider the quantum vacuum interaction energy between two straight parallel cosmic strings. This problem has been discussed several times, both in an approach treating both strings perturbatively and in one treating only one string perturbatively. Here we point out that a simplifying assumption made by Bordag [Ann. Phys. (Berlin) 47, 93 (1990)] can be justified, and show that, despite the global character of the background, the perturbative approach delivers a correct result. We consider the applicability of the scattering methods developed in the past decade for the Casimir effect to the cosmic string and find them not applicable. We calculate the scattering T-operator on one string. Finally, we consider the vacuum interaction of two strings when each carries a two-dimensional delta function potential.

  7. Methods of InSAR atmosphere correction for volcano activity monitoring

    USGS Publications Warehouse

    Gong, W.; Meyer, F.; Webley, P.W.; Lu, Z.

    2011-01-01

    When a Synthetic Aperture Radar (SAR) signal propagates through the atmosphere on its path to and from the sensor, it is inevitably affected by atmospheric effects. In particular, the applicability and accuracy of Interferometric SAR (InSAR) techniques for volcano monitoring is limited by atmospheric path delays. Therefore, atmospheric correction of interferograms is required to improve the performance of InSAR for detecting volcanic activity, especially in order to advance its ability to detect subtle pre-eruptive changes in deformation dynamics. In this paper, we focus on InSAR tropospheric mitigation methods and their performance in volcano deformation monitoring. Our study areas include Okmok volcano and Unimak Island, located in the eastern Aleutians, AK. We explore two methods to mitigate atmospheric artifacts: numerical weather model simulation and atmospheric filtering using Persistent Scatterer processing. We assess the capability of the proposed methods and investigate their limitations and advantages when applied to determine volcanic processes. © 2011 IEEE.

  8. Attenuation correction for flexible magnetic resonance coils in combined magnetic resonance/positron emission tomography imaging.

    PubMed

    Eldib, Mootaz; Bini, Jason; Calcagno, Claudia; Robson, Philip M; Mani, Venkatesh; Fayad, Zahi A

    2014-02-01

    Attenuation correction for magnetic resonance (MR) coils is a new challenge that came about with the development of combined MR and positron emission tomography (PET) imaging. This task is difficult because such coils are not directly visible on either PET or MR acquisitions with current combined scanners and are therefore not easily localized in the field of view. This issue becomes more evident when trying to localize flexible MR coils (e.g., a cardiac or body matrix coil) that change position and shape from patient to patient and from one imaging session to another. In this study, we proposed a novel method to localize and correct for the attenuation and scatter of a flexible MR cardiac coil, using MR fiducial markers placed on the surface of the coil to allow for accurate registration of a template computed tomography (CT)-based attenuation map. To quantify the attenuation properties of the cardiac coil, a uniform cylindrical water phantom injected with 18F-fluorodeoxyglucose (18F-FDG) was imaged on a sequential MR/PET system with and without the flexible cardiac coil. After establishing the need to correct for the attenuation of the coil, we tested the feasibility of several methods to register a precomputed attenuation map to correct for the attenuation. To accomplish this, MR and CT visible markers were placed on the surface of the flexible cardiac coil. Using only the markers as a driver for registration, the CT image was registered to the reference image through a combination of rigid and deformable registration. The accuracy of several methods was compared for the deformable registration, including B-spline, thin-plate spline, elastic body spline, and volume spline. Finally, we validated our novel approach in both phantom and patient studies. The findings from the phantom experiments indicated that the presence of the coil resulted in a 10% reduction in measured 18F-FDG activity when compared with the phantom-only scan.
Local underestimation reached 22% in regions of interest close to the coil. Various registration methods were tested, and the volume spline was deemed to be the most accurate, as measured by the Dice similarity metric. The results of our phantom experiments showed that the bias in the 18F-FDG quantification introduced by the presence of the coil could be reduced by using our registration method. An overestimation of only 1.9% of the overall activity for the phantom scan with the coil attenuation map was measured when compared with the baseline phantom scan without coil. A local overestimation of less than 3% was observed in the ROI analysis when using the proposed method to correct for the attenuation of the flexible cardiac coil. Quantitative results from the patient study agreed well with the phantom findings. We presented and validated an accurate method to localize and register a CT-based attenuation map to correct for the attenuation and scatter of flexible MR coils. This method may be translated to clinical use to produce quantitatively accurate measurements with the use of flexible MR coils during MR/PET imaging.
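    The marker-driven rigid step of such a registration can be sketched with the standard Kabsch/Procrustes solution for matched fiducial coordinates (the deformable spline stage compared by the authors is not shown; function names and the toy marker set are assumptions):

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch/Procrustes: rotation R and translation t minimizing
    ||R @ src_i + t - dst_i|| over matched marker coordinates (N, 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Round-trip check: recover a known rotation and translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
markers = np.random.default_rng(0).uniform(-50.0, 50.0, size=(6, 3))
moved = markers @ R_true.T + t_true
R, t = rigid_register(markers, moved)
```

    With six or more non-degenerate markers the rigid transform is heavily over-determined, which is what makes the marker-only registration robust.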

  9. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. In the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is especially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to weakly asymmetric scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfer. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weakly asymmetric scattering are generally below 0.02 K when using the HG-Rayleigh phase function. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
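    The two ingredients, and a combined form of the kind described, can be sketched as follows. The `hg_rayleigh` function below uses the Cornette-Shanks-type normalization, which reduces exactly to Rayleigh at g = 0; the precise expression and normalization of Liu and Weng's function should be taken from the paper itself:

```python
import numpy as np

def half_integral(p, mu):
    """(1/2) * trapezoidal integral of p over mu; equals 1 for a
    normalized phase function."""
    return np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu)) / 2.0

def hg(mu, g):
    """Henyey-Greenstein phase function with asymmetry factor g."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def rayleigh(mu):
    """Rayleigh (dipole) phase function."""
    return 0.75 * (1.0 + mu**2)

def hg_rayleigh(mu, g):
    """Combined HG-Rayleigh form (Cornette-Shanks-type normalization,
    shown here for illustration only)."""
    return (1.5 * (1.0 - g**2) / (2.0 + g**2)
            * (1.0 + mu**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

mu = np.linspace(-1.0, 1.0, 20001)   # mu = cos(scattering angle)
norm = half_integral(hg_rayleigh(mu, 0.2), mu)  # normalization check, ~1
```

    The (1 + μ²) factor carries the Rayleigh behavior for small particles, while the HG denominator modulates it toward forward scattering as g grows.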

  10. The Recovery of Optical Quality after Laser Vision Correction

    PubMed Central

    Jung, Hyeong-Gi

    2013-01-01

    Purpose To evaluate the optical quality after laser in situ keratomileusis (LASIK) or serial photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters including the modulation transfer function, Strehl ratio and intraocular scattering were evaluated with a double-pass system. Results This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570

  11. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2017-01-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on the measured, expected, and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time. PMID:29270539
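    One reading of the per-LOR least-squares step can be sketched as follows: with measured counts m_v across rotation views v and a model t_v = expected + scattered, the coefficient minimizing Σ_v (m_v − n·t_v)² is n = Σ_v m_v t_v / Σ_v t_v². This is an illustrative reconstruction under that reading of the abstract; array shapes and names are assumptions:

```python
import numpy as np

def lor_normalization(measured, expected, scattered):
    """Per-LOR least-squares normalization coefficients.
    All arrays are (n_views, n_lors). For each LOR, minimize
    sum_v (measured - n * (expected + scattered))**2, giving
    n = sum_v measured*model / sum_v model**2."""
    model = expected + scattered
    num = (measured * model).sum(axis=0)
    den = (model ** 2).sum(axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# Toy check: recover known per-LOR efficiencies from noiseless data
rng = np.random.default_rng(1)
n_true = rng.uniform(0.8, 1.2, size=100)        # per-LOR efficiencies
expected = rng.uniform(50.0, 100.0, size=(8, 100))  # 8 rotation views
scattered = 0.2 * expected
measured = n_true * (expected + scattered)
n_est = lor_normalization(measured, expected, scattered)
```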

  12. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on the measured, expected, and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time.

  13. Methods for reverberation suppression utilizing dual frequency band imaging.

    PubMed

    Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas

    2013-09-01

    Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but it does not remove all reverberations. Dual frequency band imaging (DBI), utilizing a low-frequency pulse which manipulates propagation of the high-frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two different methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) method and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse between two transmissions with alternating manipulation pressure to extract information about its depth of first scattering. FOCW further utilizes this information to estimate the content of first-order scattering in the received signal. An initial evaluation is presented in which both methods are applied to simulated and in vivo data. Both methods yield substantial, measurable improvements in image contrast. Comparing DCS with FOCW, DCS produces sharper images and retains more detail, while FOCW achieves the best suppression levels and, thus, the highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.
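    A minimal sketch of a delay-corrected subtraction step, under the simplifying assumption of a single global delay estimated by cross-correlation (the published DCS method operates on depth-dependent delays; names and the toy signal are assumptions):

```python
import numpy as np

def estimate_delay(x, y, max_lag):
    """Lag (in samples) that best aligns y with x, by cross-correlation
    over lags in [-max_lag, max_lag]."""
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.dot(x, np.roll(y, k)) for k in lags]
    return int(lags[int(np.argmax(scores))])

def delay_corrected_subtraction(x, y, max_lag=20):
    """Align the two transmissions by the estimated propagation delay and
    subtract; components that carry the delay cancel, while components that
    do not (e.g. near-probe reverberations) leave a residual."""
    k = estimate_delay(x, y, max_lag)
    return x - np.roll(y, k)

# Toy pair: identical pulses, one advanced by 3 samples
t = np.arange(200.0)
y = np.exp(-(((t - 100.0) / 10.0) ** 2)) * np.sin(0.5 * t)
x = np.roll(y, 3)
residual = delay_corrected_subtraction(x, y)
```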

  14. Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4

    2013-11-15

    Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single-angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm⁻¹ and 7/(2π) rad⁻¹ for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). 
    These data indicate that spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for scatter estimates in single and multiple projections. Analysis showed that the use of a compensator reduced the error in estimating the scatter distribution from limited-photon simulations by more than 37% compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low frequency domain. The frequency content is modulated by both object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
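    The SFW-tuned low-pass step can be sketched as Fourier filtering of one detector row with a Butterworth-shaped frequency response (an illustration, not the authors' implementation; the pixel pitch and noise model are assumptions):

```python
import numpy as np

def butterworth_lowpass(profile, pixel_pitch_cm, cutoff_cm_inv, order=4):
    """Fourier filtering of a 1D scatter profile with a Butterworth-shaped
    low-pass response rolling off at `cutoff_cm_inv` (cycles/cm)."""
    f = np.fft.rfftfreq(profile.size, d=pixel_pitch_cm)  # spatial freq, cycles/cm
    H = 1.0 / (1.0 + (f / cutoff_cm_inv) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(profile) * H, n=profile.size)

# Toy example: slowly varying "scatter" profile plus detector noise,
# filtered at the ~0.1 cm^-1 bound quoted above.
x = np.linspace(0.0, 40.0, 401)                 # 40 cm row, 0.1 cm pixel pitch
true = 100.0 * np.exp(-(((x - 20.0) / 15.0) ** 2))
noisy = true + np.random.default_rng(2).normal(0.0, 10.0, x.size)
smooth = butterworth_lowpass(noisy, pixel_pitch_cm=0.1, cutoff_cm_inv=0.1)
```

    Because the scatter signal lives almost entirely below the cutoff, the filter removes most of the MC noise while leaving the profile essentially untouched.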

  15. The effects of compensation for scatter, lead X-rays, and high-energy contamination on tumor detectability and activity estimation in Ga-67 imaging

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Kijewski, M. F.; Maksud, P.; Moore, S. C.

    2003-06-01

    Compton scatter, lead X-rays, and high-energy contamination are major factors affecting image quality in Ga-67 imaging. Scattered photons detected in one photopeak window include photons exiting the patient at energies within the photopeak, as well as higher energy photons which have interacted in the collimator and crystal and lost energy. Furthermore, lead X-rays can be detected in the main energy photopeak (93 keV). We have previously developed two energy-based methods, based on artificial neural networks (ANN) and on a generalized spectral (GS) approach to compensate for scatter, high-energy contamination, and lead X-rays in Ga-67 imaging. For comparison, we considered also the projections that would be acquired in the clinic using the optimal energy windows (WIN) we have reported previously for tumor detection and estimation tasks for the 93, 185, and 300 keV photopeaks. The aim of the present study is to evaluate under realistic conditions the impact of these phenomena and their compensation on tumor detection and estimation tasks in Ga-67 imaging. ANN and GS were compared on the basis of performance of a three-channel Hotelling observer (CHO), in detecting the presence of a spherical tumor of unknown size embedded in an anatomic background as well as on the basis of estimation of tumor activity. Projection datasets of spherical tumors ranging from 2 to 6 cm in diameter, located at several sites in an anthropomorphic torso phantom, were simulated using a Monte Carlo program that modeled all photon interactions in the patient as well as in the collimator and the detector for all decays between 91 and 888 keV. One hundred realistic noise realizations were generated from each very-low-noise simulated projection dataset. The presence of scatter degraded both CHO signal-to-noise ratio (SNR) and estimation accuracy. On average, the presence of scatter led to a 12% reduction in CHO SNR. 
Correcting for scatter further diminished CHO SNR but to a lesser extent with ANN (5% reduction) than with GS (12%). Both scatter corrections improved performance in activity estimation. ANN yielded better precision (1.8% relative standard deviation) than did GS (4%) but greater average bias (5.1% with ANN, 3.6% with GS).
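    A channelized Hotelling observer SNR of the kind used here can be sketched as the Hotelling detectability computed in channel space, d² = Δv̄ᵀ K⁻¹ Δv̄ (a generic CHO sketch with white-noise toy images; the channel model and names are assumptions, not the study's three-channel definition):

```python
import numpy as np

def cho_snr(images_absent, images_present, channels):
    """Channelized Hotelling observer detectability.
    images_*: (n_images, n_pixels) stacks of noise realizations;
    channels: (n_pixels, n_channels) channel templates."""
    v0 = images_absent @ channels          # channel outputs, signal absent
    v1 = images_present @ channels         # channel outputs, signal present
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    K = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(K, dv)))

# Toy study: white-noise backgrounds, signal lying in the channel space
rng = np.random.default_rng(3)
n_pix, n_img = 16, 4000
channels = rng.normal(size=(n_pix, 3))
signal = channels @ np.array([0.4, 0.2, -0.2])
absent = rng.normal(size=(n_img, n_pix))
present = rng.normal(size=(n_img, n_pix)) + signal
snr = cho_snr(absent, present, channels)
```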

  16. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    NASA Astrophysics Data System (ADS)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000 to 4000 lbs and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper-stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45 and 55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with smaller explosions, whose shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow-band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full-wave methods must be used.

  17. Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited

    DOE PAGES

    Farmer, William A.; Friedman, Alex

    2015-06-18

    Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved three ways: with the obliquity factor, Monte Carlo, and Fokker-Planck finite-difference methods. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte Carlo and Fokker-Planck finite-difference approaches do. The obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina distribution becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.

  18. Simultaneous inversion of intrinsic and scattering attenuation parameters incorporating multiple scattering effect

    NASA Astrophysics Data System (ADS)

    Ogiso, M.

    2017-12-01

    Heterogeneous attenuation structure is important not only for understanding the earth structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects the duration of ground motion. Hence, estimating both attenuation parameters will help refine ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of seismic coda, since the coda is sensitive to both intrinsic attenuation and the scattering coefficient. Recently, Takeuchi (2016) successfully calculated the differential envelope when these parameters have fluctuations. We adopted his equations to calculate partial derivatives with respect to these parameters, since we did not need to assume a homogeneous velocity structure. The matrix for inversion of the structural parameters would be too large to solve in a straightforward manner. Hence, we adopted the ART-type Bayesian Reconstruction Method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km intervals in the depth direction. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part. Since the inversion kernel has large sensitivity around sources and stations, resolution in the deeper part is limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the effects of the source and site amplification terms. 
    We consider these issues in order to estimate the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.
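    The ART-type projection step can be sketched with the classic Kaczmarz update (the Bayesian variant of Hirahara (1998) adds prior information on top of this; the toy system is an assumption):

```python
import numpy as np

def art_solve(A, b, lam=1.0, sweeps=50):
    """ART (Kaczmarz) iteration: for each row a_i, project the current model
    onto the hyperplane a_i . x = b_i:
        x <- x + lam * a_i * (b_i - a_i . x) / ||a_i||^2
    cycling through the rows for `sweeps` full passes."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = x + lam * a_i * (b_i - a_i @ x) / (a_i @ a_i)
    return x

# Toy consistent system with known solution x = (1, 2)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 7.0])
x = art_solve(A, b)
```

    For a consistent system the iterates converge to a solution without ever forming or inverting the full matrix, which is why ART-type schemes suit inversions whose full matrix would be too large to solve directly.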

  19. Positioning true coincidences that undergo inter-and intra-crystal scatter for a sub-mm resolution cadmium zinc telluride-based PET system

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Shiva; Chinn, Garry; Levin, Craig S.

    2018-01-01

    The kinematics of Compton scatter can be used to estimate the interaction sequence of inter-crystal scatter interactions in 3D position-sensitive cadmium zinc telluride (CZT) detectors. However, in the case of intra-crystal scatter in a ‘cross-strip’ CZT detector slab, multiple anode and cathode strips may be triggered, creating position ambiguity due to uncertainty in the possible combinations of anode-cathode pairings. As a consequence, methods such as energy-weighted centroiding are not applicable for positioning the interactions. In practice, since the event position is uncertain, these intra-crystal scatter events are discarded. In this work, we studied using Compton kinematics and a ‘direction difference angle’ to provide a method to correctly identify the anode-cathode pair corresponding to the first interaction position in an intra-crystal scatter event. GATE simulation studies of a NEMA NU4 image quality phantom in a small-animal positron emission tomography system under development, composed of 192 CZT crystals of 40 mm × 40 mm × 5 mm, show that 47% of the total number of multiple-interaction photon events (MIPEs) are intra-crystal scatter with a 100 keV lower energy threshold per interaction. The sensitivity of the system increases from 0.6 to 4.10 (using 10 keV as the system lower energy threshold) by including rather than discarding inter- and intra-crystal scatter. The contrast-to-noise ratio (CNR) also increases from 5.81 ± 0.3 to 12.53 ± 0.37. It was shown that a higher energy threshold limits the capability of the system to detect MIPEs and reduces CNR. Results indicate a sensitivity increase (4.1 to 5.88) when raising the lower energy threshold (10 keV to 100 keV) for the case of only two-interaction events. In order to detect MIPEs accurately, a low-noise system capable of a low energy threshold (10 keV) per interaction is desired.
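    The Compton-kinematics test behind such sequencing can be sketched as follows: the deposited energy fixes a kinematic scattering angle through the Compton formula, which is compared with the geometric angle implied by a candidate interaction ordering (an illustrative sketch; function names and the sequencing rule shown are assumptions, not the authors' exact algorithm):

```python
import numpy as np

MEC2 = 511.0  # electron rest energy, keV

def compton_scatter_angle(e_in, e_dep):
    """Kinematic scattering angle for a photon of energy e_in (keV) that
    deposits e_dep in a Compton interaction, from
    E' = E / (1 + (E / mec2) * (1 - cos(theta)))."""
    e_out = e_in - e_dep
    cos_t = 1.0 - MEC2 * (1.0 / e_out - 1.0 / e_in)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def direction_difference(e_in, e_dep, r0, r1, r2):
    """Difference between the geometric scattering angle along r0 -> r1 -> r2
    and the kinematic angle, assuming r1 is the first interaction. Comparing
    both candidate orderings, the smaller value favors that sequence."""
    d01 = (r1 - r0) / np.linalg.norm(r1 - r0)
    d12 = (r2 - r1) / np.linalg.norm(r2 - r1)
    geometric = np.arccos(np.clip(d01 @ d12, -1.0, 1.0))
    return abs(geometric - compton_scatter_angle(e_in, e_dep))

# 511 keV annihilation photon scattering by exactly 60 degrees
e_in = 511.0
e_dep = e_in - e_in / 1.5          # deposited energy for a 60-degree scatter
theta = compton_scatter_angle(e_in, e_dep)
r0 = np.array([0.0, 0.0, 0.0])     # source side
r1 = np.array([0.0, 0.0, 10.0])    # first interaction
r2 = r1 + np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])
mismatch = direction_difference(e_in, e_dep, r0, r1, r2)
```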

  20. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
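    The absorption Ångström exponent referred to above follows from the power law b_abs(λ) ∝ λ^(−åABS), so two wavelengths suffice to retrieve it (a generic two-wavelength sketch; the numbers are illustrative):

```python
import numpy as np

def absorption_angstrom_exponent(b_abs_1, lam_1, b_abs_2, lam_2):
    """aABS from absorption coefficients at two wavelengths, assuming the
    power law b_abs(lam) proportional to lam**(-aABS)."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(lam_1 / lam_2)

# For b_abs proportional to 1/lam (the exponent commonly assumed for pure
# black carbon), the retrieval returns 1 by construction.
aae = absorption_angstrom_exponent(2.0, 470.0, 2.0 * 470.0 / 637.0, 637.0)
```

    This sensitivity of åABS to the absorption coefficients at each wavelength is why an incorrect scattering compensation in the Aethalometer data propagates directly into brown carbon estimates.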

  1. Limitations of the method of complex basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumel, R.T.; Crocker, M.C.; Nuttall, J.

    1975-08-01

    The method of complex basis functions proposed by Rescigno and Reinhardt is applied to the calculation of the amplitude in a model problem which can be treated analytically. It is found, for an important class of potentials, including some of infinite range and also the square well, that the method does not provide a converging sequence of approximations. However, in some cases, approximations of relatively low order might be close to the correct result. The method is also applied to S-wave e-H elastic scattering above the ionization threshold, and spurious "convergence" to the wrong result is found. A procedure which might overcome the difficulties of the method is proposed.

  2. New methods for engineering site characterization using reflection and surface wave seismic survey

    NASA Astrophysics Data System (ADS)

    Chaiprakaikeow, Susit

    This study presents two new seismic testing methods for engineering applications: a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion, caused by the dynamic response of the shaker-ground interface, was corrected using a reference geophone. Attenuated high-frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah. The testing used two horizontal Vibroseis sources and four receivers for spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of the reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections. The results showed distinct primary lobes in the corrected cross-correlated signals up to 150 ft offset. More consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method to determine the shear wave velocity profile at a site. It is a time-domain method, as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency-domain method. TFASW uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Tests at three different sites in Utah showed good agreement between the dispersion curves measured with the TFASW and SASW methods. The advantage of the TFASW method is that its dispersion curves had less scatter at long wavelengths, as a result of the wider bandwidth used in those tests.
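The cross-correlation step used to identify wave arrivals can be sketched as follows. This is a minimal, noise-free illustration; the function and variable names are hypothetical, not from the dissertation.

```python
import numpy as np

def arrival_time_by_xcorr(source, received, dt):
    """Estimate wave travel time as the lag (in seconds) that maximizes
    the cross-correlation between the source sweep and the receiver record.

    In np.correlate(received, source, mode='full'), output index k
    corresponds to a lag of k - (len(source) - 1) samples.
    """
    xcorr = np.correlate(received, source, mode="full")
    lag = int(np.argmax(xcorr)) - (len(source) - 1)
    return lag * dt
```

For a swept-sine source, the autocorrelation is sharply peaked at zero lag, so a delayed copy of the sweep produces a cross-correlation maximum at the travel-time lag.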

  3. The EDGE-CALIFA survey: validating stellar dynamical mass models with CO kinematics

    NASA Astrophysics Data System (ADS)

    Leung, Gigi Y. C.; Leaman, Ryan; van de Ven, Glenn; Lyubenova, Mariya; Zhu, Ling; Bolatto, Alberto D.; Falcón-Barroso, Jesus; Blitz, Leo; Dannerbauer, Helmut; Fisher, David B.; Levy, Rebecca C.; Sanchez, Sebastian F.; Utomo, Dyas; Vogel, Stuart; Wong, Tony; Ziegler, Bodo

    2018-06-01

    Deriving circular velocities of galaxies from stellar kinematics can provide an estimate of their total dynamical mass, provided a contribution from the velocity dispersion of the stars is taken into account. Molecular gas (e.g. CO), on the other hand, is a dynamically cold tracer and hence provides an independent circular velocity estimate without needing such a correction. In this paper, we test the underlying assumptions of three commonly used dynamical models, deriving circular velocities from stellar kinematics of 54 galaxies (S0-Sd) that have observations of both stellar kinematics from the Calar Alto Legacy Integral Field Area (CALIFA) survey and CO kinematics from the Extragalactic Database for Galaxy Evolution (EDGE) survey. We test the asymmetric drift correction (ADC) method, as well as Jeans anisotropic modelling (JAM) and Schwarzschild models. The three methods each reproduce the CO circular velocity at 1Re to within 10 per cent. All three methods show larger scatter (up to 20 per cent) in the inner regions (R < 0.4Re), which may be due to an increasingly spherical mass distribution (not captured by the thin-disc assumption in ADC) or to non-constant stellar M/L ratios (for both the JAM and Schwarzschild models). This homogeneous analysis of stellar and gaseous kinematics validates that all three models can recover Mdyn at 1Re to better than 20 per cent, but users should be mindful of scatter in the inner regions where some assumptions may break down.
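For reference, the asymmetric drift correction is commonly derived from the radial Jeans equation for a thin disc; in standard notation (not taken from this record), with ν the tracer density and σ_R, σ_φ the radial and azimuthal dispersions:

```latex
v_c^2 \;=\; \overline{v}_\phi^{\,2} \;+\; \sigma_R^2\left[\,\frac{\sigma_\phi^2}{\sigma_R^2} \;-\; 1 \;-\; \frac{\partial \ln\!\left(\nu\,\sigma_R^2\right)}{\partial \ln R}\,\right]
```

That is, the mean rotation of the dynamically hot stellar tracer is boosted by dispersion-dependent terms to recover the circular velocity that the cold CO gas traces directly; a cross term involving vertical motions is usually neglected under the thin-disc assumption.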

  4. Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers

    NASA Astrophysics Data System (ADS)

    Lien, J.; Zebker, H. A.

    2013-12-01

    Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, InSAR can produce high-accuracy, wide-coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar returns to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large-baseline data. Here we present a new physically based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network.
Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.

  5. Application of ALOS and Envisat Data in Improving Multi-Temporal InSAR Methods for Monitoring Damavand Volcano and Landslide Deformation in the Center of Alborz Mountains, North Iran

    NASA Astrophysics Data System (ADS)

    Vajedian, S.; Motagh, M.; Nilfouroushan, F.

    2013-09-01

    InSAR's capacity to detect slow deformation over terrain areas is limited by temporal and geometric decorrelation. Multitemporal InSAR techniques such as Persistent Scatterer InSAR (PS-InSAR) and Small Baseline Subset (SBAS) have recently been developed to compensate for these decorrelation problems. Geometric decorrelation in mountainous areas, especially for Envisat images, makes the phase unwrapping process difficult. To improve the unwrapping, we first modified the phase filtering to make the wrapped phase image as smooth as possible. In addition, a modified unwrapping method was developed that removes possible orbital and tropospheric effects: topographic correction is done within three-dimensional unwrapping, while orbital and tropospheric corrections are done after the unwrapping process. To evaluate the effectiveness of our improved method, we tested the proposed algorithm on Envisat and ALOS datasets and compared our results with a recently developed PS software package (StaMPS). In addition, we used GPS observations to evaluate the modified method. The results indicate that our method improves the estimated deformation significantly.

  6. SUMMARY OF PROGRESS REPORT ON NUCLEAR PHYSICS DURING YEARLY PERIOD ENDING IN FEBRUARY 1963

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1963-10-31

    Aspects of nucleon-nucleon interactions are being examined. Tests of the mathematical form of the one-pion exchange interaction and of the validity of charge independence were improved. No real negation of the applicability of either the mathematical form of the one-pion exchange potential or of charge independence has been found. Work on p-p and p-n scattering analysis centered on improving the accuracy of data and on rewriting IBM 704 programs for the IBM 709 and 7090. The new programs provide separate treatments of errors in quantities such as differential cross sections and in relative values at the same incident energy. Improvements in the least squares method of adjustment to data were also made. Difficulties in correcting for the effects of nuclear magnetic moments were examined, and formulas and numbers giving the corrections applicable to the one-pion exchange group were worked out and incorporated in new machine programs. A partial analysis of data for n-p scattering in the 2 to 3 Bev energy region shows the existence of a sharp peak in the angular distribution of recoil protons from elastic collisions. Interpretation of work on the interplay of multipion resonance effects with each other and with the direct pion interaction was begun. Systematization of the spin rotation effect and aspects of relativistic effects in the Coulomb interaction in p-p scattering was carried out. Treatment of N-N spin-orbit and central field interactions caused by vector meson exchange was improved. Programming of the IBM 709 for computation of the first order nucleon-nucleus optical potential and nucleon-nucleus observables in first Born approximation at forward angles from N-N phase parameters was performed. An IBM 709 program for computing pi-N scattering observables from phase shifts was also written. Investigation of photodisintegration of deuterons involved correcting the formulas used in calculating higher order magnetic multipole effects for some omitted effects and rewriting the 704 programs for the IBM 709. Progress in a scattering matrix approach to deuteron photodisintegration was also made. In the theory of nucleon-transfer reactions, systematization of the connection between completely quantum mechanical and semiclassical treatments, including analysis of approximations required in the former to obtain the latter, was attempted. Presence of interference effects between the waves was partially confirmed by experiment. Calculations of third order effects in Coulomb excitation were carried out, and programming for more extensive work was begun. Work on interpretation of heavy ion elastic scattering involved additional calculations of the interaction using nucleon-nucleus scattering, with some calculations using an ingoing wave boundary condition instead of an optical model treatment of the nuclear interior. Twenty-one papers published or submitted for publication during the period are listed. (D.C.W.)

  7. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    NASA Astrophysics Data System (ADS)

    Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert

    2018-04-01

    Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  8. Electroweak radiative corrections for polarized Moller scattering at the future 11 GeV JLab experiment

    DOE PAGES

    Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...

    2010-11-19

    We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions, free of non-physical parameters, and show them to be valid for fast yet accurate estimations.

  9. Hadron mass corrections in semi-inclusive deep-inelastic scattering

    DOE PAGES

    Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...

    2015-09-24

    The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.

  10. A Phenomenological Determination of the Pion-Nucleon Scattering Lengths from Pionic Hydrogen

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Wycech, S.

    A model-independent expression for the electromagnetic corrections to a phenomenological hadronic pion-nucleon (πN) scattering length a_h, extracted from pionic hydrogen, is obtained. In a non-relativistic approach and using an extended charge distribution, these corrections are derived up to terms of order α² log α in the limit of a short-range hadronic interaction. We infer a_h(π⁻p) = 0.0870(5) m_π⁻¹, which gives for the πNN coupling, through the GMO relation, g²(π±pn)/(4π) = 14.04(17).

  11. Fine-tuning satellite-based rainfall estimates

    NASA Astrophysics Data System (ADS)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered, so the use of satellite estimates is advantageous: satellites can provide data in places where ground observations are not present. In general, however, satellite estimates contain bias, since they are products of algorithms that transform the sensor response into rainfall values. Another source of bias is the limited number of ground observations used by the algorithms as the reference in determining the rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that have not been used before by the algorithm. The bias correction was performed using a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to produce a spatial composite of these originally scattered statistics, so that a reference data point could be provided at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
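The procedure described above might be sketched as follows. This is a hedged illustration: the paper's mean/standard-deviation form of Quantile Mapping is reduced here to a moment rescaling, and all function names and parameter values are hypothetical.

```python
import numpy as np

def idw(station_xy, station_values, target_xy, power=2.0):
    """Inverse Distance Weighting: interpolate scattered station
    statistics (e.g. mean or std of observed rainfall) to a grid point."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0.0):                      # target coincides with a station
        return station_values[np.argmin(d)]
    w = d ** -power
    return float(np.sum(w * station_values) / np.sum(w))

def quantile_mapping(satellite, obs_mean, obs_std):
    """Moment-based quantile mapping: rescale the satellite series so its
    mean and standard deviation match the (IDW-interpolated) observed ones."""
    sat_mean, sat_std = satellite.mean(), satellite.std()
    return obs_mean + (satellite - sat_mean) * (obs_std / sat_std)
```

In practice one would run `idw` twice per grid cell (once for the observed mean, once for the standard deviation) and feed the results into `quantile_mapping` for that cell's satellite time series.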

  12. Processing techniques for global land 1-km AVHRR data

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Steinwand, Daniel R.; Wivell, Charles E.; Hollaren, Douglas M.; Meyer, David

    1993-01-01

    The U.S. Geological Survey's (USGS) Earth Resources Observation Systems (EROS) Data Center (EDC), in cooperation with several international science organizations, has developed techniques for processing daily Advanced Very High Resolution Radiometer (AVHRR) 1-km data of the entire global land surface. These techniques include orbital stitching, geometric rectification, radiometric calibration, and atmospheric correction. An orbital stitching algorithm was developed to combine consecutive observations acquired along an orbit by ground receiving stations into contiguous half-orbital segments. The geometric rectification process uses an AVHRR satellite model that contains modules for forward mapping, forward terrain correction, and inverse mapping with terrain correction. The correction is accomplished by using hydrologic features (coastlines and lakes) from the Digital Chart of the World. These features are rasterized into the satellite projection and are matched to the AVHRR imagery using binary edge correlation techniques. The resulting coefficients are related to six attitude correction parameters: roll, roll rate, pitch, pitch rate, yaw, and altitude. The image can then be precision corrected to a variety of map projections and user-selected image frames. Because the AVHRR lacks onboard calibration for the optical wavelengths, a series of time-variant calibration coefficients, derived from vicarious calibration methods, is used to model the degradation profile of the instruments. Reducing atmospheric effects on AVHRR data is also important. A method has been developed that removes the effects of molecular scattering and absorption from clear-sky observations, using climatological measurements of ozone. Other methods to remove the effects of water vapor and aerosols are being investigated.

  13. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach offers neither the flexibility of simultaneously correcting aberrations for all imaging depths nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram, rather than in the beam, of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single-scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.

  14. Characterization of Artifacts Introduced by the Empirical Volcano-Scan Atmospheric Correction Commonly Applied to CRISM and OMEGA Near-Infrared Spectra

    NASA Technical Reports Server (NTRS)

    Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.

    2014-01-01

    The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
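The Beer-Lambert construction behind a volcano-scan correction can be illustrated as below. This is a simplified sketch, assuming identical surface spectra at the two elevations and atmospheric path lengths expressed in airmass units; it is not the actual CRISM/OMEGA pipeline, and all names are hypothetical.

```python
import numpy as np

def transmission_spectrum(i_base, i_summit, dL):
    """Derive a one-airmass transmission spectrum from two observations
    of (assumed) identical surface material whose atmospheric path
    lengths differ by dL airmasses: Beer-Lambert gives
    i_base / i_summit = T**dL, hence T = (i_base / i_summit)**(1/dL)."""
    return (i_base / i_summit) ** (1.0 / dL)

def volcano_scan_correct(spectrum, transmission, path_airmass):
    """Remove atmospheric gas bands by dividing out the transmission
    scaled to the observation's path length."""
    return spectrum / transmission ** path_airmass
```

The bowl-shaped artifact discussed in the abstract arises precisely because the real CO2 absorption does not obey this simple power-law scaling with column density.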

  15. Characterization of artifacts introduced by the empirical volcano-scan atmospheric correction commonly applied to CRISM and OMEGA near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.

    2016-05-01

    The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Medin, P; Jiang, S

    Purpose: In-treatment tumor localization is critical for the management of tumor motion in lung cancer radiotherapy. Conventional tumor-tracking methods using kV or MV x-ray projections have limited contrast. To facilitate real-time, marker-less and low-dose in-treatment tumor tracking, we propose a novel scheme using Compton scatter imaging. This study reports Monte Carlo (MC) simulations on this scheme for the purpose of proof-of-principle. Methods: A slit x-ray beam along the patient superior-inferior (SI) direction is directed to the patient, intersecting the patient lung at a 2D plane containing the major part of the tumor motion trajectory. X-ray photons are scattered due to the Compton effect from this plane; they are spatially collimated by, e.g., a pinhole on one side of the plane and then captured by a detector behind it. The captured image, after correcting for x-ray attenuation and scatter angle variation, reflects the electron density, which allows visualization of the instantaneous anatomy on this plane. We performed MC studies on a phantom and a patient case as an initial test of this proposed method. Results: In the phantom case, the contrast resolution calculated using tumor/lung as foreground/background for kV fluoroscopy, cone-beam CT, and scatter imaging was 0.0625, 0.6993, and 0.5290, respectively. In the patient case, tumor motion can be clearly observed in the scatter images. Compared to fluoroscopy, scatter imaging also significantly reduced imaging dose because of its narrower beam design. Conclusion: MC simulation studies demonstrated the potential of the proposed scheme in terms of capturing the instantaneous anatomy of a patient on a 2D plane. Clear visualization of the tumor will probably facilitate ‘marker-less’ and ‘real-time’ tumor tracking with low imaging dose. NIH (1R01CA154747-01, 1R21CA178787-01A1 and 1R21EB017978-01A1)

  17. NEMA NU 4-2008 validation and applications of the PET-SORTEO Monte Carlo simulations platform for the geometry of the Inveon PET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.

    2013-10-01

    Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time, and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that by combining a realistic simulated biological study ([11C]Raclopride here) involving different condition groups, simulation also allows one to assess and optimize the data correction, reconstruction and data processing flow as a whole, specifically for each biological study, which is our ultimate intent.

  18. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
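The separation of elastic-scattering and fluorescence contributions described above can be caricatured with a linear least-squares unmixing. This is only an illustrative sketch under strong assumptions (a linear basis model, known fluorescein emission shape); the actual algorithm fits a physical reflectance model, and all names here are hypothetical.

```python
import numpy as np

def unmix_spectrum(measured, reflectance_basis, fluor_basis):
    """Decompose a measured white-light spectrum into a sum of
    elastic-reflectance basis spectra plus a fluorophore emission shape,
    by ordinary least squares. Returns the fitted coefficients, with the
    fluorophore amplitude last."""
    A = np.column_stack(list(reflectance_basis) + [fluor_basis])
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs
```

With well-conditioned basis spectra, the fluorophore amplitude can then be discarded, leaving the reflectance coefficients from which absorption and scattering parameters would be estimated.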

  19. Acuros CTS: A fast, linear Boltzmann transport equation solver for computed tomography scatter - Part I: Core algorithms and validation.

    PubMed

    Maslowski, Alexander; Wang, Adam; Sun, Mingshan; Wareing, Todd; Davis, Ian; Star-Lack, Josh

    2018-05-01

    To describe Acuros® CTS, a new software tool for rapidly and accurately estimating scatter in x-ray projection images by deterministically solving the linear Boltzmann transport equation (LBTE). The LBTE describes the behavior of particles as they interact with an object across spatial, energy, and directional (propagation) domains. Acuros CTS deterministically solves the LBTE by modeling photon transport associated with an x-ray projection in three main steps: (a) Ray tracing photons from the x-ray source into the object where they experience their first scattering event and form scattering sources. (b) Propagating photons from their first scattering sources across the object in all directions to form second scattering sources, then repeating this process until all high-order scattering sources are computed using the source iteration method. (c) Ray-tracing photons from scattering sources within the object to the detector, accounting for the detector's energy and anti-scatter grid responses. To make this process computationally tractable, a combination of analytical and discrete methods is applied. The three domains are discretized using the Linear Discontinuous Finite Elements, Multigroup, and Discrete Ordinates methods, respectively, which confer the ability to maintain the accuracy of a continuous solution. Furthermore, through the implementation in CUDA, we sought to exploit the parallel computing capabilities of graphics processing units (GPUs) to achieve the speeds required for clinical utilization. Acuros CTS was validated against Geant4 Monte Carlo simulations using two digital phantoms: (a) a water phantom containing lung, air, and bone inserts (WLAB phantom) and (b) a pelvis phantom derived from a clinical CT dataset. For these studies, we modeled the TrueBeam® (Varian Medical Systems, Palo Alto, CA) kV imaging system with a source energy of 125 kVp.
The imager comprised a 600 μm-thick cesium iodide (CsI) scintillator and a 10:1 one-dimensional anti-scatter grid. For the WLAB studies, the full-fan geometry without a bowtie filter was used (with and without the anti-scatter grid). For the pelvis phantom studies, a half-fan geometry with a bowtie filter was used (with the anti-scatter grid). Scattered and primary photon fluences and energies deposited in the detector were recorded. The Acuros CTS and Monte Carlo results demonstrated excellent agreement. For the WLAB studies, the average percent difference between the Monte Carlo- and Acuros-generated scattered photon fluences at the face of the detector was -0.7%. After including the detector response, the average percent differences between the Monte Carlo- and Acuros-generated scatter fractions (SF) were -0.1% without the grid and 0.6% with the grid. For the digital pelvis simulation, the Monte Carlo- and Acuros-generated SFs agreed to within 0.1% on average, despite scatter-to-primary ratios (SPRs) as high as 5.5. The Acuros CTS computation time for each scatter image was ~1 s using a single GPU. Acuros CTS enables fast and accurate calculation of scatter images by deterministically solving the LBTE, thus offering a computationally attractive alternative to Monte Carlo methods. Part II describes the application of Acuros CTS to scatter correction of CBCT scans on the TrueBeam system. © 2018 American Association of Physicists in Medicine.
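The source-iteration step (b) above can be sketched as a Neumann series: each scattering order is obtained by applying a scattering operator to the previous order's source, and the orders are summed until they vanish. The small matrix below is a toy stand-in for the real discretized space/energy/angle operator, purely to illustrate the iteration structure:

```python
import numpy as np

# Toy illustration of source iteration: total scatter source is the
# converged series s1 + K s1 + K^2 s1 + ... where s1 is the first-scatter
# source (step a) and K is a scattering/transport operator with spectral
# radius < 1. K here is a made-up matrix, not the Acuros CTS operator.
rng = np.random.default_rng(0)
n = 6
K = 0.1 * rng.random((n, n))           # toy scattering operator
s1 = rng.random(n)                     # first-scatter source

s_total = s1.copy()
order = s1.copy()
for _ in range(100):                   # iterate until high orders vanish
    order = K @ order                  # next scattering order
    s_total += order
    if np.linalg.norm(order) < 1e-12:
        break

# The converged series satisfies (I - K) s_total = s1
residual = np.linalg.norm((np.eye(n) - K) @ s_total - s1)
print(residual < 1e-10)
```

The same fixed-point structure underlies deterministic LBTE solvers in general; convergence speed depends on how strongly scattering couples the phase-space cells.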

  20. WE-EF-207-03: Design and Optimization of a CBCT Head Scanner for Detection of Acute Intracranial Hemorrhage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Sisniega, A; Zbijewski, W

Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point-of-care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low-contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of a system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ∼0.6 mm was sufficient for the spatial resolution requirements of ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. 
Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.
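A task-based detectability index of the kind used as the objective function above can be sketched, for a prewhitening observer, as a frequency-domain sum of task contrast over noise. All spectra below are illustrative stand-ins, not the published head-CBCT model:

```python
import numpy as np

# Minimal sketch of a task-based detectability index for a prewhitening
# observer: d'^2 = sum( MTF^2 * |W_task|^2 / NPS ) over spatial frequency.
# MTF, NPS, and the task function are toy curves for illustration only.
f = np.linspace(0.01, 1.0, 200)             # spatial frequency (cycles/mm)
mtf = np.exp(-2.0 * f)                      # toy system MTF
nps = 1e-6 * (0.2 + f)                      # toy noise power spectrum
w_task = np.exp(-(f / 0.3) ** 2)            # low-frequency task (ICH blob)

df = f[1] - f[0]
d_prime_sq = np.sum((mtf ** 2) * (w_task ** 2) / nps) * df
d_prime = np.sqrt(d_prime_sq)
print(d_prime > 0)
```

Optimizing system parameters then amounts to maximizing d' over geometry, kVp, detector, and grid choices, which is the role the model plays in the study.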

  1. Multiplexed aberration measurement for deep tissue imaging in vivo

    PubMed Central

    Wang, Chen; Liu, Rui; Milkie, Daniel E.; Sun, Wenzhi; Tan, Zhongchao; Kerlin, Aaron; Chen, Tsai-Wen; Kim, Douglas S.; Ji, Na

    2014-01-01

    We describe a multiplexed aberration measurement method that modulates the intensity or phase of light rays at multiple pupil segments in parallel to determine their phase gradients. Applicable to fluorescent-protein-labeled structures of arbitrary complexity, it allows us to obtain diffraction-limited resolution in various samples in vivo. For the strongly scattering mouse brain, a single aberration correction improves structural and functional imaging of fine neuronal processes over a large imaging volume. PMID:25128976

  2. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.
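The orientation-correction step in such a pipeline is commonly implemented from the second central moments of the segmented finger mask; the sketch below uses that standard moment-based estimate on a synthetic mask (the paper's actual algorithm may differ):

```python
import numpy as np

def finger_orientation_deg(mask):
    """Estimate the in-plane rotation of a binary finger mask from its
    second central moments (a common orientation-correction step)."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return np.degrees(theta)

# Synthetic "finger": a long bar rotated by a known angle of 10 degrees.
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
angle = np.radians(10.0)
d = -(xx - w / 2) * np.sin(angle) + (yy - h / 2) * np.cos(angle)
along = (xx - w / 2) * np.cos(angle) + (yy - h / 2) * np.sin(angle)
mask = (np.abs(d) < 15) & (np.abs(along) < 80)

est = finger_orientation_deg(mask)
print(round(est, 1))
```

Rotating the image by the negative of the estimated angle then normalizes the finger to a canonical orientation before ROI detection.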

  3. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  4. Comptonization of X-rays by low-temperature electrons. [photon wavelength redistribution in cosmic sources

    NASA Technical Reports Server (NTRS)

    Illarionov, A.; Kallman, T.; Mccray, R.; Ross, R.

    1979-01-01

A method is described for calculating the spectrum that results from the Compton scattering of a monochromatic source of X-rays by low-temperature electrons, both for initial-value relaxation problems and for steady-state spatial diffusion problems. The method gives an exact solution of the initial-value problem for evolution of the spectrum in an infinite homogeneous medium if Klein-Nishina corrections to the Thomson cross section are neglected. This, together with approximate solutions for problems in which Klein-Nishina corrections are significant and/or spatial diffusion occurs, shows spectral structure near the original photon wavelength that may be used to infer physical conditions in cosmic X-ray sources. Explicit results, shown for examples of time relaxation in an infinite medium and spatial diffusion through a uniform sphere, are compared with results obtained by Monte Carlo calculations and by solving the appropriate Fokker-Planck equation.
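The recoil part of this wavelength redistribution can be illustrated with a toy Monte Carlo (not the paper's exact solution): for cold, stationary electrons each scattering shifts the photon wavelength by λ_C(1 − cosθ), and for any phase function symmetric in cosθ (isotropic used here; the Thomson 1 + cos²θ pattern is also symmetric) the mean shift after N scatterings is N Compton wavelengths:

```python
import numpy as np

# Toy Monte Carlo of the Compton recoil shift accumulated over several
# scatterings of an X-ray line on cold (stationary) electrons.
rng = np.random.default_rng(1)
lambda_c = 0.0243                      # Compton wavelength, Angstroms
n_photons, n_scat = 20000, 5
cos_theta = rng.uniform(-1.0, 1.0, size=(n_photons, n_scat))
shifts = lambda_c * (1.0 - cos_theta).sum(axis=1)  # per-photon total shift
print(round(shifts.mean() / lambda_c, 2))          # close to n_scat
```

The spread of `shifts` around the mean is what produces the spectral structure near the original line that the paper analyzes exactly.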

  5. Simultaneous Tc-99m and I-123 dual-radionuclide imaging with a solid-state detector-based brain-SPECT system and energy-based scatter correction.

    PubMed

    Takeuchi, Wataru; Suzuki, Atsuro; Shiga, Tohru; Kubo, Naoki; Morimoto, Yuichi; Ueno, Yuichiro; Kobashi, Keiji; Umegaki, Kikuo; Tamaki, Nagara

    2016-12-01

A brain single-photon emission computed tomography (SPECT) system using cadmium telluride (CdTe) solid-state detectors was previously developed. This CdTe-SPECT system is suitable for simultaneous dual-radionuclide imaging due to its fine energy resolution (6.6 %). However, the problems of down-scatter and the low-energy tail arising from the spectral characteristics of a pixelated solid-state detector must be addressed. The objective of this work was to develop a system for simultaneous Tc-99m and I-123 brain studies and evaluate its accuracy. A scatter correction method using five energy windows (FiveEWs) was developed. The windows are Tc-lower, Tc-main, a shared sub-window serving as Tc-upper and I-lower, I-main, and I-upper. This FiveEW method uses pre-measured responses for primary gamma rays from each radionuclide to compensate for the overestimation of scatter that occurs with the triple-energy window method. Two phantom experiments and a healthy volunteer experiment were conducted using the CdTe-SPECT system. A cylindrical phantom and a six-compartment phantom, with five different mixtures of Tc-99m and I-123 and one cold compartment, were scanned. The quantitative accuracy was evaluated using 18 regions of interest for each phantom. In the volunteer study, five healthy volunteers were injected with Tc-99m human serum albumin diethylene triamine pentaacetic acid (HSA-D) and scanned (single acquisition). They were then injected with I-123 N-isopropyl-4-iodoamphetamine hydrochloride (IMP) and scanned again (dual acquisition). The counts of the Tc-99m images for the single and dual acquisitions were compared. In the cylindrical phantom experiments, the percentage difference (PD) between the single and dual acquisitions was 5.7 ± 4.0 % (mean ± standard deviation). In the six-compartment phantom experiment, the PDs between measured and injected activity for Tc-99m and I-123 were 14.4 ± 11.0 and 2.3 ± 1.8 %, respectively. 
In the volunteer study, the PD between the single and dual acquisitions was 4.5 ± 3.4 %. This CdTe-SPECT system using the FiveEW method can provide accurate simultaneous dual-radionuclide imaging. A solid-state detector SPECT system using the FiveEW method will permit quantitative simultaneous Tc-99m and I-123 study to become clinically applicable.
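The triple-energy-window (TEW) estimate that the FiveEW method refines can be sketched in a few lines: scatter in the main window is approximated by a trapezoid spanned by the counts in two narrow flanking sub-windows. The window widths and counts below are illustrative, not the system's actual settings:

```python
# Sketch of the standard triple-energy-window (TEW) scatter estimate.
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Estimated scatter counts in the main window (Ogawa-style TEW):
    trapezoid between the count densities of the two flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

c_main = 1000.0          # total counts in the main window (illustrative)
scatter = tew_scatter(c_lower=80.0, c_upper=20.0,
                      w_lower=4.0, w_upper=4.0, w_main=20.0)
primary = max(c_main - scatter, 0.0)
print(scatter, primary)   # 250.0 750.0
```

Because the flanking windows also contain some primary counts from the detector's low-energy tail, plain TEW overestimates scatter; the FiveEW method's pre-measured primary responses compensate for exactly this.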

  6. Clinical Evaluation of 68Ga-PSMA-II and 68Ga-RM2 PET Images Reconstructed With an Improved Scatter Correction Algorithm.

    PubMed

    Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H

    2018-06-06

Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for the PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstructions were also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and most improved algorithm, image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

Most high performance computing systems being deployed currently and envisioned for the future are based on heavy parallelism across many computational nodes and many concurrent cores. These heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even more significant in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move toward on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated against the continuous analytic free gas model.
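The rejection-sampling idea can be sketched with a toy kernel: draw a candidate outgoing energy from a simple proposal and accept it in proportion to the kernel value, so only the kernel evaluation (not a stored table of its inverse CDF) is needed at run time. The "kernel" below is a made-up stand-in for the thermal S(α,β) kernel:

```python
import numpy as np

# Minimal rejection-sampling sketch: accept a uniform candidate energy e
# with probability kernel(e) / bound. The toy kernel peaks near thermal
# energies (~0.025 eV); it is NOT the graphite S(alpha, beta) kernel.
rng = np.random.default_rng(2)

def kernel(e):                       # toy, unnormalized target density
    return e * np.exp(-e / 0.025)

e_max = 1.0                          # proposal: uniform on (0, e_max]
bound = 0.025 / np.e                 # max of kernel, attained at e = 0.025
samples = []
while len(samples) < 20000:
    e = rng.uniform(0.0, e_max)
    if rng.uniform(0.0, bound) < kernel(e):
        samples.append(e)            # accepted outgoing energy
samples = np.array(samples)
print(round(samples.mean(), 3))
```

The price of avoiding stored tables is the rejection loop's efficiency, which is why a tight proposal/bound matters for a production implementation.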

  8. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    NASA Astrophysics Data System (ADS)

Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao Xiang

    2018-05-01

Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.

  9. Fluorescence lifetime measurements in heterogeneous scattering medium

    NASA Astrophysics Data System (ADS)

    Nishimura, Goro; Awasthi, Kamlesh; Furukawa, Daisuke

    2016-07-01

Fluorescence lifetimes in heterogeneous multiple-light-scattering systems are analyzed by an algorithm that does not require solving the diffusion or radiative transfer equations. The algorithm assumes that the optical properties of the medium are the same in the excitation and emission wavelength regions. If this assumption holds and the fluorophore is a single species, the fluorescence lifetime can be determined from measurements of the temporal point-spread function of the excitation light and the fluorescence at two different concentrations of the fluorophore. The method depends neither on the heterogeneity of the optical properties of the medium nor on the excitation-detection geometry for a sample of arbitrary shape. The algorithm was validated with indocyanine green fluorescence in phantom measurements and demonstrated in an in vivo measurement.
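A key reason the medium's heterogeneity can cancel is that the measured fluorescence TPSF is the excitation TPSF convolved with the exponential decay, and means add under convolution, so the lifetime appears as a shift of the mean arrival time regardless of the (shared) propagation kernel. A toy illustration with made-up TPSFs, not the paper's full algorithm:

```python
import numpy as np

# Fluorescence TPSF = excitation TPSF (*) exp(-t/tau)/tau, so
# <t>_fluorescence - <t>_excitation = tau, whatever the (common)
# scattering kernel looks like. All curves below are synthetic.
dt = 0.01                                     # ns
t = np.arange(0, 50, dt)
tau = 0.56                                    # ns, an ICG-like lifetime
tpsf_exc = t ** 2 * np.exp(-t / 0.8)          # toy diffuse excitation TPSF
decay = np.exp(-t / tau) / tau                # fluorophore impulse response
tpsf_fl = np.convolve(tpsf_exc, decay)[: t.size] * dt

mean_time = lambda y: np.sum(t * y) / np.sum(y)
tau_est = mean_time(tpsf_fl) - mean_time(tpsf_exc)
print(round(tau_est, 2))
```

In practice the excitation TPSF is not directly measurable inside tissue, which is why the paper uses two fluorophore concentrations instead; the convolution identity above is the underlying reason the approach can work.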

  10. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors ([Formula: see text]) in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation. 
In contrast, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
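The gamma index test used for the dose comparisons can be sketched in 1-D: for each reference point, gamma is the minimum over evaluated points of the combined dose-difference and distance-to-agreement metric, and a point passes when gamma ≤ 1. Profiles and criteria below are illustrative, not the paper's data:

```python
import numpy as np

# Minimal 1-D gamma index: gamma_i = min_j sqrt( ((D_eval_j - D_ref_i)/
# dose_tol)^2 + ((x_j - x_i)/dta)^2 ). Brute force; fine for short profiles.
def gamma_1d(x, d_ref, d_eval, dose_tol=0.03, dta=1.0):
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dd = (d_eval - di) / dose_tol
        dx = (x - xi) / dta
        g[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return g

x = np.linspace(-20.0, 20.0, 401)                 # mm
ref = 1.0 / (1.0 + np.exp(np.abs(x) - 10.0))      # toy circular-field profile
shifted = 1.0 / (1.0 + np.exp(np.abs(x - 0.3) - 10.0))  # 0.3 mm shift

g = gamma_1d(x, ref, shifted, dose_tol=0.03, dta=1.0)   # 3%/1 mm criteria
pass_rate = (g <= 1.0).mean()
print(round(pass_rate, 2))
```

A sub-millimeter shift passes a 3%/1 mm criterion everywhere, which is why tighter criteria are needed to expose small-field correction-factor effects.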

  11. Theory of lidar method for measurement of the modulation transfer function of water layers.

    PubMed

    Dolin, Lev S

    2013-01-10

We develop a method to evaluate the modulation transfer function (MTF) of a water layer from the characteristics of the lidar signal backscattered by the water volume. We propose several designs of a lidar system for remote measurement of the MTF and a procedure to determine the optical properties of water from the measured MTF. We discuss a laser system for sea-bottom imaging that accounts for the influence of the water slab on the image structure and allows for correction of image distortions caused by light scattering in water. © 2013 Optical Society of America

  12. Magnetic Field Effects on the Fluctuation Corrections to the Sound Attenuation in Liquid ^3He

    NASA Astrophysics Data System (ADS)

    Zhao, Erhai; Sauls, James A.

    2002-03-01

We investigated the effect of a magnetic field on the excess sound attenuation due to order parameter fluctuations in bulk liquid ^3He and liquid ^3He in aerogel for temperatures just above the corresponding superfluid transition temperatures. The fluctuation corrections to the acoustic attenuation are sensitive to magnetic-field pair-breaking, aerogel scattering, and the spin correlations of fluctuating pairs. Calculations of the corrections to the zero sound velocity, δc_0, and attenuation, δα_0, are carried out in the ladder approximation for the singular part of the quasiparticle-quasiparticle scattering amplitude (V. Samalam and J. W. Serene, Phys. Rev. Lett. 41, 497 (1978)) as a function of frequency, temperature, impurity scattering, and magnetic field strength. The magnetic field suppresses the fluctuation contributions to the attenuation of zero sound. With increasing magnetic field, the temperature dependence of δα_0(t) crosses over from δα_0(t) ~ √t to δα_0(t) ~ t, where t = T/T_c - 1 is the reduced temperature.

  13. Author Correction: Induced unconventional superconductivity on the surface states of Bi2Te3 topological insulator.

    PubMed

    Charpentier, Sophie; Galletti, Luca; Kunakova, Gunta; Arpaia, Riccardo; Song, Yuxin; Baghdadi, Reza; Wang, Shu Min; Kalaboukhov, Alexei; Olsson, Eva; Tafuri, Francesco; Golubev, Dmitry; Linder, Jacob; Bauch, Thilo; Lombardi, Floriana

    2018-01-30

    The original version of this Article contained an error in Fig. 6b. In the top scattering process, while the positioning of both arrows was correct, the colours were switched: the first arrow was red and the second arrow was blue, rather than the correct order of blue then red.

  14. The integration of improved Monte Carlo Compton scattering algorithms into the Integrated TIGER Series.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Quirk, Thomas J., IV

    2004-08-01

The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
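The free-electron Klein-Nishina distribution that serves as the baseline here can be sampled by simple rejection; the sketch below does exactly that (omitting the incoherent scattering function and the impulse-approximation Doppler broadening the abstract discusses):

```python
import numpy as np

# Rejection sampling of the scattering angle cosine from the free-electron
# Klein-Nishina cross section; k = E / (m_e c^2).
rng = np.random.default_rng(3)

def klein_nishina(mu, k):
    """Unnormalized KN angular distribution in mu = cos(theta)."""
    r = 1.0 / (1.0 + k * (1.0 - mu))   # E'/E from Compton kinematics
    return r * r * (r + 1.0 / r - (1.0 - mu * mu))

def sample_mu(k, n):
    mu_grid = np.linspace(-1.0, 1.0, 2001)
    bound = klein_nishina(mu_grid, k).max() * 1.001  # rejection envelope
    out = []
    while len(out) < n:
        mu = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, bound) < klein_nishina(mu, k):
            out.append(mu)
    return np.array(out)

k = 0.5e6 / 511e3                       # 500 keV photon
mu = sample_mu(k, 20000)
e_scat = 0.5 / (1.0 + k * (1.0 - mu))   # scattered photon energy, MeV
print(round(e_scat.min(), 3))           # near the backscatter limit
```

Production codes replace uniform-envelope rejection with composition methods for efficiency, but the kinematic limits and forward peaking at these energies are already visible in this sketch.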

  15. Stationary table CT dosimetry and anomalous scanner-reported values of CTDI{sub vol}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Robert L., E-mail: rdixon@wfubmc.edu; Boone, John M.

    2014-01-15

Purpose: Anomalous scanner-reported values of CTDI{sub vol} for stationary phantom/table protocols (having elevated values of CTDI{sub vol} over 300% higher than the actual dose to the phantom) have been observed, which are well beyond the typical accuracy expected of CTDI{sub vol} as a phantom dose. Recognition of these outliers as “bad data” is important to users of CT dose index tracking systems (e.g., the ACR DIR), and a method for recognition and correction is provided. Methods: Rigorous methods and equations are presented which describe the dose distributions for stationary-table CT. A comparison with formulae for scanner-reported values of CTDI{sub vol} clearly identifies the source of these anomalies. Results: For the stationary table, use of the CTDI{sub 100} formula (applicable to a moving phantom only) overestimates the dose due to extra scatter and also includes an overbeaming correction, both of which are nonexistent when the phantom (or patient) is held stationary. The reported DLP remains robust for the stationary phantom. Conclusions: The CTDI paradigm does not apply in the case of a stationary phantom, and simpler nonintegral equations suffice. A method of correction of the currently reported CTDI{sub vol} using the approach-to-equilibrium formula H(a) and an overbeaming correction factor serves to scale the reported CTDI{sub vol} values to more accurate levels for stationary-table CT, as well as serving as an indicator in the detection of “bad data.”

  16. First Order Statistics of Speckle around a Scatterer Volume Density Edge and Edge Detection in Ultrasound Images.

    NASA Astrophysics Data System (ADS)

    Li, Yue

    1990-01-01

    Ultrasonic imaging plays an important role in medical imaging. But the images exhibit a granular structure, commonly known as speckle. The speckle tends to mask the presence of low-contrast lesions and reduces the ability of a human observer to resolve fine details. Our interest in this research is to examine the problem of edge detection and come up with methods for improving the visualization of organ boundaries and tissue inhomogeneity edges. An edge in an image can be formed either by acoustic impedance change or by scatterer volume density change (or both). The echo produced from these two kinds of edges has different properties. In this work, it has been proved that the echo from a scatterer volume density edge is the Hilbert transform of the echo from a rough impedance boundary (except for a constant) under certain conditions. This result can be used for choosing the correct signal to transmit to optimize the performance of edge detectors and characterizing an edge. The signal to noise ratio of the echo produced by a scatterer volume density edge is also obtained. It is found that: (1) By transmitting a signal with high bandwidth ratio and low center frequency, one can obtain a higher signal to noise ratio. (2) For large area edges, the farther the transducer is from the edge, the larger is the signal to noise ratio. But for small area edges, the nearer the transducer is to the edge, the larger is the signal to noise ratio. These results enable us to maximize the signal to noise ratio by adjusting these parameters. (3) The signal to noise ratio is not only related to the ratio of scatterer volume densities at the edge, but also related to the absolute value of scatterer volume densities. Some of these results have been proved through simulation and experiment. Different edge detection methods have been used to detect simulated scatterer volume density edges to compare their performance. 
A so-called interlaced array method has been developed for speckle reduction in images formed by the synthetic aperture focusing technique, and experiments have been performed to evaluate its performance.
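The stated relation between the two echo types (the density-edge echo is the Hilbert transform of the impedance-boundary echo) can be demonstrated with an FFT-based Hilbert transform, here applied to a toy single-tone "echo":

```python
import numpy as np

# FFT-based Hilbert transform of a real 1-D signal: multiply positive
# frequencies by -i and negative frequencies by +i. H[cos] = sin, i.e.
# a 90-degree phase shift of every frequency component.
def hilbert_transform(x):
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[1 : n // 2] = 1.0          # positive frequencies
    h[n // 2 + 1 :] = -1.0       # negative frequencies
    return np.real(np.fft.ifft(-1j * h * X))

t = np.arange(1024) / 1024.0
echo_impedance = np.cos(2 * np.pi * 32 * t)       # toy boundary echo
echo_density = hilbert_transform(echo_impedance)  # 90-degree shifted copy
err = np.abs(echo_density - np.sin(2 * np.pi * 32 * t)).max()
print(err < 1e-9)
```

This is why the transmitted pulse shape can be chosen to make one edge type produce a symmetric echo and the other an antisymmetric one, aiding edge characterization.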

  17. XUV and x-ray elastic scattering of attosecond electromagnetic pulses on atoms

    NASA Astrophysics Data System (ADS)

    Rosmej, F. B.; Astapenko, V. A.; Lisitsa, V. S.

    2017-12-01

Elastic scattering of electromagnetic pulses on atoms in the XUV and soft x-ray ranges is considered for ultra-short pulses. Inclusion of the retardation term, the non-dipole interaction, and an efficient scattering tensor approximation allowed the scattering probability to be studied as a function of pulse duration for different carrier frequencies. Numerical calculations carried out for Mg, Al, and Fe atoms demonstrate that the scattering probability is a highly nonlinear function of the pulse duration and has extrema for pulse carrier frequencies in the vicinity of the resonance-like features of the polarization charge spectrum. Closed expressions for the non-dipole correction and the angular dependence of the scattered radiation are obtained.

  18. Laser pulsing in linear Compton scattering

    DOE PAGES

    Krafft, G. A.; Johnson, E.; Deitrick, K.; ...

    2016-12-16

Previous work on calculating energy spectra from Compton scattering events has either neglected the pulsed structure of the incident laser beam or has calculated these effects in an approximate way subject to criticism. In this paper, this problem is reconsidered within a linear plane wave model for the incident laser beam. By performing the proper Lorentz transformation of the Klein-Nishina scattering cross section, a spectrum calculation can be created which allows the electron beam energy spread and emittance effects on the spectrum to be accurately calculated, essentially by summing over the emission of each individual electron. Such an approach has the obvious advantage that it is easily integrated with a particle distribution generated by particle tracking, allowing precise calculations of spectra for realistic particle distributions in collision. The method is used to predict the energy spectrum of radiation passing through an aperture for the proposed Old Dominion University inverse Compton source. In addition, as discussed in the body of the paper, many of the results allow easy scaling estimates of the expected spectrum. A misconception in the literature on Compton scattering of circularly polarized beams is corrected and recorded.
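The scaling estimates mentioned above rest on the standard inverse-Compton kinematics: for a head-on collision, a laser photon of energy E_L scattered at small lab angle θ by an electron of Lorentz factor γ emerges at roughly E' = 4γ²E_L / (1 + γ²θ² + 4γE_L/m_ec²). The electron and laser parameters below are illustrative, not the actual design values of the proposed source:

```python
# Back-of-envelope inverse-Compton energy for a head-on collision.
ME_C2_EV = 511e3  # electron rest energy, eV

def scattered_energy_ev(gamma, e_laser_ev, theta_rad):
    recoil = 4.0 * gamma * e_laser_ev / ME_C2_EV       # small at these energies
    return 4.0 * gamma ** 2 * e_laser_ev / (1.0 + (gamma * theta_rad) ** 2 + recoil)

gamma = 25e6 / 511e3            # ~25 MeV electrons (illustrative)
e_laser = 1.24                  # eV, ~1 um laser wavelength
on_axis = scattered_energy_ev(gamma, e_laser, 0.0)
print(round(on_axis / 1e3, 1))  # on-axis energy in keV
```

The 1/(1 + γ²θ²) falloff is what makes the aperture in the paper act as an energy filter: accepting angles out to θ = 1/γ roughly halves the minimum transmitted photon energy.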

  19. Mie and Debye scattering in dusty plasmas

    PubMed

    Guerra; Mendonca

    2000-07-01

    We calculate the total field scattered by a charged sphere immersed in a plasma using a unified treatment that includes the usual Mie scattering and the scattering by the Debye cloud around the particle. This is accomplished by use of the Dyadic Green function to determine the field radiated by the electrons of the Debye cloud, which is then obtained as a series of spherical vector wave functions similar to that of the Mie field. Thus we treat the Debye-Mie field as a whole and study its properties. The main results of this study are (1) the Mie (Debye) field dominates at small (large) wavelengths and in the Rayleigh limit the Debye field is constant; (2) the total cross section has an interference term between the Debye and Mie fields, important in some regimes; (3) this term is negative for negative charge of the grain, implying a total cross section smaller than previously thought; (4) a method is proposed to determine the charge of the grain (divided by a certain suppression factor) and the Debye length of the plasma; (5) a correction to the dispersion relation of an electromagnetic wave propagating in a plasma is derived.

  20. Atmospheric Multiple Scattering Effects on GLAS Altimetry. Part 2; Analysis of Expected Errors in Antarctic Altitude Measurements

    NASA Technical Reports Server (NTRS)

    Mahesh, Ashwin; Spinhirne, James D.; Duda, David P.; Eloranta, Edwin W.; Starr, David O'C (Technical Monitor)

    2001-01-01

    The altimetry bias in GLAS (Geoscience Laser Altimeter System) or other laser altimeters resulting from atmospheric multiple scattering is studied in relationship to current knowledge of cloud properties over the Antarctic Plateau. Estimates of seasonal and interannual changes in the bias are presented. Results show the bias in altitude from multiple scattering in clouds would be a significant error source without correction. The selective use of low optical depth clouds or cloudfree observations, as well as improved analysis of the return pulse such as by the Gaussian method used here, are necessary to minimize the surface altitude errors. The magnitude of the bias is affected by variations in cloud height, cloud effective particle size and optical depth. Interannual variations in these properties as well as in cloud cover fraction could lead to significant year-to-year variations in the altitude bias. Although cloud-free observations reduce biases in surface elevation measurements from space, over Antarctica these may often include near-surface blowing snow, also a source of scattering-induced delay. With careful selection and analysis of data, laser altimetry specifications can be met.

  1. Effects on RCS of a perfect electromagnetic conductor sphere in the presence of anisotropic plasma layer

    NASA Astrophysics Data System (ADS)

    Ghaffar, A.; Hussan, M. M.; Illahi, A.; Alkanhal, Majeed A. S.; Ur Rehman, Sajjad; Naz, M. Y.

    2018-01-01

    The effects on the RCS of a perfect electromagnetic conductor (PEMC) sphere coated with an anisotropic plasma layer are studied in this paper. The incident, scattered, and transmitted electromagnetic fields are expanded in terms of spherical vector wave functions using the extended classical theory of scattering. Co- and cross-polarized scattered field coefficients are obtained at the free space-anisotropic plasma interface and at the anisotropic plasma-PEMC sphere core by the scattering-matrices method. The presented analytical expressions are general for any perfectly conducting sphere (PMC, PEC, or PEMC) with general anisotropic/isotropic material coatings, including plasmas and metamaterials. The behavior of the forward and backscattered radar cross sections of the PEMC sphere with variation of the magnetic field strength, incident frequency, plasma density, and effective collision frequency is investigated for the co-polarized and cross-polarized fields. It is also observed from the obtained results that an anisotropic plasma layer on a PEMC sphere shows reciprocal behavior compared to an isotropic plasma layer. Comparisons of the numerical results of the presented analytical expressions with available results for some special cases confirm the correctness of the analysis.

  2. Angular distribution of elastic scattering induced by 17F on medium-mass target nuclei at energies near the Coulomb barrier

    NASA Astrophysics Data System (ADS)

    Zhang, G. L.; Zhang, G. X.; Lin, C. J.; Lubian, J.; Rangel, J.; Paes, B.; Ferreira, J. L.; Zhang, H. Q.; Qu, W. W.; Jia, H. M.; Yang, L.; Ma, N. R.; Sun, L. J.; Wang, D. X.; Zheng, L.; Liu, X. X.; Chu, X. T.; Yang, J. C.; Wang, J. S.; Xu, S. W.; Ma, P.; Ma, J. B.; Jin, S. L.; Bai, Z.; Huang, M. R.; Zang, H. L.; Yang, B.; Liu, Y.

    2018-04-01

    The elastic scattering angular distributions were measured for 50- and 59-MeV 17F radioactive ion beams on an 89Y target. The aim of this work is to study the effect of the breakup of the proton-halo projectile on the elastic scattering angular distribution. The experimental data were analyzed by means of the optical model with the double-folding São Paulo potential for both the real and imaginary parts. The theoretical calculations reproduced the experimental data reasonably well, showing that the data-analysis method is sound. In order to clarify the difference observed at large angles in the 59-MeV data, Continuum-Discretized Coupled-Channels (CDCC) calculations were performed to account for the breakup coupling effect. It is found that the experimental data show the Coulomb rainbow peak and that the effect of the coupling to the continuum states is not very significant, producing only a small hindrance of the Coulomb rainbow peak and a very small enhancement of the elastic scattering angular distribution at backward angles, suggesting that the multipole response of neutron-halo projectiles is stronger than that of proton-halo systems.

  3. The Weak Charge of the Proton. A Search For Physics Beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacEwan, Scott J.

    2015-05-01

    The Qweak experiment, which completed running in May of 2012 at Jefferson Laboratory, has measured the parity-violating asymmetry in elastic electron-proton scattering at four-momentum transfer Q² = 0.025 (GeV/c)² in order to provide the first direct measurement of the proton's weak charge, Q_W^p. The Standard Model makes firm predictions for the weak charge; deviations from the predicted value would provide strong evidence of new physics beyond the Standard Model. Using an 89% polarized electron beam at 145 μA scattering from a 34.4 cm long liquid-hydrogen target, scattered electrons were detected using an array of eight fused-silica detectors placed symmetrically about the beam axis. The parity-violating asymmetry was then measured by reversing the helicity of the incoming electrons and measuring the normalized difference in rate seen in the detectors. The low Q² enables a theoretically clean measurement; the higher-order hadronic corrections are constrained using previous parity-violating electron scattering world data. The experimental method is discussed, with recent results constituting 4% of the total data and projections of the proposed uncertainties on the full data set.
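
    The helicity-reversal measurement reduces to a simple ratio of yields. A hedged sketch with toy, ppm-scale numbers (illustrative, not Qweak data); the polarization correction shown is only the leading-order one:

```python
def pv_asymmetry(yield_plus, yield_minus):
    """Raw asymmetry between helicity states: A = (Y+ - Y-) / (Y+ + Y-)."""
    return (yield_plus - yield_minus) / (yield_plus + yield_minus)

def polarization_corrected(a_raw, beam_polarization):
    """Leading-order correction for partial beam polarization."""
    return a_raw / beam_polarization

# Toy normalized detector yields differing at the ppm level
a_raw = pv_asymmetry(1.0000005, 0.9999995)   # 5e-7, i.e. 0.5 ppm
a_phys = polarization_corrected(a_raw, 0.89) # toy 89% polarization
```

    In the real experiment the raw asymmetry is additionally corrected for backgrounds and kinematic acceptance before the weak charge is extracted.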

  4. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in a Compton camera, which consists of a scattering detector and an absorbing detector. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing these variations. The variance-reduction technique (VRT), a normalization method, was studied for a Compton camera. For the VRT, Compton list-mode data of a planar uniform source of 140 keV was generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation-maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
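
    The product-of-factors normalization can be illustrated with a toy version of the variance-reduction idea: per-detector efficiencies are estimated from row and column sums of a uniform-source acquisition, which averages over many pairs and is therefore far less noisy than inverting each pair count directly. This is a hedged sketch of the general principle, not the paper's exact VRT.

```python
import numpy as np

rng = np.random.default_rng(1)
eff_scatter = rng.uniform(0.8, 1.2, 32)   # scatterer-detector element efficiencies
eff_absorb = rng.uniform(0.8, 1.2, 48)    # absorber-detector element efficiencies
counts = rng.poisson(np.outer(eff_scatter, eff_absorb) * 200)  # uniform-source data

# Fan-sum-style efficiency estimates (low variance: each uses a whole row/column)
est_s = counts.sum(axis=1) / counts.sum(axis=1).mean()
est_a = counts.sum(axis=0) / counts.sum(axis=0).mean()
norm_coeff = 1.0 / np.outer(est_s, est_a)  # multiplies the measured projection data
```

    Each pair coefficient is built from two well-averaged factors, so its variance is far smaller than that of the raw pair count it replaces.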

  5. Multiple Volume Scattering in Random Media and Periodic Structures with Applications in Microwave Remote Sensing and Wave Functional Materials

    NASA Astrophysics Data System (ADS)

    Tan, Shurun

    The objective of my research is two-fold: to study wave scattering phenomena in dense volumetric random media and in periodic wave functional materials. For the first part, the goal is to use microwave remote sensing to monitor water resources and global climate change. Towards this goal, I study the microwave scattering behavior of snow and ice sheets. For snowpack scattering, I have extended the traditional dense media radiative transfer (DMRT) approach to include cyclical corrections that give rise to backscattering enhancements, enabling the theory to model combined active and passive observations of snowpack using the same set of physical parameters. Besides DMRT, a fully coherent approach is also developed by solving Maxwell's equations directly over the entire snowpack including a bottom half space. This new approach produces consistent scattering and emission results, and demonstrates backscattering enhancements and coherent layer effects. The birefringence in anisotropic snow layers is also analyzed by numerically solving Maxwell's equations directly. The effects of rapid density fluctuations on polar ice sheet emission in the 0.5-2.0 GHz band are examined using both fully coherent and partially coherent layered-media emission theories, which agree with each other and are distinct from incoherent approaches. For the second part, the goal is to develop integral-equation-based methods to solve wave scattering in periodic structures such as photonic crystals and metamaterials that can be used for broadband simulations. Building on the modal expansion of the periodic Green's function, we have developed the method of broadband Green's function with low wavenumber extraction (BBGFL), where a low-wavenumber component is extracted, leaving a non-singular, fast-converging remainder with simple wavenumber dependence.
We've applied the technique to simulate band diagrams and modal solutions of periodic structures, and to construct broadband Green's functions including periodic scatterers.

  6. SU-E-CAMPUS-I-04: Automatic Skin-Dose Mapping for An Angiographic System with a Region-Of-Interest, High-Resolution Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, S; Rana, V; Setlur Nagesh, S

    2014-06-15

    Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region of interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separate dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter, and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components. The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size-specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management and to facilitate peak skin-dose reduction through dose spreading.
Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD0158402 and R01EB002873.

  7. Calculation method for laser radar cross sections of rotationally symmetric targets.

    PubMed

    Cao, Yunhua; Du, Yongzhi; Bai, Lu; Wu, Zhensen; Li, Haiying; Li, Yanhui

    2017-07-01

    The laser radar cross section (LRCS) is a key parameter in the study of target scattering characteristics. In this paper, a practical method for calculating the LRCSs of rotationally symmetric targets is presented. Monostatic LRCSs for four kinds of rotationally symmetric targets (cone, rotating ellipsoid, super-ellipsoid, and blunt cone) are calculated, and the results verify the feasibility of the method. Comparison with results from the triangular-patch method confirms the correctness of the proposed method and highlights several of its advantages: for instance, it requires neither geometric modeling nor patch discretization. The method uses a generatrix model and a double integral, and its calculation is concise and accurate. This work provides a theoretical basis for the rapid calculation of the LRCS of common basic targets.

  8. CORRECTING FOR INTERPLANETARY SCATTERING IN VELOCITY DISPERSION ANALYSIS OF SOLAR ENERGETIC PARTICLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.

    2015-06-10

    To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using Velocity Dispersion Analysis (VDA), where a linear fit of the observed event onset times at 1 AU to the inverse velocities of the SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in interplanetary space. We use Monte Carlo test-particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above the pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as to SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
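
    The baseline VDA described above is a straight-line fit of onset time against inverse speed, t_onset = t_inj + L/v: the intercept is the injection time and the slope the apparent path length. A minimal sketch with synthetic, noise-free data (the paper's point is precisely that real, scattered data bias this fit):

```python
import numpy as np

C_AU_PER_MIN = 0.1202  # speed of light in AU per minute (1 AU = 499 light-seconds)

def vda_fit(onset_minutes, v_over_c):
    """Linear VDA fit; returns (injection time [min], path length [AU])."""
    inv_v = 1.0 / (np.asarray(v_over_c) * C_AU_PER_MIN)  # minutes per AU
    slope, intercept = np.polyfit(inv_v, np.asarray(onset_minutes), 1)
    return intercept, slope

# Synthetic event: injection at t = 0, common path length of 1.2 AU
v = np.array([0.3, 0.5, 0.7, 0.9])   # particle speeds in units of c
onsets = 1.2 / (v * C_AU_PER_MIN)    # onset times in minutes
t_inj, path_len = vda_fit(onsets, v)
```

    On ideal data the fit recovers the inputs exactly; the scattering delay studied in the paper shifts the observable onsets and hence both fitted parameters.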

  9. Reflectance and fluorescence spectroscopies in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Finlay, Jarod C.

    In vivo fluorescence spectroscopy during photodynamic therapy (PDT) has the potential to provide information on the distribution and degradation of sensitizers, the formation of fluorescent photoproducts and changes in tissue autofluorescence induced by photodynamic treatment. Reflectance spectroscopy allows quantification of light absorption and scattering in tissue. We present the results of several related studies of fluorescence and reflectance spectroscopy and their applications to photodynamic dosimetry. First, we develop and test an empirical method for the correction of the distortions imposed on fluorescence spectra by absorption and scattering in turbid media. We characterize the irradiance dependence of the in vivo photobleaching of three sensitizers, protoporphyrin IX (PpIX), Photofrin and mTHPC, in a rat skin model. The photobleaching and photoproduct formation of PpIX exhibit irradiance dependence consistent with singlet oxygen (1O2)-mediated bleaching. The bleaching of mTHPC occurs in two phases, only one of which is consistent with a 1O2-mediated mechanism. Photofrin's bleaching is independent of irradiance, although its photoproduct formation is not. This can be explained by a mixed-mechanism bleaching model. Second, we develop an algorithm for the determination of tissue optical properties using diffuse reflectance spectra measured at a single source-detector separation and demonstrate the recovery of the hemoglobin oxygen dissociation curve from tissue-simulating phantoms containing human erythrocytes. This method is then used to investigate the heterogeneity of the carbogen-inhalation-induced oxygenation response in murine tumors. We find that while the response varies among animals and within each tumor, the majority of tumors exhibit an increase in blood oxygenation during carbogen breathing.
We present a forward-adjoint model of fluorescence propagation that uses the optical property information acquired from reflectance spectroscopy to obtain the undistorted fluorescence spectrum over a wide range of optical properties. Finally, we investigate the ability of the forward-adjoint theory to extract undistorted fluorescence and optical property information simultaneously from a single measured fluorescence spectrum. This method can recover the hemoglobin oxygen dissociation curve in tissue-simulating phantoms with an accuracy comparable to that of reflectance-based methods while correcting distortions in the fluorescence over a wide range of absorption and scattering coefficients.

  10. Vibrational effects in x-ray absorption and resonant inelastic x-ray scattering using a semiclassical scheme

    NASA Astrophysics Data System (ADS)

    Ljungberg, Mathias P.

    2017-12-01

    A method is presented for describing vibrational effects in x-ray absorption spectroscopy and resonant inelastic x-ray scattering (RIXS) using a combination of the classical Franck-Condon (FC) approximation and classical trajectories run on the core-excited state. The formulation of RIXS is an extension of the semiclassical Kramers-Heisenberg formalism of Ljungberg et al. [Phys. Rev. B 82, 245115 (2010), 10.1103/PhysRevB.82.245115] to the resonant case, retaining approximately the same computational cost. To overcome difficulties with connecting the absorption and emission processes in RIXS, the classical FC approximation is used for the absorption, which is seen to work well provided that a zero-point-energy correction is included. In the case of core-excited states with dissociative character, the method is capable of closely reproducing the main features for one-dimensional test systems, compared to the quantum-mechanical formulation. Due to the good accuracy combined with the relatively low computational cost, the method has great potential of being used for complex systems with many degrees of freedom, such as liquids and surface adsorbates.

  11. The possibility of applying spectral redundancy in DWDM systems on existing long-distance FOCLs for increasing the data transmission rate and decreasing nonlinear effects and double Rayleigh scattering without changes in the communication channel

    NASA Astrophysics Data System (ADS)

    Nekuchaev, A. O.; Shuteev, S. A.

    2014-04-01

    A new method of data transmission in DWDM systems over existing long-distance fiber-optic communication lines is proposed. The existing method uses, e.g., 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission rate of 32 bits/cycle. In the new method, at every instant of a 1/16 cycle one of 124 wavelengths is transmitted, each with a duration of one cycle (at any time instant, no more than 16 obligatory different wavelengths) and a capacity of 4 bits, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows a coding gain of about 6 dB to be achieved by detecting and removing erasures and errors simultaneously.

  12. Extension of the International Atomic Energy Agency phantom study in image quantification: results of multicentre evaluation in Croatia.

    PubMed

    Grošev, Darko; Gregov, Marin; Wolfl, Miroslava Radić; Krstonošić, Branislav; Debeljuh, Dea Dundara

    2018-06-07

    To make quantitative methods of nuclear medicine more available, four centres in Croatia participated in a national intercomparison study, following the materials and methods used in the previous international study organized by the International Atomic Energy Agency (IAEA). The study task was to calculate the activities of four 133Ba sources (T1/2 = 10.54 years; Eγ = 356 keV) using planar and single-photon emission computed tomography (SPECT) or SPECT/CT acquisitions of the sources inside a water-filled cylindrical phantom. The sources had previously been calibrated by the US National Institute of Standards and Technology (NIST). The triple-energy-window method was used for scatter correction. Planar studies were corrected for attenuation (AC) using the conjugate-view method. For SPECT/CT studies, data from X-ray computed tomography were used for attenuation correction (CT-AC), whereas for SPECT-only acquisitions the Chang-AC method was applied. Using the lessons learned from the IAEA study, data were acquired according to the harmonized data-acquisition protocol, and the acquired images were then processed using centralized data analysis. The accuracy of the activity quantification was evaluated as the ratio R between the calculated activity and the value obtained from NIST. For planar studies, R = 1.06±0.08; for the SPECT/CT study using CT-AC, R = 1.00±0.08; and for Chang-AC, R = 0.89±0.12. The results are in accordance with those obtained within the larger IAEA study and confirm that the SPECT/CT method is the most appropriate for accurate activity quantification.
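
    The triple-energy-window correction has a simple closed form: counts in two narrow windows flanking the photopeak define a trapezoid that approximates the scatter under the peak. A hedged sketch; the window widths and counts below are illustrative, not the study's acquisition settings.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Trapezoidal scatter estimate under the photopeak window (all widths in keV)."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_corrected_counts(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Photopeak counts with the TEW scatter estimate subtracted (floored at 0)."""
    scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
    return max(c_peak - scatter, 0.0)

# Illustrative 356 keV setup: 20% photopeak window (~71 keV), 7 keV side windows
corrected = tew_corrected_counts(10000.0, 800.0, 300.0, 7.0, 7.0, 71.0)
```

    The correction is applied pixel-by-pixel (or projection-by-projection) before reconstruction, which is why it combines cleanly with the attenuation corrections described above.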

  13. A rapid detection method of Escherichia coli by surface enhanced Raman scattering

    NASA Astrophysics Data System (ADS)

    Tao, Feifei; Peng, Yankun; Xu, Tianfeng

    2015-05-01

    Conventional microbiological detection and enumeration methods are time-consuming, labor-intensive, and provide only retrospective information. The objective of the present work is to study the capability of surface-enhanced Raman scattering (SERS) to detect Escherichia coli (E. coli) using the presented silver colloidal substrate. The results showed that the adaptive iteratively reweighted penalized least squares (airPLS) algorithm could effectively remove the fluorescent background from the original Raman spectra, and Raman characteristic peaks at 558, 682, 726, 1128, 1210 and 1328 cm-1 could be observed stably in the baseline-corrected SERS spectra at all studied bacterial concentrations. The detection limit of SERS was determined to be as low as 0.73 log CFU/ml for E. coli with the prepared silver colloidal substrate. The quantitative prediction results using the intensity values of the characteristic peaks were not good, with correlation coefficients for the calibration set and cross-validation set of 0.99 and 0.64, respectively.
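
    Baseline removal of the kind airPLS performs can be illustrated with its simpler relative, asymmetric least squares (AsLS): a smoothness-penalized fit in which points above the running baseline (the Raman peaks) are strongly down-weighted. This is a hedged sketch of AsLS on synthetic data, not the airPLS variant used in the study.

```python
import numpy as np

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline: lam sets smoothness, p the asymmetry."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d2 = np.diff(np.eye(n), 2, axis=0)      # second-difference operator
    penalty = lam * d2.T @ d2
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + penalty, w * y)
        w = np.where(y > z, p, 1.0 - p)     # down-weight points above the baseline
    return z

# Synthetic spectrum: sloping fluorescent background plus one sharp Raman peak
x = np.linspace(0.0, 1.0, 200)
spectrum = (2.0 + 3.0 * x) + 5.0 * np.exp(-0.5 * ((x - 0.5) / 0.01) ** 2)
corrected = spectrum - asls_baseline(spectrum)
```

    airPLS differs mainly in how the weights are updated from iteration to iteration, but the smoothness-penalty structure is the same.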

  14. Propagation of Gaussian wave packets in complex media and application to fracture characterization

    NASA Astrophysics Data System (ADS)

    Ding, Yinshuai; Zheng, Yingcai; Zhou, Hua-Wei; Howell, Michael; Hu, Hao; Zhang, Yu

    2017-08-01

    Knowledge of subsurface fracture networks is critical in probing tectonic stress states and the flow of fluids in reservoirs containing fractures. We propose to characterize fractures using scattered seismic data, based on the theory of local plane-wave multiple scattering in a fractured medium. We construct a localized directional wave packet using point sources on the surface and propagate it toward the targeted subsurface fractures. The wave packet behaves as a local plane wave when interacting with the fractures. The interaction produces multiple scattering of the wave packet that eventually travels up to the surface receivers. The propagation direction and amplitude of the multiply scattered wave can be used to characterize fracture density, orientation, and compliance. Two key aspects in this characterization process are the spatial localization and directionality of the wave packet. Here we first show the physical behaviour of a new localized wave, known as the Gaussian wave packet (GWP), by examining its analytical solution originally formulated for a homogeneous medium. We then use a numerical finite-difference time-domain (FDTD) method to study its propagation behaviour in heterogeneous media. We find that a GWP can remain localized and directional in space even over a large propagation distance in heterogeneous media. We then propose a method to decompose the recorded seismic wavefield into GWPs based on the reverse-time concept. This method enables us to create virtual seismic records from field shot gathers, as if the source were an incident GWP. Finally, we demonstrate the feasibility of using GWPs for fracture characterization with three numerical examples. For a medium containing fractures, we can reliably invert for the local parameters of multiple fracture sets.
Differing from conventional seismic imaging such as migration methods, our fracture characterization method is less sensitive to errors in the background velocity model. For a layered medium containing fractures, our method can correctly recover the fracture density even with an inaccurate velocity model.

  15. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Circulatory fibre-optic memory loop with a built-in service channel

    NASA Astrophysics Data System (ADS)

    Pilipovich, V. A.; Esman, A. K.; Goncharenko, I. A.; Posed'ko, V. S.; Solonovich, I. F.

    1995-10-01

    A method for increasing the information capacity and enhancing the reliability of information storage in a dynamic fibre-optic memory is proposed. An additional built-in channel with counterpropagating circulation of signals is provided for this purpose. This additional channel can be used to transmit both information and service signals, such as address words, clock signals, correcting sequences, etc. The possibility of compensating the attenuation of an information signal by stimulated Raman scattering is considered.

  16. Probing nuclear effects using single-transverse kinematic imbalance with MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, X. -G.; Betancourt, M.

    2016-08-15

    Kinematic imbalance of the final-state particles in the plane transverse to the neutrino direction provides a sensitive probe of nuclear effects. In this contribution, we report the MINERvA measurement of the single-transverse kinematic imbalance in neutrino charged-current quasielastic-like events on CH targets. To improve the momentum measurements of the final-state particles, we develop a method to select elastically scattering contained (ESC) protons and a general procedure to correct the transverse momentum scales.
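
    The single-transverse kinematic imbalance variables are simple functions of the final-state lepton and proton momenta projected onto the plane transverse to the neutrino: δp_T is the vector sum of the two transverse momenta, and δα_T and δφ_T are the associated angles. A hedged sketch following the standard definitions, with illustrative momenta in GeV/c:

```python
import math

def tki_variables(p_lep_t, p_pro_t):
    """delta-pT magnitude, delta-alphaT, delta-phiT from transverse 2-vectors."""
    dpt = (p_lep_t[0] + p_pro_t[0], p_lep_t[1] + p_pro_t[1])
    dpt_mag = math.hypot(dpt[0], dpt[1])
    lep_mag = math.hypot(p_lep_t[0], p_lep_t[1])
    pro_mag = math.hypot(p_pro_t[0], p_pro_t[1])
    # delta-phiT: angle of the proton relative to the back-projected lepton
    dphit = math.acos(-(p_lep_t[0] * p_pro_t[0] + p_lep_t[1] * p_pro_t[1])
                      / (lep_mag * pro_mag))
    # delta-alphaT: angle of delta-pT relative to the back-projected lepton
    dalphat = math.acos(-(p_lep_t[0] * dpt[0] + p_lep_t[1] * dpt[1])
                        / (lep_mag * dpt_mag))
    return dpt_mag, dalphat, dphit

# Illustrative event: imbalance carried by nuclear effects (Fermi motion, FSI)
dpt_mag, dalphat, dphit = tki_variables((0.3, 0.0), (-0.25, 0.1))
```

    On a free nucleon at rest all three variables would vanish (δp_T = 0), which is why their measured distributions are such a direct probe of nuclear effects.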

  17. Nonclassical-light generation in a photonic-band-gap nonlinear planar waveguide

    NASA Astrophysics Data System (ADS)

    Peřina, Jan, Jr.; Sibilia, Concita; Tricca, Daniela; Bertolotti, Mario

    2004-10-01

    The optical parametric process occurring in a photonic-band-gap planar waveguide is studied from the point of view of nonclassical-light generation. The nonlinearly interacting optical fields are described by the generalized superposition of coherent signals and noise using the method of operator linear corrections to a classical strong solution. Scattered backward-propagating fields are taken into account. Squeezed light as well as light with sub-Poissonian statistics can be obtained in two-mode fields under the specified conditions.

  18. The MOSDEF Survey: Dissecting the Star Formation Rate versus Stellar Mass Relation Using Hα and Hβ Emission Lines at z ∼ 2

    NASA Astrophysics Data System (ADS)

    Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Kriek, Mariska; Siana, Brian; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan; Price, Sedona H.; de Groot, Laura; Azadi, Mojegan

    2015-12-01

    We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (~10^9.5-10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)-log(M*) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))-log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))-log(M*). Based on comparisons to a simulated SFR-M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)- and SFR(UV)-M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
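
    The quoted slope and scatter come from a straight-line fit in log-log space. A hedged sketch on synthetic data with the paper's sample size and roughly its slope and scatter (illustrative, not the MOSDEF measurements themselves):

```python
import numpy as np

rng = np.random.default_rng(0)
n_gal = 261
log_mass = rng.uniform(9.5, 11.5, n_gal)   # log10(M*/Msun), survey-like range
true_slope, intrinsic_scatter = 0.65, 0.31
log_sfr = (true_slope * (log_mass - 10.0) + 1.0
           + rng.normal(0.0, intrinsic_scatter, n_gal))

slope, intercept = np.polyfit(log_mass, log_sfr, 1)
residual_scatter = float(np.std(log_sfr - (slope * log_mass + intercept)))
```

    With 261 galaxies the fitted slope is recovered to a few hundredths of a dex per dex, comparable to the paper's quoted uncertainty.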

  19. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    DOE PAGES

    Moteabbed, Maryam; Niroula, Megh; Raue, Brian A.; ...

    2013-08-30

    The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q² and scattering angles. We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large-acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q² and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering.
Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for ⟨Q²⟩ = 0.206 GeV² and 0.830 ≤ ε ≤ 0.943. We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q² and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e− beams.
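The quoted ratio R is formed from acceptance-corrected elastic yields for the two lepton charges, with a statistical error driven by Poisson counting. A minimal sketch of that bookkeeping, using purely hypothetical yields and acceptances (not actual CLAS data), might look like:

```python
import math

# Hypothetical event counts and acceptances -- illustrative only,
# not measured CLAS values.
n_pos, n_ele = 10270, 10000          # elastic e+p and e-p yields
acc_pos, acc_ele = 0.98, 0.98        # detector acceptances for each charge

# Acceptance-corrected ratio of positron to electron elastic scattering
r = (n_pos / acc_pos) / (n_ele / acc_ele)

# Statistical uncertainty from independent Poisson counting errors
dr = r * math.sqrt(1.0 / n_pos + 1.0 / n_ele)

print(f"R = {r:.3f} +/- {dr:.3f} (stat)")
```

With equal acceptances the correction cancels in the ratio; in practice the charge-dependent acceptance difference is what the chicane-polarity reversal is designed to constrain.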

  20. Theory of thermal conductivity in the disordered electron liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwiete, G., E-mail: schwiete@uni-mainz.de; Finkel’stein, A. M.

    2016-03-15

We study thermal conductivity in the disordered two-dimensional electron liquid in the presence of long-range Coulomb interactions. We describe a microscopic analysis of the problem using the partition function defined on the Keldysh contour as a starting point. We extend the renormalization group (RG) analysis developed for thermal transport in the disordered Fermi liquid and include scattering processes induced by the long-range Coulomb interaction in the sub-temperature energy range. For the thermal conductivity, unlike for the electrical conductivity, these scattering processes yield a logarithmic correction that may compete with the RG corrections. The interest in this correction arises from the fact that it violates the Wiedemann–Franz law. We checked that the sub-temperature correction to the thermal conductivity is not modified either by the inclusion of Fermi liquid interaction amplitudes or as a result of the RG flow. We therefore expect that the answer obtained for this correction is final. We use the theory to describe thermal transport on the metallic side of the metal–insulator transition in Si MOSFETs.
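For reference, the Wiedemann–Franz law that the sub-temperature correction violates fixes the ratio of thermal to electrical conductivity at the universal Lorenz number:

```latex
\[
\frac{\kappa}{\sigma T} \;=\; L_0 \;=\; \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 ,
\]
```

so any interaction-induced logarithmic correction that enters κ and σ with different coefficients drives κ/(σT) away from L₀.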

Top