Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification errors relative to a dose calibrator derived measurement were found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
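As background for the TEW comparison above: the triple energy window method approximates the scatter inside the photopeak window by a trapezoid spanned by two narrow flanking windows. A minimal sketch (the window widths and counts below are illustrative, not those used in the study):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Trapezoidal estimate of the scatter counts inside the photopeak window.

    c_lower/c_upper: counts in the narrow windows below/above the photopeak;
    w_lower/w_upper/w_peak: window widths in keV.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0


def tew_primary(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Scatter-corrected (primary) counts, clipped at zero."""
    return max(c_peak - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak), 0.0)
```

For I-131, narrow windows of a few keV typically flank the 364 keV photopeak; the patient-dependent weighting factor mentioned above would rescale this estimate.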
Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging
NASA Astrophysics Data System (ADS)
Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.
1997-02-01
Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.
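Single-scatter simulation algorithms of this kind weight each candidate scatter path by the Klein-Nishina differential cross section. As an illustration of that one ingredient (not the authors' code), for an unpolarized photon scattering through angle theta:

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, metres


def klein_nishina(theta, energy_kev):
    """Unpolarized Klein-Nishina differential cross section (m^2/sr) for
    Compton scattering of a photon of the given energy through angle theta."""
    k = energy_kev / 511.0                              # energy in electron-rest-mass units
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # E'/E of the scattered photon
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)
```

At 511 keV the distribution is strongly forward-peaked, which is one reason the scatter background in 3-D PET is so structured.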
Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong
2017-03-01
Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction using a beam-blocker has been considered a direct measurement-based approach, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in the object-free projection data, investigated its influence on the accuracy of reconstructed CBCT images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to quantify the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessment of image quality parameters revealed that the optimally selected scatter kernel restores the contrast to up to 99.5%, 94.4%, and 84.4% of the scatter-free values, and the structural similarity (SSIM) to up to 96.7%, 90.5%, and 87.8%, in the XCAT, ACS head phantom, and pelvis phantom studies, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without need of any auxiliary hardware or additional experimentation.
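The particle swarm step can be sketched generically as follows; this is a textbook one-parameter PSO with illustrative inertia and acceleration coefficients, not the authors' implementation or their objective (which was the DCC residual over the kernel parameters):

```python
import random


def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimize f on [lo, hi] with a basic particle swarm (single parameter)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                                 # velocities
    pbest = list(xs)                                         # personal bests
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]                          # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia 0.7, cognitive/social coefficients 1.5 (common defaults)
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval
```

In the paper's setting, f would evaluate the mid-plane data inconsistency after deconvolving with a kernel built from the candidate parameters.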
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values with the nine-wavelength absorption and attenuation meter AC9. The standard correction, which assumes zero absorption in the near-infrared (NIR) region, often fails in Case 2 waters and underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the corresponding scattering correction was applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.
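An exponential fit of this kind can be sketched with a log-linear least-squares step. The model form b(wl) = A * exp(-S * wl) and the variable names are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import math


def fit_exponential(wavelengths, values):
    """Fit values ~ A * exp(-S * wavelength) by least squares on log(values).
    Returns (A, S); values must be positive."""
    logs = [math.log(v) for v in values]
    n = len(wavelengths)
    mx = sum(wavelengths) / n
    my = sum(logs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(wavelengths, logs)) \
        / sum((x - mx) ** 2 for x in wavelengths)
    return math.exp(my - slope * mx), -slope
```

Once fitted across the AC9 bands, the modeled scattering contribution would be subtracted from the measured signal at each wavelength.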
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
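The core MLEM/OSEM update with scatter included as an additive term in the forward projection (the general scheme these abstracts describe) can be sketched as follows; the toy 2x2 system matrix in the example is of course an assumption, and a real system matrix would encode attenuation and collimator response:

```python
import numpy as np


def mlem(A, y, scatter=None, n_iter=2000):
    """MLEM reconstruction. A: (bins x voxels) system matrix, y: measured
    counts; scatter enters as an additive term in the forward model."""
    x = np.ones(A.shape[1])
    s = np.zeros_like(y) if scatter is None else scatter
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + s                      # forward projection + scatter
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x
```

OSEM would apply the same update over subsets of the projection bins for faster convergence.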
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and specified. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle-interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30°, the scatter correction error of slices can still be controlled within 3%.
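The angle-interpolation step can be sketched as per-pixel linear interpolation between the sparsely measured scatter images; the function and variable names here are illustrative, not taken from the paper:

```python
import numpy as np


def interpolate_scatter(measured, measured_angles, target_angles):
    """Linearly interpolate scatter images (n_meas, H, W), measured at sparse
    gantry angles (degrees), onto a denser set of target angles."""
    measured_angles = np.asarray(measured_angles, dtype=float)
    out = []
    for ang in target_angles:
        # bracket the target angle between two measured angles
        i = int(np.clip(np.searchsorted(measured_angles, ang),
                        1, len(measured_angles) - 1))
        a0, a1 = measured_angles[i - 1], measured_angles[i]
        w = (ang - a0) / (a1 - a0)
        out.append((1.0 - w) * measured[i - 1] + w * measured[i])
    return np.stack(out)
```

This is justified only because scatter varies slowly with gantry angle, which is the observation the paper's 30° sampling relies on.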
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the estimated scatter fraction was then used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast tissue equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using the pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
The range of background DE calcification signals was reduced by 58% when the algorithmic scatter correction was applied. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy.
The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. The GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method over the convolution-based method was statistically significant (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Stayman, J
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to an “oracle” constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution.
The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
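The kernel-smoothing (KS) denoising step exploits the fact that the scatter distribution is low-frequency. A minimal separable Gaussian smoothing over the detector plane might look like this (the kernel width is illustrative, and the authors additionally smooth across gantry angles):

```python
import numpy as np


def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius` samples."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()


def smooth_scatter(noisy, sigma=2.0, radius=6):
    """Separable Gaussian smoothing of a noisy 2-D scatter estimate.
    Edge padding keeps the output the same shape as the input."""
    k = gaussian_kernel(sigma, radius)
    pad = np.pad(noisy, radius, mode='edge')
    # convolve each row, then each column ('valid' restores original size)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)
```

Because a normalized kernel preserves the mean, the smoothed estimate keeps the total scatter level while suppressing the MC noise.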
Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.
2017-12-01
We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an unaccelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform-ring myocardial phantom simulation demonstrated better homogeneity with TDCS than TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
SU-E-I-07: An Improved Technique for Scatter Correction in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S; Wang, Y; Lue, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the reconstructed images of our technique and SSS, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF.
This method can avoid the bias caused by the insufficient tail information and therefore improve the accuracy of scatter estimation.
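Once an SF has been predicted from the average attenuation coefficient, scaling the SSS shape is straightforward; a sketch of that final step (the empirical SF-versus-attenuation function itself is not reproduced here, and the names are illustrative):

```python
def scale_sss(sss_shape, total_prompts, scatter_fraction):
    """Scale a relative single-scatter-simulation distribution so that its
    total equals scatter_fraction * total_prompts."""
    target = scatter_fraction * total_prompts
    norm = sum(sss_shape)
    return [v * target / norm for v in sss_shape]
```

This replaces the conventional tail-fit scaling, which is where the bias described above originates when the scatter-only tail is small.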
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter while the proposed method provided more accurate scatter estimation by considering the low energy tail effect.
In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies. Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample-by-sample basis. This regression approach provides significantly better agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM measurements is therefore recommended for generating full-spectral, best quality particulate absorption data as it enables correction of multiple error sources across both measurements.
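The regression idea is that the filter pad measurement relates to the PSICAM reference as a_fp = β·a_psicam + o, so an ordinary least-squares fit per sample yields both the pathlength amplification factor β and the scattering offset o simultaneously. A schematic version (variable names are illustrative):

```python
def regress_beta_offset(a_filter, a_psicam):
    """Least-squares fit of a_filter = beta * a_psicam + o across wavelengths.
    Returns (beta, o)."""
    n = len(a_psicam)
    mx = sum(a_psicam) / n
    my = sum(a_filter) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(a_psicam, a_filter)) \
        / sum((x - mx) ** 2 for x in a_psicam)
    return beta, my - beta * mx


def correct_filter_pad(a_filter, beta, o):
    """Invert the fitted relation to recover corrected absorption values."""
    return [(v - o) / beta for v in a_filter]
```

Fitting per sample, as the abstract describes, accommodates the sample-to-sample variation in both β and o.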
Bistatic scattering from a cone frustum
NASA Technical Reports Server (NTRS)
Ebihara, W.; Marhefka, R. J.
1986-01-01
The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of the penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT system, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.
s-wave scattering length of a Gaussian potential
NASA Astrophysics Data System (ADS)
Jeszenszki, Peter; Cherny, Alexander Yu.; Brand, Joachim
2018-04-01
We provide accurate expressions for the s-wave scattering length for a Gaussian potential well in one, two, and three spatial dimensions. The Gaussian potential is widely used as a pseudopotential in the theoretical description of ultracold-atomic gases, where the s-wave scattering length is a physically relevant parameter. We first describe a numerical procedure to compute the value of the s-wave scattering length from the parameters of the Gaussian, but find that its accuracy is limited in the vicinity of singularities that result from the formation of new bound states. We then derive simple analytical expressions that capture the correct asymptotic behavior of the s-wave scattering length near the bound states. Expressions that are increasingly accurate in wide parameter regimes are found by a hierarchy of approximations that capture an increasing number of bound states. The small number of numerical coefficients that enter these expressions is determined from accurate numerical calculations. The approximate formulas combine the advantages of the numerical and approximate expressions, yielding an accurate and simple description from the weakly to the strongly interacting limit.
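The numerical procedure the abstract describes, integrating the zero-energy radial Schrödinger equation outward and reading off the scattering length from the asymptotic form u(r) ∝ (r − a), can be sketched as follows. This is a minimal illustration in reduced units (ħ²/2m = 1) with an attractive Gaussian well V(r) = −V₀ exp(−r²/σ²); the parameter values and function names are illustrative, not taken from the paper:

```python
import math

def scattering_length(v0, sigma, r_max=12.0, n=24000):
    """3D s-wave scattering length of V(r) = -v0 * exp(-(r/sigma)^2).

    Integrate the zero-energy radial equation u'' = V(r) u (units with
    hbar^2/2m = 1) outward from u(0) = 0, u'(0) = 1.  At large r the
    solution approaches u -> C (r - a), so a = r - u(r)/u'(r).
    """
    def V(r):
        return -v0 * math.exp(-(r / sigma) ** 2)

    def f(r, u, du):
        # First-order system for (u, u')
        return du, V(r) * u

    h = r_max / n
    r, u, du = 0.0, 0.0, 1.0
    for _ in range(n):
        # Classic RK4 step
        k1u, k1d = f(r, u, du)
        k2u, k2d = f(r + h / 2, u + h / 2 * k1u, du + h / 2 * k1d)
        k3u, k3d = f(r + h / 2, u + h / 2 * k2u, du + h / 2 * k2d)
        k4u, k4d = f(r + h, u + h * k3u, du + h * k3d)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        du += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        r += h
    return r - u / du
```

For a weak well this reproduces the Born-approximation value a ≈ ∫₀^∞ V(r) r² dr, which is small and negative; near the thresholds for new bound states the result diverges, which is exactly the regime where the paper's analytical expressions are needed.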
NASA Astrophysics Data System (ADS)
Devito, R. P.; Khoa, Dao T.; Austin, Sam M.; Berg, U. E. P.; Loc, Bui Minh
2012-02-01
Background: Analysis of data involving nuclei far from stability often requires the optical potential (OP) for neutron scattering. Because neutron data are seldom available, whereas proton scattering data are more abundant, it is useful to have estimates of the difference of the neutron and proton optical potentials. This information is contained in the isospin dependence of the nucleon OP. Here we attempt to provide it for the nucleon-208Pb system. Purpose: The goal of this paper is to obtain accurate n+208Pb scattering data and use it, together with existing p+208Pb and 208Pb(p,n)208Bi(IAS) data, to obtain an accurate estimate of the isospin dependence of the nucleon OP at energies in the 30-60-MeV range. Method: Cross sections for n+208Pb scattering were measured at 30.4 and 40.0 MeV, with a typical relative (normalization) accuracy of 2-4% (3%). An angular range of 15° to 130° was covered using the beam-swinger time-of-flight system at Michigan State University. These data were analyzed by a consistent optical-model study of the neutron data and of elastic p+208Pb scattering at 45 and 54 MeV. These results were combined with a coupled-channel analysis of the 208Pb(p,n) reaction at 45 MeV, exciting the 0+ isobaric analog state (IAS) in 208Bi. Results: The new data and analysis give an accurate estimate of the isospin impurity of the nucleon-208Pb OP at 30.4 MeV caused by the Coulomb correction to the proton OP. The corrections to the real proton OP given by the CH89 global systematics were found to be only a few percent, whereas for the imaginary potential they were greater than 20% at the nuclear surface.
On the basis of the analysis of the measured elastic n+208Pb data at 40 MeV, a Coulomb correction of similar strength and shape was also predicted for the p+208Pb OP at energies around 54 MeV. Conclusions: Accurate neutron scattering data can be used in combination with proton scattering data and (p,n) charge-exchange data leading to the IAS to obtain reliable estimates of the isospin impurity of the nucleon OP.
NASA Astrophysics Data System (ADS)
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method based on recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a single CdTe PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
NASA Astrophysics Data System (ADS)
Konik, Arda Bekir
Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. 
Next, using GATE, scatter fraction values (the ratio of scatter counts to total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and humans), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the physical version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In the lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the use of a uniform and the actual attenuation map (e.g., only ~0.5% for the largest size in PET studies). Scatter correction was not significant for smaller objects but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
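The ~15% error quoted for even the smallest (~2 cm) objects is consistent with a back-of-envelope attenuation estimate. A hedged illustration follows: μ = 0.096 cm⁻¹ is the standard linear attenuation coefficient of water at 511 keV, and the simulated error also folds in scatter, so the numbers only roughly agree.

```python
import math

MU_WATER_511KEV = 0.096  # linear attenuation coefficient of water at 511 keV, cm^-1

def pet_central_loss(diameter_cm, mu=MU_WATER_511KEV):
    """Fractional count loss along a central line of response through a
    uniform water cylinder.  In PET both annihilation photons together
    traverse the full diameter D, so the pair survival is exp(-mu * D)."""
    return 1.0 - math.exp(-mu * diameter_cm)
```

`pet_central_loss(2.0)` is about 0.17 for a mouse-sized object and over 0.85 for a 20 cm human-scale cylinder, which is why attenuation correction is optional at murine scales but essential clinically.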
Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu
2006-02-28
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads set about ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce a better scatter estimate across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and greatly reduced cupping artifacts.
Improved scatter correction with factor analysis for planar and SPECT imaging
NASA Astrophysics Data System (ADS)
Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw
2017-09-01
Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans, accurate correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual-head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA, in comparison with the DEW method, results in significant improvements in image accuracy for both planar and tomographic data sets.
FA can be used as a user-independent approach for scatter correction in nuclear medicine.
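The core FA step, decomposing the stack of energy sub-window images into a photo-peak factor and a scatter factor, can be loosely imitated with a rank-2 non-negative matrix factorization on synthetic data. This is a stand-in sketch, not the authors' algorithm, and every array below is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each of 6 energy sub-windows is a non-negative mixture of
# two spatial factor images (photo-peak-like and scatter-like).
npix, nwin = 400, 6
factors_true = rng.random((npix, 2))   # factor images
spectra_true = rng.random((2, nwin))   # factor curves (energy spectra)
V = factors_true @ spectra_true        # sub-window data, pixels x windows

# Rank-2 NMF via multiplicative updates (Lee-Seung style)
W = rng.random((npix, 2)) + 0.1
H = rng.random((2, nwin)) + 0.1
for _ in range(2000):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper's setting, the rows of H would play the role of the recovered factor curves (energy spectra) and the columns of W the photo-peak and scatter factor images.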
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed an estimation method of intravascular oxygen saturation (SO_2) from the images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and we investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and performed experiments with turbid phantoms as well as Monte Carlo simulation experiments to investigate the influence of the tissue scattering in the SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin called average extinction coefficients (AECs) to correct the influence from the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of pixel value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and obtained the image of turbid phantoms comprised of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that the increase of scattering by the phantom body brought about the decrease of the AECs. The experimental results showed that the use of suitable values for AECs led to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method to improve the accuracy of the SO_2 estimation.
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using a BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast-tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method to five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
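Once an SPR model is available (here it comes from the BPA measurements), the correction itself is a per-pixel rescaling of each projection. A minimal sketch, with the SPR supplied as a given array rather than derived from the paper's model:

```python
import numpy as np

def correct_projection(measured, spr):
    """Recover the primary signal from measured = primary * (1 + SPR).

    `spr` is the scatter-to-primary ratio estimated for each pixel (e.g.
    from beam-passing-array projections); same shape as `measured`."""
    measured = np.asarray(measured, dtype=float)
    spr = np.asarray(spr, dtype=float)
    return measured / (1.0 + spr)
```

For example, a pixel measuring 125 with SPR = 0.25 is corrected back to a primary value of 100.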
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity violating e – e – → e – e – (γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts and relative importance of various contributions is analyzed. In addition, we also provide very compact expressions analytically free from non-physical parameters and show them to be valid for fast yet accurate estimations.
CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and a highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
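The simulation described, recovering an impulse response function that has convolved the intrinsic pulse, can be caricatured with a noise-free frequency-domain division. This toy assumes the intrinsic pulse shape is known and spectrally broad, and it omits the cyclic-spectroscopy machinery and the S/N considerations the paper actually addresses; all waveforms are synthetic:

```python
import numpy as np

n = 256
t = np.arange(n, dtype=float)

# Known intrinsic pulse shape (narrow Gaussian) and a one-sided
# exponential IRF mimicking multi-path scattering delay.
intrinsic = np.exp(-0.5 * ((t - 40.0) / 1.5) ** 2)
irf = np.where(t < 64, np.exp(-t / 10.0), 0.0)
irf /= irf.sum()  # unit area: scattering redistributes, not creates, flux

# Observed profile = intrinsic pulse convolved with the scattering IRF
observed = np.real(np.fft.ifft(np.fft.fft(intrinsic) * np.fft.fft(irf)))

# Noise-free deconvolution: divide spectra and transform back
eps = 1e-9  # guard against division by tiny spectral values
irf_hat = np.real(np.fft.ifft(np.fft.fft(observed) / (np.fft.fft(intrinsic) + eps)))
```

With noise present this naive division is unstable, which is precisely why more robust multi-frequency decompositions such as cyclic spectroscopy are of interest.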
Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.
Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein
2018-02-01
A free-air ionization chamber, FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for the photon air kerma. For accurate air kerma measurements, the contribution of scattered photons to the total energy released in the collecting volume must be eliminated. One source of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k_dia, and the diaphragm transmission correction factor, k_tr, were introduced. These factors represent corrections to the measured charge (or current) for photons scattered from the diaphragm surface and photons penetrating through the diaphragm volume, respectively. The k_dia and k_tr values were estimated by Monte Carlo simulations. The simulations were performed for mono-energetic photons in the energy range of 20-300 keV. According to the simulation results, in this energy range the k_dia values vary between 0.9997 and 0.9948, and the k_tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons.
Atmospheric correction for inland water based on Gordon model
NASA Astrophysics Data System (ADS)
Li, Yunmei; Wang, Haijun; Huang, Jiazhu
2008-04-01
Remote sensing is widely used in water quality monitoring because it captures radiometric information over a whole area simultaneously. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere, not directly by the water body. The water radiance signal is strongly confounded by atmospheric molecular and aerosol scattering and absorption, and a small bias in the estimated atmospheric contribution can induce large errors in water quality retrieval. Accurate inversion of water composition therefore requires separating the water and atmospheric signals first. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering and radiative transmission above Taihu Lake. Meanwhile, the influence of ozone and white caps was corrected. One kind of data was synchronous meteorological data, and the other was a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image. The effect of the different methods was analyzed using in situ measured water surface spectra. The result indicates that measured and estimated remote sensing reflectance were close for both methods. Compared to the method using the MODIS image, the method using synchronous meteorological data is more accurate, and its bias is close to the error criterion accepted for inland water quality inversion. This shows that the method is suitable for atmospheric correction of TM imagery over Taihu Lake.
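The Gordon-style decomposition used here reduces, in its simplest form, to subtracting the Rayleigh and aerosol path radiances from the top-of-atmosphere signal and dividing by the diffuse transmittance. The sketch below is only the skeleton of that arithmetic; the white-cap and ozone terms the paper also corrects, and the actual estimation of each term from meteorological or MODIS data, are omitted, and all names are illustrative:

```python
def water_leaving_radiance(L_t, L_r, L_a, t_diffuse):
    """Gordon-style decomposition of top-of-atmosphere radiance:
    L_t = L_rayleigh + L_aerosol + t * L_water   (simplified sketch),
    so the water-leaving radiance is L_w = (L_t - L_r - L_a) / t."""
    return (L_t - L_r - L_a) / t_diffuse

def remote_sensing_reflectance(L_w, E_d):
    """R_rs = water-leaving radiance / downwelling irradiance."""
    return L_w / E_d
```

The >80% atmospheric contribution quoted in the abstract is visible directly in these terms: when L_r + L_a dominates L_t, small errors in either path-radiance estimate propagate into large relative errors in L_w.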
NASA Astrophysics Data System (ADS)
Bezur, L.; Marshall, J.; Ottaway, J. M.
A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
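The kernel-superposition idea, modeling scatter as the unknown primary convolved with a kernel and removing it by fixed-point iteration, can be sketched in 1-D with a single stationary Gaussian kernel. The adaptive, thickness-dependent kernels that distinguish ASKS/fASKS are deliberately left out, and all numbers are synthetic:

```python
import numpy as np

n = 128
x = np.arange(n)
primary_true = np.where((x > 40) & (x < 88), 100.0, 10.0)  # toy 1-D projection

# Stationary scatter kernel: broad Gaussian with total weight 0.3,
# i.e. roughly 30% scatter-to-primary at the projection center.
kernel = np.exp(-0.5 * ((x - n // 2) / 12.0) ** 2)
kernel *= 0.3 / kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))  # kernel spectrum, centered at index 0

def scatter_of(p):
    """Scatter estimate: circular convolution of a primary estimate with the kernel."""
    return np.real(np.fft.ifft(np.fft.fft(p) * K))

measured = primary_true + scatter_of(primary_true)

# Fixed-point iteration p <- measured - scatter(p); converges because the
# kernel's total weight (spectral radius) is 0.3 < 1.
p = measured.copy()
for _ in range(50):
    p = measured - scatter_of(p)
```

The same loop structure underlies fASKS-style Fourier-domain correction; the papers' contribution is making the kernel adapt to local object thickness rather than staying stationary as it does here.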
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quirk, Thomas, J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
NASA Astrophysics Data System (ADS)
Oelze, Michael L.; O'Brien, William D.
2004-11-01
Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated region change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of these gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions, yielding estimates within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy at smaller gate lengths, the precision of estimates at small gate lengths was not improved over conventional windowing functions.
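The spectral estimate the abstract refers to, the magnitude-squared Fourier transform of a gated, tapered segment, looks like the following. The sampling rate, center frequency, and gate placement are illustrative; the paper's gate-edge correction factor itself is not reproduced here:

```python
import numpy as np

fs = 40e6                     # sampling rate, Hz (illustrative)
f0 = 5e6                      # echo center frequency, Hz (illustrative)
t = np.arange(1024) / fs
rf = np.sin(2 * np.pi * f0 * t)   # stand-in for a backscattered rf line

gate = rf[200:456]            # gated time segment (256 samples)
w = np.hanning(gate.size)     # tapered window reduces gate-edge leakage
spec = np.abs(np.fft.rfft(gate * w)) ** 2   # backscattered power spectrum

freqs = np.fft.rfftfreq(gate.size, d=1 / fs)
peak = freqs[np.argmax(spec)]
```

Shortening `gate` broadens each spectral peak (coarser bins, more edge leakage), which is exactly the small-gate-length regime the proposed correction factor targets.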
Absorption and scattering of light by nonspherical particles. [in atmosphere
NASA Technical Reports Server (NTRS)
Bohren, C. F.
1986-01-01
Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation-of-variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite-difference calculations.
Accurate Modeling of Dark-Field Scattering Spectra of Plasmonic Nanostructures.
Jiang, Liyong; Yin, Tingting; Dong, Zhaogang; Liao, Mingyi; Tan, Shawn J; Goh, Xiao Ming; Allioux, David; Hu, Hailong; Li, Xiangyin; Yang, Joel K W; Shen, Zexiang
2015-10-27
Dark-field microscopy is a widely used tool for measuring the optical resonance of plasmonic nanostructures. However, current numerical methods simulate dark-field scattering spectra with plane-wave illumination, either at normal incidence or at an oblique angle from one direction. In actual experiments, light is focused onto the sample through an annular ring within a range of glancing angles. In this paper, we present a theoretical model capable of accurately simulating a dark-field light source with an annular ring. Simulations correctly reproduce a counterintuitive blue shift in the scattering spectra of gold nanodisks with diameters beyond 140 nm. We believe that our proposed simulation method can be applied as a general tool for simulating the dark-field scattering spectra of plasmonic nanostructures, as well as other dielectric nanostructures with sizes beyond the quasi-static limit.
Quantitation of tumor uptake with molecular breast imaging.
Bache, Steven T; Kappadath, S Cheenu
2017-09-01
We developed scatter- and attenuation-correction techniques for quantifying images obtained with molecular breast imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual-energy window technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7, located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath, for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in air under scatter- and attenuation-free conditions, which provided ground-truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) on the two projection images was calculated as T = √(C1·C2·e^(μt))·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four definitions of F were investigated: standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM". Error in T was calculated as the percentage difference with respect to the in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. The sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations.
Scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to within 10% of in-air values after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor counts can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
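The geometric-mean estimate defined above can be sketched in a few lines of code; the counts, attenuation coefficient, and detector separation below are illustrative values, not data from the study.

```python
import numpy as np

def geometric_mean_counts(c1, c2, mu, t, F=1.0):
    """Geometric-mean (GM) estimate of true tumor counts:

        T = sqrt(C1 * C2) * exp(mu * t / 2) * F

    c1, c2 : counts within the ROI on detectors 1 and 2
    mu     : linear attenuation coefficient of water (cm^-1)
    t      : detector separation (cm)
    F      : background correction factor (F = 1 gives the standard GM)
    """
    return np.sqrt(c1 * c2) * np.exp(mu * t / 2.0) * F

# Illustrative numbers: 7 cm detector separation and mu ~ 0.15 cm^-1
# (roughly water at the 99mTc photopeak energy); counts are made up.
T = geometric_mean_counts(c1=9000.0, c2=8500.0, mu=0.15, t=7.0)
```

Because the attenuation factor e^(μt/2) is always greater than one, the corrected estimate T always exceeds the raw geometric mean of the two detector counts.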
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density, used to quantitatively assess the amount and distribution of emphysema in COPD subjects, has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how quantitative measures of lung density compare between dual-source and single-source scan modes. This study sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography, where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6) and swine (N=13; more human-like rib cage shape), a lung phantom, and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water, and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin-filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU, respectively.
When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In-vivo data demonstrated considerable variability in tracheal air HU values, influenced by local anatomy, with SS mode scanning, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction provides more accurate CT lung density measures used to quantitatively assess the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images acquired without adequate scatter correction cannot be corrected by linear algorithms, given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.
NASA Astrophysics Data System (ADS)
Tyynelä, J.; Leinonen, J.; Westbrook, C. D.; Moisseev, D.; Nousiainen, T.
2013-02-01
The applicability of the Rayleigh-Gans approximation (RGA) for scattering by snowflakes is studied in the microwave region of the electromagnetic spectrum. Both the shapes of the single ice crystals, or monomers, and their number in the modeled snowflakes are varied. For reference, the discrete-dipole approximation (DDA) is used to produce numerically accurate solutions for the single-scattering properties, such as the backscattering and extinction cross-sections, single-scattering albedo, and the asymmetry parameter. We find that the single-scattering albedo is the most accurate, with only about 10% relative bias at maximum. The asymmetry parameter has about 0.12 absolute bias at maximum. The backscattering and extinction cross-sections show about −65% relative biases at maximum, corresponding to about a −4.6 dB difference. Overall, the RGA agrees well with the DDA computations for all the cases studied and is more accurate for the integrated quantities, such as the single-scattering albedo and the asymmetry parameter, than for the cross-sections of the same snowflakes. The accuracy of the RGA seems to improve as the number of monomers in an aggregate increases, and to degrade as the frequency increases. It is also more accurate for less dense monomer shapes, such as stellar dendrites. The DDA and RGA results are well correlated; their sample correlation coefficients are close to unity throughout the study. Therefore, the accuracy of the RGA could be improved by applying appropriate correction factors.
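The correspondence between the quoted relative bias and its decibel equivalent can be checked directly, since a −65% bias in a cross-section leaves 35% of the reference value and 10·log10(0.35) ≈ −4.6 dB:

```python
import math

def relative_bias_to_db(bias):
    """Convert a relative bias (e.g. -0.65 for -65%) in a power-like
    quantity such as a backscattering cross-section into decibels."""
    return 10.0 * math.log10(1.0 + bias)

db = relative_bias_to_db(-0.65)  # cross-section reduced to 35% of reference
```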
A model-based scatter artifacts correction for cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Wei; Zhu, Jun; Wang, Luyao
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone-beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and the spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing.
For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 HU to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.
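The Poisson-specific denoising step mentioned in the Methods can be illustrated by one common recipe, an Anscombe variance-stabilizing transform followed by smoothing and an algebraic inverse; this is a generic sketch, not necessarily the algorithm of this work, and the moving-average smoother is a placeholder.

```python
import numpy as np

def anscombe_denoise(counts, kernel_size=5):
    """Variance-stabilize Poisson counts with the Anscombe transform,
    smooth with a simple moving average, then invert algebraically.
    (A generic Poisson denoising recipe, not the paper's algorithm.)"""
    a = 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)
    kernel = np.ones(kernel_size) / kernel_size
    a_smooth = np.convolve(a, kernel, mode="same")   # placeholder smoother
    return (a_smooth / 2.0) ** 2 - 3.0 / 8.0         # simple algebraic inverse

rng = np.random.default_rng(0)
raw = rng.poisson(100.0, size=1000).astype(float)    # noisy 1-D scatter profile
denoised = anscombe_denoise(raw)
```

In the stabilized domain the noise is approximately unit-variance Gaussian, so any ordinary smoother can be applied before inverting.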
Assessment of the Subgrid-Scale Models at Low and High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Horiuti, K.
1996-01-01
Accurate subgrid-scale (SGS) models must be capable of correctly representing the energy transfer between the grid scale (GS) and the SGS. Recent direct assessment of the energy transfer, carried out using direct numerical simulation (DNS) data for wall-bounded flows, revealed that the energy exchange is not unidirectional. Although GS kinetic energy is transferred to the SGS (forward scatter, F-scatter) on average, SGS energy is also transferred to the GS. The latter energy exchange (backward scatter, B-scatter) is very significant, i.e., the local energy exchange can be backward nearly as often as forward, and the local rate of B-scatter is considerably higher than the net rate of energy dissipation.
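The forward/backward decomposition described above can be sketched numerically: given sampled local values of the GS-to-SGS energy transfer, positive values count as forward scatter and negative values as backscatter. The normally distributed samples below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic local SGS energy-transfer samples: positive = GS -> SGS (forward).
# A small positive mean mimics net forward transfer on average.
transfer = rng.normal(loc=0.1, scale=1.0, size=100_000)

forward = transfer[transfer > 0.0]
backward = transfer[transfer < 0.0]

f_fraction = forward.size / transfer.size   # fraction of F-scatter events
net = transfer.mean()                       # net (mean) energy transfer
b_rate = -backward.sum() / transfer.size    # local rate of B-scatter
```

With these parameters roughly 54% of local events are forward, so backscatter occurs nearly as often as forward scatter, while the local B-scatter rate clearly exceeds the small net transfer, mirroring the qualitative picture in the abstract.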
Desjarlais, Michael P.; Scullard, Christian R.; Benedict, Lorin X.; ...
2017-03-13
We compute electrical and thermal conductivities of hydrogen plasmas in the non-degenerate regime using Kohn-Sham Density Functional Theory (DFT) and an application of the Kubo-Greenwood response formula, and demonstrate that for thermal conductivity, the mean-field treatment of the electron-electron (e-e) interaction therein is insufficient to reproduce the weak-coupling limit obtained by plasma kinetic theories. An explicit e-e scattering correction to the DFT is posited by appealing to Matthiessen's Rule and the results of our computations of conductivities with the quantum Lenard-Balescu (QLB) equation. Further motivation for our correction is provided by an argument arising from the Zubarev quantum kinetic theory approach. Significant emphasis is placed on our efforts to produce properly converged results for plasma transport using Kohn-Sham DFT, so that an accurate assessment of the importance and efficacy of our e-e scattering corrections to the thermal conductivity can be made.
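Matthiessen's Rule, invoked above, combines independent scattering channels by adding their inverse conductivities (i.e., resistivities add). A minimal sketch with made-up conductivity values, not those of the paper:

```python
def matthiessen_combine(kappa_ei, kappa_ee):
    """Combine two independent scattering channels (e.g. an electron-ion
    conductivity from DFT and an e-e correction channel) under
    Matthiessen's Rule: inverse conductivities add."""
    return 1.0 / (1.0 / kappa_ei + 1.0 / kappa_ee)

# Illustrative values in arbitrary units
kappa_total = matthiessen_combine(kappa_ei=2.0, kappa_ee=6.0)  # -> 1.5
```

The combined conductivity is always below the smaller channel, which is why adding an explicit e-e scattering channel lowers the mean-field DFT thermal conductivity toward the kinetic-theory limit.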
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan, without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%.
Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Dudu; Yang, Sichun; Lu, Lanyuan
2016-06-20
Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit the electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell, represented by explicit CG water molecules, and a correction for the protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular detail (represented by the q range of the SAXS data) becomes necessary for effective structure modelling.
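A single-bead-per-residue SAXS profile of the kind discussed above is conventionally evaluated with the Debye formula; the sketch below uses placeholder unit form factors and an arbitrary three-bead geometry rather than the EDM-optimized form factors of the paper.

```python
import numpy as np

def debye_intensity(q, coords, form_factors):
    """Scattering intensity from the Debye formula,
        I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij),
    with one coarse-grained bead per residue."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))            # pairwise bead distances
    intensity = np.empty_like(q, dtype=float)
    for k, qk in enumerate(q):
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r)
        kernel = np.sinc(qk * r / np.pi)
        intensity[k] = (form_factors[:, None] * form_factors[None, :] * kernel).sum()
    return intensity

# Three beads on a line with unit form factors (placeholders for the
# EDM-derived residue form factors); distances in arbitrary units.
coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
f = np.ones(3)
I = debye_intensity(np.array([1e-6, 0.1, 0.3]), coords, f)
```

At q → 0 the intensity tends to (Σf)², here 9, and it falls off as q grows; swapping in residue-specific f_i(q) is what the EDM optimization supplies.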
A novel scatter separation method for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-06-01
X-ray imaging coupled with recently emerged energy-resolved photon-counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter-induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor of >10 was observed for most inspected volumes of interest when comparing the corrected and uncorrected total volumes.
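The normalized root-mean-square error quoted above can be reproduced with a short metric function; the normalization by the mean of the reference projection is an assumed convention, and the data are synthetic.

```python
import numpy as np

def nrmse(estimate, reference):
    """Root-mean-square error normalized by the mean of the reference
    primary projection (normalization convention assumed here)."""
    err = np.sqrt(np.mean((estimate - reference) ** 2))
    return err / np.mean(reference)

# Synthetic primary projection and two estimates: one with a ~23%
# scatter-induced excess, one after a correction leaving ~5% residual.
reference = np.array([100.0, 120.0, 80.0, 140.0])
uncorrected = reference * 1.23
corrected = reference * 1.05

e_uncorr = nrmse(uncorrected, reference)
e_corr = nrmse(corrected, reference)
```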
Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited
Farmer, William A.; Friedman, Alex
2015-06-18
Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved in three ways: with the obliquity factor, with Monte Carlo, and with a Fokker-Planck finite-difference approach. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte Carlo and Fokker-Planck finite-difference approaches do. The obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina distribution becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.
Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara
2017-12-01
In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the images showed slight high-activity artifacts around the bottle when the bottle contained very high radioactivity.
In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected the scatter in 15O-gas brain PET when the 3-dimensional acquisition mode was used, preventing the generation of the cold artifacts that were observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Greene, Samuel M; Shan, Xiao; Clary, David C
2016-02-28
We investigate which terms in Reduced-Dimensionality Semiclassical Transition State Theory (RD SCTST) contribute most significantly in rate constant calculations of hydrogen extraction and exchange reactions of hydrocarbons. We also investigate the importance of deep tunneling corrections to the theory. In addition, we introduce a novel formulation of the theory in Jacobi coordinates. For the reactions of H atoms with methane, ethane, and cyclopropane, we find that a one-dimensional (1-D) version of the theory without deep tunneling corrections compares well with 2-D SCTST results and accurate quantum scattering results. For the "heavy-light-heavy" H atom exchange reaction between CH3 and CH4, deep tunneling corrections are needed to yield 1-D results that compare well with 2-D results. The finding that accurate rate constants can be obtained from derivatives of the potential along only one dimension further validates RD SCTST as a computationally efficient yet accurate rate constant theory.
Ross, J S; Glenzer, S H; Palastro, J P; Pollock, B B; Price, D; Tynan, G R; Froula, D H
2010-10-01
We present simultaneous Thomson-scattering measurements of light scattered from ion-acoustic and electron-plasma fluctuations in an N2 gas-jet plasma. By varying the plasma density from 1.5×10^18 to 4.0×10^19 cm^-3 and the temperature from 100 to 600 eV, we observe the transition from the collective regime to the noncollective regime in the high-frequency Thomson-scattering spectrum. These measurements allow an accurate local measurement of fundamental plasma parameters: electron temperature, density, and ion temperature. Furthermore, experiments performed at the high densities typically found in laser-produced plasmas result in scattering from electrons moving near the phase velocity of the relativistic plasma waves. Therefore, it is shown that even at low temperatures relativistic corrections to the scattered power must be included.
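The collective-to-noncollective transition observed above is governed by the scattering parameter α = 1/(k λ_De), with α > 1 marking the collective regime; a sketch in which the probe wavenumber k is an assumed value, not one taken from the experiment:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19      # elementary charge, C (also J per eV)

def debye_length(n_e_cm3, t_e_eV):
    """Electron Debye length (m) for density in cm^-3, temperature in eV:
    lambda_De = sqrt(eps0 * kB * Te / (n_e * e^2))."""
    n_e = n_e_cm3 * 1e6  # convert to m^-3
    return math.sqrt(EPS0 * t_e_eV * QE / (n_e * QE ** 2))

def alpha(k, n_e_cm3, t_e_eV):
    """Thomson scattering parameter; alpha > 1 is the collective regime."""
    return 1.0 / (k * debye_length(n_e_cm3, t_e_eV))

# Assumed probe geometry: k ~ 2e7 m^-1; densities/temperatures span the
# ranges quoted in the abstract.
a_dense = alpha(2e7, 4.0e19, 600.0)   # high-density end
a_dilute = alpha(2e7, 1.5e18, 100.0)  # low-density end
```

With these inputs the high-density end sits in the collective regime (α > 1) and the low-density end below it, illustrating the transition the measurements trace out.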
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.
1990-01-01
The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous, due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of the calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.
Bio-Optics of the Chesapeake Bay from Measurements and Radiative Transfer Calculations
NASA Technical Reports Server (NTRS)
Tzortziou, Maria; Herman, Jay R.; Gallegos, Charles L.; Neale, Patrick J.; Subramaniam, Ajit; Harding, Lawrence W., Jr.; Ahmad, Ziauddin
2005-01-01
We combined detailed bio-optical measurements and radiative transfer (RT) modeling to perform an optical closure experiment for optically complex and biologically productive Chesapeake Bay waters. We used this experiment to evaluate certain assumptions commonly used when modeling bio-optical processes, and to investigate the relative importance of several optical characteristics needed to accurately model and interpret remote sensing ocean-color observations in these Case 2 waters. Direct measurements were made of the magnitude, variability, and spectral characteristics of backscattering and absorption that are critical for accurate parameterizations in satellite bio-optical algorithms and underwater RT simulations. We found that the ratio of backscattering to total scattering in the mid-mesohaline Chesapeake Bay varied considerably depending on particulate loading, distance from land, and mixing processes, and had an average value of 0.0128 at 530 nm. Incorporating information on the magnitude, variability, and spectral characteristics of particulate backscattering into the RT model, rather than using a volume scattering function commonly assumed for turbid waters, was critical to obtaining agreement between RT calculations and measured radiometric quantities. In situ measurements of absorption coefficients need to be corrected for systematic overestimation due to scattering errors, and this correction commonly employs the assumption that absorption by particulate matter at near infrared wavelengths is zero.
Cloaking of arbitrarily shaped objects with homogeneous coatings
NASA Astrophysics Data System (ADS)
Forestiere, Carlo; Dal Negro, Luca; Miano, Giovanni
2014-05-01
We present a theory for the cloaking of arbitrarily shaped objects and demonstrate electromagnetic scattering cancellation through designed homogeneous coatings. First, in the small-particle limit, we expand the dipole moment of a coated object in terms of its resonant modes. By zeroing the numerator of the resulting rational function, we accurately predict the permittivity values of the coating layer that abates the total scattered power. Then, we extend the applicability of the method beyond the small-particle limit, deriving the radiation corrections of the scattering-cancellation permittivity within a perturbation approach. Our method permits the design of invisibility cloaks for irregularly shaped devices such as complex sensors and detectors.
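In the small-particle limit discussed above, zeroing the numerator of the coated-sphere dipole polarizability (the standard quasi-static Bohren-Huffman form is assumed here) reduces to a quadratic in the coating permittivity; radiation corrections are omitted from this sketch.

```python
import numpy as np

def cloaking_permittivities(eps_core, eps_medium, f):
    """Coating permittivities that zero the numerator of the quasi-static
    coated-sphere polarizability (scattering cancellation).

    Numerator: (e2 - em)(e1 + 2 e2) + f (e1 - e2)(em + 2 e2) = 0,
    with f = (core radius / shell radius)**3. Collecting powers of the
    coating permittivity e2 gives a quadratic a*e2^2 + b*e2 + c = 0.
    """
    a = 2.0 * (1.0 - f)
    b = eps_core * (1.0 + 2.0 * f) - eps_medium * (2.0 + f)
    c = -eps_core * eps_medium * (1.0 - f)
    return np.roots([a, b, c])

# Illustrative dielectric core (eps = 3) in vacuum with a half-volume core
roots = cloaking_permittivities(eps_core=3.0, eps_medium=1.0, f=0.5)
```

One root is a low-permittivity (epsilon-near-zero-like) coating and the other is negative, i.e., plasmonic, consistent with the two standard routes to scattering cancellation.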
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle.
Results: Pearson's correlation, r, proved to be a suitable GOF metric, with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass-filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor of between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
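Pearson's r, the goodness-of-fit metric adopted above, can be computed directly between a noisy Monte Carlo scatter response and a smooth fitted function; the one-dimensional profile below is synthetic and stands in for a detector scatter response.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two signals."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

rng = np.random.default_rng(1)
u = np.linspace(0.0, 1.0, 256)
true_scatter = 200.0 + 80.0 * np.sin(np.pi * u)  # smooth scatter profile
noisy_mc = rng.poisson(true_scatter).astype(float)  # noisy MC aggregate
smooth_fit = true_scatter                           # stand-in fitted function

r = pearson_r(noisy_mc, smooth_fit)  # high r indicates the fit may stop the MC
```

A threshold on r of this kind is what lets the concurrently running simulations terminate early once the fit tracks the aggregated response well enough.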
DOE Office of Scientific and Technical Information (OSTI.GOV)
Artemyev, A. V., E-mail: ante0226@gmail.com; Mourenas, D.; Krasnoselskikh, V. V.
2015-06-15
In this paper, we study relativistic electron scattering by fast magnetosonic waves. We compare results of test-particle simulations and quasi-linear theory for different wave spectra to investigate how a fine structure of the wave emission can influence electron resonant scattering. We show that for a realistically wide distribution of wave normal angles θ (i.e., when the dispersion δθ ≥ 0.5°), relativistic electron scattering is similar for a wide wave spectrum and for a spectrum consisting of well-separated ion cyclotron harmonics. Comparisons of test-particle simulations with quasi-linear theory show that for δθ > 0.5°, the quasi-linear approximation describes resonant scattering correctly for a large enough plasma frequency. For a very narrow θ distribution (when δθ ∼ 0.05°), however, the effect of a fine structure in the wave spectrum becomes important. In this case, quasi-linear theory clearly fails to accurately describe electron scattering by fast magnetosonic waves. We also study the effect of high wave amplitudes on relativistic electron scattering. For typical conditions in the Earth's radiation belts, the quasi-linear approximation cannot accurately describe electron scattering for waves with averaged amplitudes >300 pT. We discuss various applications of the obtained results for modeling electron dynamics in the radiation belts and in the Earth's magnetotail.
Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel
2012-02-13
Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography (PET) isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small-animal imaging. The noise equivalent count rate (NECR), the scatter fraction (SF), and the gamma-prompt fraction (GF) were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization (2DOSEM), and 3DOSEM with maximum a posteriori (3DOSEM/MAP) algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform-distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact on the NECR curve variation. The activity concentration of 124I measured in the uniform region of the image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation.
Nevertheless, an image quality and an activity quantification assessment were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.
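The counting-rate figures of merit used in this entry (NECR, SF) have standard textbook definitions that can be computed directly; a minimal sketch, with invented rates rather than the paper's measured data:

```python
def necr(trues, scatter, randoms):
    """Noise equivalent count rate (standard definition):
    NECR = T^2 / (T + S + R), where T, S, R are the true,
    scattered and random coincidence rates (counts/s)."""
    total = trues + scatter + randoms
    return trues ** 2 / total if total > 0 else 0.0

def scatter_fraction(trues, scatter):
    """Scatter fraction SF = S / (T + S)."""
    return scatter / (trues + scatter)

# Illustrative (not measured) rates for a mouse-sized phantom:
T, S, R = 120e3, 40e3, 25e3
print(f"NECR = {necr(T, S, R):.0f} cps, SF = {scatter_fraction(T, S):.2f}")
```

Sweeping these figures over candidate energy windows, as the authors do, picks the window that maximizes NECR while keeping SF (and, for 124I, the gamma-prompt fraction) low.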
Speed-dependent collision effects on radar back-scattering from the ionosphere
NASA Technical Reports Server (NTRS)
Theimer, O.
1981-01-01
A computer code was developed to accurately compute the fluctuation spectrum for linearly speed-dependent collision frequencies. The effect of ignoring the speed dependence on estimates of ionospheric parameters was determined. It is shown that disagreements between the rocket and the incoherent-scatter estimates could be partially resolved if the correct speed dependence of the ion-neutral collision frequency is taken into account. This problem is also relevant to the study of ionospheric irregularities in the auroral E-region and their effects on radio communication with satellites.
NASA Astrophysics Data System (ADS)
Sunar, Ulas; Rohrbach, Daniel; Morgan, Janet; Zeitouni, Natalie
2013-03-01
Photodynamic therapy (PDT) has proven to be an effective treatment option for nonmelanoma skin cancers. The ability to quantify the concentration of drug in the treated area is crucial for effective treatment planning as well as for predicting outcomes. We utilized spatial frequency domain imaging to accurately quantify the concentration of protoporphyrin IX (PpIX) in phantoms and in vivo. We corrected the fluorescence for the effects of native tissue absorption and scattering. First, we quantified the absorption and scattering of the tissue non-invasively; then we corrected the raw fluorescence signal by compensating for the optical properties to obtain the absolute drug concentration. After the phantom experiments, we used a basal cell carcinoma (BCC) model in Gli mice to determine optical properties and drug concentration in vivo prior to PDT.
NASA Astrophysics Data System (ADS)
Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung
2010-02-01
While the scattering phase for several one-dimensional potentials can be derived exactly, much less is known for multi-dimensional quantum systems. This work provides a method to extend one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated using Bogomolny's transfer operator method applied to two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as the Gutzwiller trace formula, dynamical zeta functions, and the semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.
SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Y; Wu, P; Mao, T
2016-06-15
Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress shading artifacts and improve image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is no longer altered. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT-number error from over 200 HU to less than 50 HU and increases spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that image quality is remarkably improved.
The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists of the Ministry of Science and Technology of China (Grant No. 2015AA020917).
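The iterative correction loop described in the Methods above can be sketched as follows. This is a deliberately simplified 2-D stand-in: `reconstruct` and `forward_project` are placeholder callables for the FDK reconstruction and ray-driven projection, the segmentation is a crude threshold, and an FFT mask stands in for the low-pass filter.

```python
import numpy as np

def lowpass(p, keep=3):
    """Crude FFT low-pass filter: keep only the lowest spatial
    frequencies (a stand-in for the paper's low-pass filtering step)."""
    F = np.fft.fft2(p)
    mask = np.zeros(F.shape)
    mask[:keep, :keep] = mask[:keep, -keep:] = 1.0
    mask[-keep:, :keep] = mask[-keep:, -keep:] = 1.0
    return np.fft.ifft2(F * mask).real

def correct_scatter(raw_proj, reconstruct, forward_project, n_iter=3):
    """Iterative projection-domain scatter correction, following the
    abstract's loop: reconstruct -> segment a template -> simulate
    scatter-free projections -> low-pass the difference as the scatter
    estimate -> subtract from the raw projections.  `reconstruct` and
    `forward_project` are assumed callables; projections are one 2-D array."""
    proj = raw_proj.astype(float)
    for _ in range(n_iter):
        image = reconstruct(proj)
        hot = image > image.mean()
        # crude piecewise-constant segmentation as the template image
        template = np.where(hot, image[hot].mean(), 0.0)
        scatter = lowpass(raw_proj - forward_project(template))
        proj = raw_proj - np.clip(scatter, 0.0, None)
    return proj
```

In the real method the loop runs until the template stops changing; a fixed iteration count is used here only to keep the sketch short.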
Retrieval of background surface reflectance with BRD components from pre-running BRDF
NASA Astrophysics Data System (ADS)
Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo
2016-10-01
Many countries launch satellites to observe the Earth's surface. As the importance of surface remote sensing has increased, surface reflectance has become a core parameter of the ground climate. However, observing surface reflectance from satellites has weaknesses, such as limited temporal resolution and sensitivity to view and solar angles. The bidirectional effects on surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. Correcting this bidirectional error requires a model that normalizes the sensor data; the Bidirectional Reflectance Distribution Function (BRDF) improves accuracy by correcting the three scattering components (isotropic, geometric, and volumetric scattering). In this study we apply the BRDF to retrieve the Background Surface Reflectance (BSR) in two steps. The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: we pre-run the BRDF with observed surface reflectance from SPOT/VEGETATION (VGT-S1) and angular data to obtain the BRD coefficients used to calculate the scattering terms. In the second step, we apply the BRDF again in the opposite direction, with the BRD coefficients and angular data, to retrieve the BSR. As a result, the BSR is very similar to the VGT-S1 reflectance, and the retrieved values appear adequate: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compare the reflectance of clear-sky pixels identified with the SPOT/VGT status map data.
Comparing BSR with VGT-S1, the bias ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545. These are very reasonable results, confirming that the BSR is similar to VGT-S1. A weakness of this study is missing pixels in the BSR, which occur where too few observations are available to retrieve the BRD components. If these missing pixels are filled, the BSR will support surface product retrieval with better accuracy, and we expect it will then be useful for reflectance-based surface products such as cloud masking and aerosol retrieval.
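The two-step kernel-driven scheme above (fit BRD coefficients, then run the model forward for a normalized geometry) reduces to a linear least-squares problem; a minimal sketch assuming precomputed kernel values (e.g. Ross-Thick/Li-Sparse), not the authors' exact implementation:

```python
import numpy as np

def fit_brd_coefficients(reflectances, k_vol, k_geo):
    """Least-squares fit of the three kernel-driven BRDF coefficients
    (isotropic, volumetric, geometric) -- the 'BRD coefficients' of the
    abstract.  k_vol/k_geo hold the kernel values for each observation's
    sun/view geometry, assumed precomputed from angular data."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, reflectances, rcond=None)
    return coeffs  # [f_iso, f_vol, f_geo]

def model_reflectance(coeffs, k_vol, k_geo):
    """Forward model: run the fitted BRDF 'in the opposite direction'
    to predict reflectance for a new (normalized) geometry."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol + f_geo * k_geo
```

The fit needs several cloud-free observations per pixel, which is why pixels observed too few times end up missing from the BSR.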
NASA Astrophysics Data System (ADS)
Natraj, Vijay; Li, King-Fai; Yung, Yuk L.
2009-02-01
Tables that have been used as a reference for nearly 50 years for the intensity and polarization of reflected and transmitted light in Rayleigh scattering atmospheres have been found to be inaccurate, even to four decimal places. We convert the integral equations describing the X and Y functions into a pair of coupled integro-differential equations that can be efficiently solved numerically. Special care has been taken in evaluating Cauchy principal value integrals and their derivatives that appear in the solution of the Rayleigh scattering problem. The new approach gives results accurate to eight decimal places for the entire range of tabulation (optical thicknesses 0.02-1.0, surface reflectances 0-0.8, solar and viewing zenith angles 0°-88.85°, and relative azimuth angles 0°-180°), including the most difficult case of direct transmission in the direction of the sun. Revised tables have been created and stored electronically for easy reference by the planetary science and astrophysics community.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
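Analytical first-order Compton estimators of this kind are built on the Klein-Nishina differential cross section evaluated along each scatter path; a standard-physics sketch (not code from the paper):

```python
import math

R_E = 2.8179403262e-15     # classical electron radius (m)
M_E_C2_KEV = 510.99895     # electron rest energy (keV)

def klein_nishina(energy_kev, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
    for Compton scattering of a photon of the given energy through
    angle theta (radians) -- the kernel summed over scatter sites in
    first-order analytical Compton estimators."""
    alpha = energy_kev / M_E_C2_KEV
    ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))  # E'/E
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)
```

At theta = 0 the expression collapses to the classical value r_e², and it falls off toward backscatter, which is why forward-directed single scatter dominates cone-beam geometries.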
A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.
Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R
2016-10-01
In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. 
The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
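The standard TEW estimate and the unscattered-leakage compensation described above can be sketched as follows; the window edges, FWHM, and counts are illustrative assumptions, not the paper's calibrated camera-head values:

```python
import math

def gauss_window_fraction(e_photon, fwhm, lo, hi):
    """Fraction of a Gaussian photopeak (mean e_photon, given FWHM in
    the same energy units) detected between energies lo and hi -- used
    to estimate how many UNscattered photons leak into a scatter window."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    cdf = lambda e: 0.5 * (1.0 + math.erf((e - e_photon) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
    """Standard triple-energy-window estimate of scatter counts in the
    photopeak window (trapezoidal approximation)."""
    return (c_low / w_low + c_up / w_up) * w_peak / 2.0

# Illustrative modified-TEW step (window edges are assumptions): remove
# the unscattered leakage from the lower scatter window before TEW.
c_peak, c_low, c_up = 50000.0, 4000.0, 1500.0
leak = gauss_window_fraction(140.5, 14.0, 120.0, 126.0)      # Tc-99m, ~10% FWHM
peak_frac = gauss_window_fraction(140.5, 14.0, 126.0, 154.0)
c_low_corr = c_low - c_peak * leak / peak_frac
print(tew_scatter(c_low, c_up, 6.0, 6.0, 28.0),
      tew_scatter(c_low_corr, c_up, 6.0, 6.0, 28.0))
```

Subtracting the leakage lowers the scatter estimate, which is the direction of the bias the authors report for standard TEW (over-subtraction of roughly 9-14%).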
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty in data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only once off particles in the atmosphere before reaching the receiver, and a simple linear relationship between physical property and lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple scattering returns are clear signals, the lack of a fast-enough lidar multiple-scattering computation tool forces us to treat the signal as unwanted "noise" and use simple multiple-scattering correction schemes to remove it. Such treatments waste the multiple scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple-scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is computed with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows: 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem.
The majority of the radiative transfer computation goes to matrix inversion processes, FFT and inverse Laplace transforms. 2. Hardware solutions: Perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves data quality of current lidar mission such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
Identifying the theory of dark matter with direct detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.
2015-12-01
Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
Smith, Peter D [Santa Fe, NM; Claytor, Thomas N [White Rock, NM; Berry, Phillip C [Albuquerque, NM; Hills, Charles R [Los Alamos, NM
2010-10-12
An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and allows the inclusion of detailed models of photon transport and detection physics to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
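The core of most IR algorithms reviewed above is an iterative update such as MLEM, into whose system matrix the physics models (finite resolution, scatter, spectrum) are folded; a minimal dense-matrix sketch, not a clinical implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for y ~ Poisson(A @ x).
    A is a (measurements x voxels) system matrix; the physics models
    discussed in the review (resolution blur, scatter, spectral
    effects) would be absorbed into A or into additive terms."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative update preserves non-negativity and converges to the maximum-likelihood solution for consistent Poisson data, which is why MLEM (and its ordered-subset accelerations) underpins most of the model-based corrections the review surveys.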
On the radiative properties of soot aggregates part 1: Necking and overlapping
NASA Astrophysics Data System (ADS)
Yon, J.; Bescond, A.; Liu, F.
2015-09-01
There is a strong interest in accurately modelling the radiative properties of soot aggregates (also known as black carbon particles) emitted from combustion systems and fires, to gain improved understanding of the role of black carbon in global warming. This study conducted a systematic investigation of the effects of overlapping and necking between neighbouring primary particles on the radiative properties of soot aggregates using the discrete dipole approximation. The degrees of overlapping and necking are quantified by the overlapping and necking parameters. Realistic soot aggregates were generated numerically by adding overlapping and necking to fractal aggregates formed by point-touch primary particles simulated using a diffusion-limited cluster aggregation algorithm. Radiative properties (differential scattering, absorption, total scattering, specific extinction, asymmetry factor and single scattering albedo) were calculated using the experimentally measured soot refractive index over the spectral range of 266-1064 nm for 9 combinations of the overlapping and necking parameters. Overlapping and necking significantly affect the absorption and scattering properties of soot aggregates, especially in the near-UV spectrum, due to the enhanced multiple scattering effects within an aggregate. By using correctly modified aggregate properties (fractal dimension, prefactor, primary particle radius, and number of primary particles) and by accounting for the effects of multiple scattering, the simple Rayleigh-Debye-Gans theory for fractal aggregates can reproduce the radiative properties of realistic soot aggregates with reasonable accuracy.
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell; in the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
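In the simplest non-scattering radiative-transfer model, the excess-brightness-temperature idea reduces to inverting for the slant optical depth and then undoing the two-way attenuation of the radar cross section; a textbook sketch, not the report's full iterative algorithm (which also estimates the surface contribution):

```python
import math

def optical_depth(t_b, t_surface, t_atm_eff):
    """Slant optical depth tau from a radiometer brightness temperature,
    using the simple non-scattering model
        T_B = T_s * exp(-tau) + T_atm * (1 - exp(-tau)),
    where t_surface is the (estimated) surface brightness contribution
    and t_atm_eff an effective atmospheric temperature."""
    transmittance = (t_atm_eff - t_b) / (t_atm_eff - t_surface)
    return -math.log(transmittance)

def correct_sigma0(sigma0_measured, tau):
    """Undo two-way atmospheric attenuation of the scattering
    coefficient along the same slant path."""
    return sigma0_measured * math.exp(2.0 * tau)

# Illustrative numbers (Kelvin), not values from the report:
tau = optical_depth(t_b=180.0, t_surface=150.0, t_atm_eff=275.0)
print(tau, correct_sigma0(0.02, tau))
```

The need for an iterative solution in the report follows from the surface term: t_surface itself depends on the wind being retrieved, so the correction and the wind retrieval must be alternated.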
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the Small-Angle Modification of the RTE and the single-scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single-scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The obtained results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice cloud.
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Brown, James W.; Evans, Robert H.
1988-01-01
The radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols has been determined with an exact multiple scattering code to improve the analysis of Nimbus-7 CZCS imagery. It is shown that the single scattering approximation normally used to compute this radiance can result in errors of up to 5 percent for small and moderate solar zenith angles. A scheme to include the effect of variations in the surface pressure in the exact computation of the Rayleigh radiance is discussed. The results of an application of these computations to CZCS imagery suggest that accurate atmospheric corrections can be obtained for solar zenith angles at least as large as 65 deg.
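The single-scattering Rayleigh radiance and the pressure scaling of Rayleigh optical thickness discussed above follow standard forms; a sketch using the scalar (unpolarized) phase function, whereas the exact multiple-scattering code of the paper also carries polarization:

```python
import math

def rayleigh_phase(scatter_angle):
    """Scalar Rayleigh phase function (3/4)(1 + cos^2 theta),
    normalized so its average over the sphere is 1."""
    return 0.75 * (1.0 + math.cos(scatter_angle) ** 2)

def rayleigh_tau(tau_std, pressure_hpa, p0_hpa=1013.25):
    """Rayleigh optical thickness scales linearly with surface
    pressure -- the pressure adjustment mentioned in the abstract."""
    return tau_std * pressure_hpa / p0_hpa

def single_scatter_radiance(f0, tau_r, theta_v, scatter_angle):
    """First-order (single-scattering) Rayleigh path radiance for
    extraterrestrial irradiance f0 and view zenith angle theta_v; the
    abstract notes this approximation can err by up to ~5%."""
    return f0 * tau_r * rayleigh_phase(scatter_angle) / (4.0 * math.pi * math.cos(theta_v))
```

Comparing this single-scattering value against the exact multiple-scattering result at moderate solar zenith angles reproduces the kind of percent-level discrepancy the abstract quantifies.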
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. The maximum recommended injected activity per body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
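The nonuniformity metric described above is a standard coefficient of variation. A minimal sketch of that calculation, assuming the segment values come from some myocardial segmentation (the function name and the population standard deviation convention are illustrative, not from the paper):

```python
import statistics

def nonuniformity_cv_pct(segment_values):
    """Coefficient of variation (%) of the myocardial activity
    distribution: population standard deviation over the mean."""
    mean = statistics.fmean(segment_values)
    return 100.0 * statistics.pstdev(segment_values) / mean
```

A perfectly uniform distribution returns 0%, and the 3%-16% range reported above would correspond to increasingly heterogeneous segment activities.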
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. 
With all the techniques employed, we achieved a computation time of less than 30 s, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make it attractive for clinical use. PMID:25860299
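The scatter-removal step in the workflow above can be sketched in a few lines: the Monte Carlo scatter estimate is subtracted from the raw projection intensity before log-conversion to a line integral. A minimal illustration, assuming a simple monochromatic Beer-Lambert model and illustrative variable names (not from the paper):

```python
import math

def corrected_line_integral(i_meas, scatter_est, i_flood):
    """Subtract the estimated scatter from a raw projection sample,
    then convert the remaining primary signal to a line integral."""
    primary = i_meas - scatter_est
    if primary <= 0.0:
        raise ValueError("scatter estimate exceeds measured intensity")
    return -math.log(primary / i_flood)
```

With an accurate scatter estimate, the corrected line integral recovers the attenuation of the primary beam alone, which is what removes the HU bias reported above.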
Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging
Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.
2017-01-01
SPECT and, in particular, PET are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values, several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification and methods to address the challenges for each multimodal combination. PMID:26576737
NASA Technical Reports Server (NTRS)
Fahr, A.; Braun, W.; Kurylo, M. J.
1993-01-01
Ultraviolet absorption cross sections of CH3CFCl2 (HCFC-141b) were determined in the gas phase (190-260 nm) and liquid phase (230-260 nm) at 298 K. The liquid phase absorption cross sections were then converted into accurate gas phase values using a previously described procedure. It has been demonstrated that scattered light from the shorter-wavelength region (as little as several parts per thousand) can seriously compromise the absorption cross-section measurement, particularly at longer wavelengths where cross sections are low, and can be a source of discrepancies in the cross sections of weakly absorbing halocarbons reported in the literature. A modeling procedure was developed to assess the effect of scattered light on the measured absorption cross section in our experiments, thereby permitting appropriate corrections to be made on the experimental values. Modeled and experimental results were found to be in good agreement. Experimental results from this study were compared with other available determinations and provide accurate input for calculating the atmospheric lifetime of HCFC-141b.
A Q-Band Free-Space Characterization of Carbon Nanotube Composites
Hassan, Ahmed M.; Garboczi, Edward J.
2016-01-01
We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959
NASA Astrophysics Data System (ADS)
Konik, Arda; Madsen, Mark T.; Sunderland, John J.
2012-10-01
In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially smaller than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV 90% and 245 keV 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV), including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies for 99mTc and 111In. However, more sophisticated methods may be needed for 125I.
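The scatter fraction reported throughout the abstract above is the standard ratio of scattered to total detected counts in the energy window. A one-line sketch of the definition:

```python
def scatter_fraction(scatter_counts, primary_counts):
    """Scatter fraction S / (S + P): scattered counts over total
    (scattered plus primary) counts detected in the energy window."""
    return scatter_counts / (scatter_counts + primary_counts)
```

For example, the 4-10% figures quoted for 99mTc in MOBY phantoms mean that 4-10 of every 100 photopeak-window counts are scattered events.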
Experimental validation of a multi-energy x-ray adapted scatter separation method
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-12-01
Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
The inter-comparison of the reflective solar bands between instruments onboard a geostationary orbit satellite and a low Earth orbit satellite is very helpful to assess their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 on November 29, 2016, when it reached geostationary orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. Simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduced impact from pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (29.0° S, 139.8° E). Due to the difference in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction: the satellite sensors measure top-of-atmosphere reflectance, so the scattering, especially Rayleigh scattering, should be removed to allow the ground reflectance to be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance should be corrected to obtain comparable measurements.
The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum (6SV) model, and the BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good agreement with VIIRS band M3, with a difference of 0.15%. AHI band 5 (1.61 μm) shows the largest difference, in comparison with VIIRS band M10.
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, emitted in directions that differ by 180°. The path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits a detector pair. The range of scattering angles is then calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along the moving path. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct for scattered γ-photons and improve the test accuracy.
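The scattering-angle range derived from the energy window follows from the Compton formula, E' = E / (1 + (E/m_e c²)(1 − cos θ)). A minimal sketch of the angle bound implied by the lower window edge, assuming single Compton scattering (the function name is illustrative, not from the paper):

```python
import math

def max_compton_angle_deg(e0_kev, e_min_kev, m_e_kev=511.0):
    """Largest single-scatter angle that keeps a Compton-scattered
    photon of initial energy e0_kev above the lower energy-window
    bound e_min_kev (assumes e_min_kev <= e0_kev)."""
    cos_theta = 1.0 - m_e_kev * (1.0 / e_min_kev - 1.0 / e0_kev)
    if cos_theta < -1.0:
        return 180.0  # the whole angular range stays inside the window
    return math.degrees(math.acos(cos_theta))
```

For a 511 keV photon, a lower window edge at 255.5 keV (half the initial energy) admits scattering angles up to exactly 90°.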
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_μb^AC, obtained with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_μb^AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
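The TEW method used as the comparison baseline above estimates photopeak scatter from two narrow sub-windows flanking the photopeak, via a trapezoidal area. A minimal sketch of the conventional formula (the function and argument names are illustrative):

```python
def tew_scatter_estimate(c_lower, c_upper, w_main, w_sub):
    """Triple-energy-window scatter estimate in the main photopeak
    window: trapezoid spanned by the count densities (counts per keV)
    of the lower and upper sub-windows, of width w_sub each."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0
```

The estimate is then subtracted pixel-by-pixel from the photopeak projection before reconstruction.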
Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang
2014-11-01
Eliminating the influence of turbidity is a key technical problem in direct spectroscopic detection of chemical oxygen demand (COD). UV-visible spectroscopic detection of water quality parameters depends on an accurate and effective analytical model, and turbidity is an important parameter affecting that model. In this work, formazine turbidity solutions and standard potassium hydrogen phthalate solutions were selected to study the effect of turbidity on UV-visible absorption spectroscopic detection of COD. At the characteristic wavelengths of 245, 300, 360 and 560 nm, the variation of absorbance with turbidity was analyzed by least-squares curve fitting. The results show that in the ultraviolet range of 240 to 380 nm, the turbidity effect is relatively complicated to characterize, because the particles causing turbidity are themselves compounds related to the organics; in the visible region of 380 to 780 nm, the turbidity contribution to the spectrum weakens as the wavelength increases. Based on this, multiplicative scatter correction was applied to calibrate the water-sample spectra for turbidity. Comparison of the spectra before and after treatment shows that the baseline shift caused by turbidity was effectively corrected, while the spectral features in the ultraviolet region were not diminished. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method improves the signal-to-noise ratio of spectroscopic COD detection and provides an efficient data-conditioning step for establishing accurate chemical measurement models.
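Multiplicative scatter correction, the calibration technique applied above, regresses each spectrum against a reference (usually the mean spectrum of the set) and removes the fitted additive offset and multiplicative slope. A minimal sketch of the standard chemometrics form:

```python
def msc_correct(spectrum, reference):
    """Multiplicative scatter correction: fit spectrum ~ a + b*reference
    by least squares, then return the corrected spectrum (x - a) / b."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_s = sum(spectrum) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(reference, spectrum))
    var = sum((r - mean_r) ** 2 for r in reference)
    b = cov / var
    a = mean_s - b * mean_r
    return [(s - a) / b for s in spectrum]
```

A spectrum that is only baseline-shifted and scaled relative to the reference is mapped back onto the reference exactly, which is why the turbidity-induced baseline shift is removed while genuine absorption features survive.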
SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jingqian, W; Wang, Q; Zhang, X
2015-06-15
Purpose: To investigate the feasibility of using scatter corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variations were observed when the patient anatomy changed. Adaptive planning using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Park, Y; Sharp, G
Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
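The WEPL comparison described above reduces to integrating relative stopping power (RSP) along a ray in each image and differencing the results. A minimal sketch, assuming uniformly spaced RSP samples along the same ray in both images (the function names are illustrative, not from the paper):

```python
def wepl_mm(rsp_samples, step_mm):
    """Water-equivalent path length: sum of relative stopping power
    samples along a ray times the uniform sampling step."""
    return sum(rsp_samples) * step_mm

def range_shift_mm(rsp_plan, rsp_cbct, step_mm):
    """Proton range variation as the distal WEPL difference between
    the treatment CBCT and the planning CT along the same ray."""
    return wepl_mm(rsp_cbct, step_mm) - wepl_mm(rsp_plan, step_mm)
```

For example, weight loss that replaces the last few millimetres of tissue (RSP ≈ 1) with air (RSP ≈ 0) produces a negative WEPL difference, i.e. a range overshoot of that magnitude.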
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practice. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while simultaneously avoiding overly large calculated inner scatter values and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the processing flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality of scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
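The contrast-to-noise ratio figure of merit quoted above has a standard form: ROI-to-background mean contrast over background noise. A minimal sketch, assuming the common definition (conventions vary, e.g. in the choice of noise region):

```python
import statistics

def cnr(roi_values, background_values):
    """Contrast-to-noise ratio: absolute difference of ROI and
    background means, divided by the background standard deviation."""
    contrast = abs(statistics.fmean(roi_values) - statistics.fmean(background_values))
    return contrast / statistics.pstdev(background_values)
```

Scatter correction improves CNR both by restoring the true contrast (removing the low-frequency scatter bias) and, when noise is controlled, by leaving the background variability largely unchanged.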
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moved back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by CT number error in comparison with a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved with a blocker lead strip width of 8 mm and a gap of 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
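The key idea above is that scatter varies slowly across the detector, so values measured behind the blocker strips can be interpolated into the unblocked region. A minimal 1D sketch of that interpolation step; the paper uses cubic B-splines, but plain linear interpolation is substituted here for brevity:

```python
def interpolate_scatter(blocked_pos, blocked_vals, query_pos):
    """Estimate scatter at unblocked detector positions by linearly
    interpolating between samples measured behind the blocker strips.
    blocked_pos must be sorted and bracket every query position."""
    out = []
    for q in query_pos:
        for (x0, y0), (x1, y1) in zip(zip(blocked_pos, blocked_vals),
                                      zip(blocked_pos[1:], blocked_vals[1:])):
            if x0 <= q <= x1:
                t = (q - x0) / (x1 - x0)
                out.append(y0 + t * (y1 - y0))
                break
    return out
```

Wider gaps between strips leave more unblocked data for reconstruction but space the scatter samples further apart, which is exactly the trade-off the geometric optimization above explores.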
Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2000-01-01
Two numerical models to simulate an enhanced very early time electromagnetic (VETEM) prototype system, used for buried-object detection and environmental problems, are presented. In the first model, the transmitting and receiving loop antennas are accurately analyzed using the method of moments (MoM), and conjugate gradient (CG) methods with the fast Fourier transform (FFT) are then utilized to investigate the scattering from buried conducting plates. In the second model, two magnetic dipoles are used to replace the transmitter and receiver. Both the theory and formulation are shown to be correct, and the simulation results for the primary magnetic field and the reflected magnetic field are accurate.
A Novel Simple Phantom for Verifying the Dose of Radiation Therapy
Lee, J. H.; Chang, L. T.; Shiau, A. C.; Chen, C. W.; Liao, Y. J.; Li, W. J.; Lee, M. S.; Hsu, S. M.
2015-01-01
A standard protocol of dosimetric measurements is used by the organizations responsible for verifying that the doses delivered in radiation-therapy institutions are within authorized limits. This study evaluated a self-designed simple auditing phantom for use in verifying the dose of radiation therapy; the phantom design, dose audit system, and clinical tests are described. Thermoluminescent dosimeters (TLDs) were used as postal dosimeters, and mailable phantoms were produced for use in postal audits. Correction factors are important for converting TLD readout values from phantoms into the absorbed dose in water. The phantom scatter correction factor was used to quantify the difference in the scattered dose between a solid water phantom and homemade phantoms; its value ranged from 1.084 to 1.031. The energy-dependence correction factor was used to compare the TLD readout of the unit dose irradiated by audit beam energies with 60Co in the solid water phantom; its value was 0.99 to 1.01. The setup-condition factor was used to correct for differences in dose-output calibration conditions. Clinical tests of the device calibrating the dose output revealed that the dose deviation was within 3%. Therefore, our homemade phantoms and dosimetric system can be applied for accurately verifying the doses applied in radiation-therapy institutions. PMID:25883980
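The dose audit described above chains a calibration factor with the three correction factors (phantom scatter, energy dependence, setup condition) to convert a TLD readout into absorbed dose in water. A minimal sketch of that chain, with illustrative function names (not from the paper):

```python
def tld_dose(reading, cal_factor, phantom_scatter, energy_corr, setup_corr):
    """Convert a TLD readout to absorbed dose in water by applying the
    calibration factor and the three correction factors in sequence."""
    return reading * cal_factor * phantom_scatter * energy_corr * setup_corr

def dose_deviation_pct(measured, stated):
    """Percent deviation of the audited dose from the institution's
    stated dose; the clinical tests above report values within 3%."""
    return 100.0 * (measured - stated) / stated
```

With the reported factor ranges (phantom scatter 1.031-1.084, energy correction 0.99-1.01), the combined correction shifts the raw readout by a few percent, which is why the factors must be characterized individually.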
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.
Gordon, H R; Brown, J W; Evans, R H
1988-03-01
For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure could be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.
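Two of the quantities discussed above admit a compact sketch: the single-scattering Rayleigh reflectance (the approximation the exact code improves upon) and the linear scaling of Rayleigh optical depth with surface pressure that underlies the pressure-variation scheme. The unpolarized forms below are standard textbook expressions, not the paper's exact computation, and the function names are illustrative:

```python
import math

def rayleigh_single_scatter_reflectance(tau_r, theta_0_deg, theta_v_deg, scat_deg):
    """Unpolarized single-scattering Rayleigh reflectance for optical
    depth tau_r, solar/view zenith angles, and scattering angle."""
    mu0 = math.cos(math.radians(theta_0_deg))
    muv = math.cos(math.radians(theta_v_deg))
    # Unpolarized Rayleigh phase function
    phase = 0.75 * (1.0 + math.cos(math.radians(scat_deg)) ** 2)
    return tau_r * phase / (4.0 * mu0 * muv)

def pressure_scaled_tau(tau_std, pressure_hpa, p_std_hpa=1013.25):
    """Rayleigh optical depth scales linearly with surface pressure."""
    return tau_std * pressure_hpa / p_std_hpa
```

The 5% single-scattering error quoted above is the discrepancy between this kind of approximation and the exact polarized multiple-scattering result.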
NASA Astrophysics Data System (ADS)
Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.
2018-06-01
A novel experimental setup has been implemented to provide accurate electron scattering cross sections from molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam from a molecular target. High-resolution electron energy is achieved through confinement in a magnetic gas trap where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.
2010-01-01
throughout the entire 3D volume, which made quantification of the different tissues in the breast possible. The peaks representing glandular and fat in...coefficients. Keywords: tissue quantification, absolute attenuation coefficient, scatter correction, computed tomography, tomography... tissue types.1-4 Accurate measurements of the quantification and differentiation of numerous tissues can be useful to identify disease from
Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.
Temel, Burcin; Mills, Greg; Metiu, Horia
2008-03-27
We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than the matrix inversion required in the KVP. We also avoid errors due to scattering off the boundaries, which present substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the Chebyshev basis allows this boundary condition to be implemented accurately. Chebyshev polynomials also permit a rapid and accurate evaluation of the kinetic energy: the basis is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics that cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
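A toy numerical sketch of the MEM idea, minimizing ||(H - E)psi - b||^2 by preconditioned CG on the normal equations. Assumptions: a finite-difference grid stands in for the Chebyshev basis, the potential and source term are invented, and the energy is chosen below the continuum so the operator is definite; a real scattering run needs the asymptotic boundary matching the abstract describes.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg, LinearOperator

# Toy 1D Hamiltonian on a uniform grid (finite differences in place of
# the Chebyshev representation used in the paper).
n, dx = 100, 0.1
x = np.arange(n) * dx
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
V = diags(0.5 * np.exp(-((x - 5.0) ** 2)))   # hypothetical potential
E = -1.0                                     # below the continuum: A is definite
A = (-0.5 * lap + V - E * identity(n)).tocsr()
b = np.zeros(n)
b[0] = 1.0                                   # stand-in source/boundary term

# Minimize ||A psi - b||^2 via CG on the normal equations A^T A psi = A^T b,
# with a Jacobi (diagonal) preconditioner as a crude analogue of the
# preconditioning discussed in the abstract.
AtA = (A.T @ A).tocsr()
Atb = A.T @ b
M = LinearOperator((n, n), matvec=lambda v: v / AtA.diagonal())
psi, info = cg(AtA, Atb, M=M, maxiter=5000)
```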
[Spectral scatter correction of coal samples based on quasi-linear local weighted method].
Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng
2014-07-01
This paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search among three quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic and growth-curve expressions. A local weighted function is then constructed from one of four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and finely at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and on a PCA-BP neural network, to assess the accuracy of the corrected spectra. The optimal correction mode was then determined from the analytical results for the different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample show different noise levels when the sample is prepared at different particle sizes; to validate the effectiveness of the method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in the spectral peaks. The method significantly strengthens the correlation between corrected spectra and coal qualities, and substantially improves the accuracy and stability of the analytical model.
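One combination from the abstract, a quadratic fit with a Gaussian local weight, can be sketched as follows. This is a hedged illustration: the mean spectrum stands in for the ideal spectrum, and the inversion used to form the corrected value is one plausible choice, not necessarily the paper's.

```python
import numpy as np

def local_weighted_msc(spectra, bandwidth=10.0):
    """Quadratic (quasi-linear) MSC with a Gaussian local weight centred on
    each wavelength point.  spectra: array of shape (n_samples, n_wavelengths).
    Sketch only; bandwidth and the correction formula are illustrative."""
    ref = spectra.mean(axis=0)               # ideal spectrum ~ mean spectrum
    n_samples, n_wl = spectra.shape
    wl = np.arange(n_wl, dtype=float)
    corrected = np.empty_like(spectra, dtype=float)
    for i in range(n_samples):
        x = spectra[i]
        for j in range(n_wl):
            w = np.exp(-0.5 * ((wl - wl[j]) / bandwidth) ** 2)  # Gaussian kernel
            # weighted quadratic fit x ~ c0 + c1*ref + c2*ref**2
            c2, c1, c0 = np.polyfit(ref, x, 2, w=np.sqrt(w))
            corrected[i, j] = (x[j] - c0 - c2 * ref[j] ** 2) / c1
    return corrected
```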
Standardizing Type Ia supernovae optical brightness using near-infrared rebrightening time
NASA Astrophysics Data System (ADS)
Shariff, H.; Dhawan, S.; Jiao, X.; Leibundgut, B.; Trotta, R.; van Dyk, D. A.
2016-12-01
Accurate standardization of Type Ia supernovae (SNIa) is instrumental to the usage of SNIa as distance indicators. We analyse a homogeneous sample of 22 low-z SNIa, observed by the Carnegie Supernova Project in the optical and near-infrared (NIR). We study the time of the second peak in the J band, t2, as an alternative standardization parameter of SNIa peak optical brightness, as measured by the standard SALT2 parameter mB. We use BAHAMAS, a Bayesian hierarchical model for SNIa cosmology, to estimate the residual scatter in the Hubble diagram. We find that in the absence of a colour correction, t2 is a better standardization parameter compared to stretch: t2 has a 1σ posterior interval for the Hubble residual scatter of σΔμ = {0.250, 0.257} mag, compared to σΔμ = {0.280, 0.287} mag when stretch (x1) alone is used. We demonstrate that when employed together with a colour correction, t2 and stretch lead to similar residual scatter. Using colour, stretch and t2 jointly as standardization parameters does not result in any further reduction in scatter, suggesting that t2 carries redundant information with respect to stretch and colour. With a much larger SNIa NIR sample at higher redshift in the future, t2 could be a useful quantity to perform robustness checks of the standardization procedure.
Electron kinetic effects on optical diagnostics in fusion plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirnov, V. V.; Den Hartog, D. J.; Duff, J.
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP) and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. We calculate electron thermal corrections to the interferometric phase and polarization state of an EM wave propagating along tangential and poloidal chords (Faraday and Cotton-Mouton polarimetry) and perform analysis of the degree of polarization for incoherent TS. The precision of the previous lowest-order model, linear in τ = Te/mec², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.
NASA Astrophysics Data System (ADS)
Thelen, J.-C.; Havemann, S.; Taylor, J. P.
2012-06-01
Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and surface reflectance from hyperspectral radiance measurements obtained from air- or space-borne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or Hyperion on board Earth Observing-1. The new scheme consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging spectrometer operated by the NASA Jet Propulsion Laboratory.
Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E
2014-11-01
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/mec², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically exactly without any approximations for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.
Cahuantzi, Roberto; Buckley, Alastair
2017-09-01
Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors applicable for either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluorethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimization near perfect cosine dependence on the angular response of the coupler can be achieved. The PTFE designs represent a significant improvement over the state of the art with less than 0.01% error compared with ideal cosine response for angles of incidence up to 50°.
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. Its distribution is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter) but also contributions from external scatter sources: the x-ray tube, detector, collimator, x-ray filter, and the BSA itself. Excluding this background scattered radiation can be adapted to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method to separate scatter generated in the phantom (object scatter) from the external background was used, and this method was applied within a BSA algorithm to correct the object scatter. To confirm the presence of background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE), with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method corrected the object scatter: the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method for measuring object scatter can be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
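The core of the BSA idea, sampling the signal under the discs, subtracting the externally generated background, and interpolating a scatter map over the detector, can be sketched as follows (hypothetical interface, not the authors' code):

```python
import numpy as np
from scipy.interpolate import griddata

def estimate_object_scatter(image_with_bsa, disc_centers, background_scatter):
    """Sample the detector signal under each lead disc, subtract the external
    background scatter (measured without the object), and interpolate a
    smooth object-scatter map over the whole detector.  Sketch only."""
    pts = np.array(disc_centers, dtype=float)
    samples = np.array([image_with_bsa[r, c] for r, c in disc_centers])
    bg = np.array([background_scatter[r, c] for r, c in disc_centers])
    object_samples = samples - bg            # remove external scatter sources
    rows, cols = np.indices(image_with_bsa.shape)
    return griddata(pts, object_samples, (rows, cols), method='linear',
                    fill_value=object_samples.mean())
```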
Simulation of inverse Compton scattering and its implications on the scattered linewidth
NASA Astrophysics Data System (ADS)
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
Heat-Flux Measurements in Laser-Produced Plasmas Using Thomson Scattering from Electron Plasma Waves
NASA Astrophysics Data System (ADS)
Henchen, R. J.; Goncharov, V. N.; Cao, D.; Katz, J.; Froula, D. H.; Rozmus, W.
2017-10-01
An experiment was designed to measure heat flux in coronal plasmas using collective Thomson scattering. Adjustments to the electron distribution function resulting from heat flux affect the shape of the collective Thomson scattering features through wave-particle resonance. The amplitude of the Spitzer-Härm electron distribution function correction term (f1) was varied to match the data and determines the value of the heat flux. Independent measurements of temperature and density obtained from Thomson scattering were used to infer the classical heat flux (q = - κ∇Te) . Time-resolved Thomson-scattering data were obtained at five locations in the corona along the target normal in a blowoff plasma formed from a planar Al target with 1.5 kJ of 351-nm laser light in a 2-ns square pulse. The flux measured through the Thomson-scattering spectra is a factor of 5 less than the κ∇Te measurements. The lack of collisions of heat-carrying electrons suggests a nonlocal model is needed to accurately describe the heat flux. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
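The classical reference value in the comparison above is just q = -κ∇Te evaluated from the measured profiles; a minimal sketch, where the prefactor kappa0 is a placeholder and only the Spitzer-Härm Te^(5/2) scaling of the conductivity is kept:

```python
import numpy as np

def classical_heat_flux(x, Te, kappa0=1.0):
    """Classical heat flux q = -kappa * dTe/dx from a measured 1D temperature
    profile.  kappa0 is an illustrative placeholder; the Spitzer-Harm
    conductivity scales as Te**(5/2)."""
    kappa = kappa0 * Te ** 2.5
    return -kappa * np.gradient(Te, x)
```

Comparing this classical value against the flux inferred from the f1 fit to the scattering spectra is the factor-of-5 comparison the abstract reports.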
NASA Astrophysics Data System (ADS)
Babic, Steven; McNiven, Andrea; Battista, Jerry; Jordan, Kevin
2009-04-01
The dosimetry of small fields as used in stereotactic radiotherapy, radiosurgery and intensity-modulated radiation therapy can be challenging and inaccurate due to partial volume averaging effects and possible disruption of charged particle equilibrium. Consequently, there exists a need for an integrating, tissue equivalent dosimeter with high spatial resolution to avoid perturbing the radiation beam and artificially broadening the measured beam penumbra. In this work, radiochromic ferrous xylenol-orange (FX) and leuco crystal violet (LCV) micelle gels were used to measure relative dose factors (RDFs), percent depth dose profiles and relative lateral beam profiles of 6 MV x-ray pencil beams of diameter 28.1, 9.8 and 4.9 mm. The pencil beams were produced via stereotactic collimators mounted on a Varian 2100 EX linear accelerator. The gels were read using optical computed tomography (CT). Data sets were compared quantitatively with dosimetric measurements made with radiographic (Kodak EDR2) and radiochromic (GAFChromic® EBT) film, respectively. Using a fast cone-beam optical CT scanner (Vista™), corrections for diffusion in the FX gel data yielded RDFs that were comparable to those obtained by minimally diffusing LCV gels. Considering EBT film-measured RDF data as reference, cone-beam CT-scanned LCV gel data, corrected for scattered stray light, were found to be in agreement within 0.5% and -0.6% for the 9.8 and 4.9 mm diameter fields, respectively. The validity of the scattered stray light correction was confirmed by general agreement with RDF data obtained from the same LCV gel read out with a laser CT scanner that is less prone to the acceptance of scattered stray light. Percent depth dose profiles and lateral beam profiles were found to agree within experimental error for the FX gel (corrected for diffusion), LCV gel (corrected for scattered stray light), and EBT and EDR2 films. 
The results from this study reveal that a three-dimensional dosimetry method utilizing optical CT-scanned radiochromic gels allows for the acquisition of a self-consistent volumetric data set in a single exposure, with sufficient spatial resolution to accurately characterize small fields.
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that uses peripheral detection of scatter during the patient scan to acquire image and patient-specific scatter information simultaneously in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique that reconstructs and corrects for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV), followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. This design enables acquisition of the projection data on the unblocked central region and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model, combining a scatter interpolation method and a scatter convolution model, is estimated from the scatter distribution acquired on the boundary region. With this hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme.
Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan® 504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. In contrast to blocker-based techniques, which obstruct the central portion of the FOV and thereby degrade and limit the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
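The L1-DCT recovery step can be sketched with a few ISTA iterations on a toy 1D profile. This is a hedged illustration, not the paper's implementation: the mask, penalty weight and iteration count are invented for the example.

```python
import numpy as np
from scipy.fft import dct, idct

def recover_scatter_profile(measured, mask, lam=0.01, step=0.5, n_iter=2000):
    """Toy 1D sketch: fit the scatter samples available only where mask is
    True (the blocked detector edges) while penalizing the L1 norm of the
    profile's DCT coefficients, via ISTA iterations."""
    x = np.zeros_like(measured, dtype=float)
    for _ in range(n_iter):
        z = x - step * mask * (x - measured)      # gradient step on data term
        c = dct(z, norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft-threshold
        x = idct(c, norm='ortho')
    return x
```

The soft-threshold in the DCT domain is the proximal operator of the L1 penalty, which favours the smooth, low-frequency profiles expected of scatter.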
Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier
2016-01-01
Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823–1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419
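The symmetric-Gaussian decomposition that the module generalizes can be sketched with a standard least-squares fit; the elution profile below is invented test data, and US-SOMO additionally supports the skewed, modified Gaussians mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, sig):
    """Single symmetric Gaussian peak."""
    return a * np.exp(-0.5 * ((t - mu) / sig) ** 2)

def two_peaks(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian peaks (a minimal decomposition model)."""
    return gauss(t, a1, m1, s1) + gauss(t, a2, m2, s2)

# Decompose a synthetic overlapping elution profile into two Gaussians.
t = np.linspace(0.0, 10.0, 400)
profile = two_peaks(t, 1.0, 4.0, 0.6, 0.7, 5.5, 0.8)   # invented peaks
p0 = [0.9, 3.8, 0.5, 0.6, 5.8, 0.7]                     # rough initial guess
popt, _ = curve_fit(two_peaks, t, profile, p0=p0)
```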
D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis
2017-08-01
Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions in the autonomic nervous system located in areas of viable myocardium) that are substrate for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echocardiography (reference standard) to diagnose myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without, leading to fewer trigger zone diagnoses in the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late scatter-corrected thallium studies provided the best diagnostic values for myocardial necrosis assessment. Delayed, scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.
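The TEW correction used here has a standard closed form: scatter in the main window is approximated by the trapezoid spanned by the count densities (counts per keV) of two narrow flanking windows. A minimal sketch with illustrative variable names:

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window (TEW) scatter estimate for the main window, from
    counts c_lower/c_upper in flanking windows of widths w_lower/w_upper (keV)
    and a main window of width w_main (keV)."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# e.g. 100 and 60 counts in 4 keV flanking windows around a 20 keV main window:
scatter = tew_scatter_estimate(100.0, 60.0, 4.0, 4.0, 20.0)  # 400.0 counts
```

The primary-count estimate is then the main-window counts minus this scatter estimate.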
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high-quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images, generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
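The weight-fitting step described above reduces to a linear least-squares problem; a minimal sketch with toy basis images (the helper name and interface are illustrative, not the authors' code):

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    """Find weights w minimizing || sum_k w_k * B_k - prior ||^2 by linear
    least squares, where the B_k are reconstructions of the projection data
    raised to different powers.  Sketch only."""
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    corrected = sum(wk * bk for wk, bk in zip(w, basis_images))
    return w, corrected
```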
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging
NASA Astrophysics Data System (ADS)
Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.
2015-06-01
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.
NASA Astrophysics Data System (ADS)
Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas
2016-04-01
The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents potential sources of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex) instrument, the CAPS PMssa instrument combines the CAPS technology to measure particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP).
We also assess the indirect absorption coefficient measurement from the CAPS PMssa (calculated as the difference between the measured extinction and scattering). The study was carried out in the laboratory with controlled particle generation systems. We used both light absorbing aerosols (Regal 400R pigment black from Cabot Corp. and colloidal graphite - Aquadag - from Agar Scientific) and purely scattering aerosols (ammonium sulphate and polystyrene latex spheres), covering single scattering albedo values from approximately 0.4 to 1.0. A new truncation angle correction for the CAPS PMssa integrating sphere is proposed.
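The two derived quantities above follow directly from the two measured channels. A minimal sketch (function name is illustrative; both coefficients are assumed to share the same units, e.g. Mm⁻¹):

```python
def single_scattering_albedo(scattering, extinction):
    """SSA = scattering / extinction; the indirect absorption
    coefficient follows as extinction - scattering."""
    if extinction <= 0:
        raise ValueError("extinction must be positive")
    ssa = scattering / extinction
    absorption = extinction - scattering
    return ssa, absorption
```

For a purely scattering aerosol the SSA approaches 1.0 and the derived absorption approaches zero, which is why the accuracy of the indirect absorption estimate degrades as SSA rises.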
An investigation of light transport through scattering bodies with non-scattering regions.
Firbank, M; Arridge, S R; Schweiger, M; Delpy, D T
1996-04-01
Near-infra-red (NIR) spectroscopy is increasingly being used for monitoring cerebral oxygenation and haemodynamics. One current concern is the effect of the clear cerebrospinal fluid upon the distribution of light in the head. There are difficulties in modelling clear layers in scattering systems. The Monte Carlo model should handle clear regions accurately, but is too slow to be used for realistic geometries. The diffusion equation can be solved quickly for realistic geometries, but is only valid in scattering regions. In this paper we describe experiments carried out on a solid slab phantom to investigate the effect of clear regions. The experimental results were compared with the different models of light propagation. We found that the presence of a clear layer had a significant effect upon the light distribution, which was modelled correctly by Monte Carlo techniques, but not by diffusion theory. A novel approach to calculating the light transport was developed, using diffusion theory to analyze the scattering regions combined with a radiosity approach to analyze the propagation through the clear region. Results from this approach were found to agree with both the Monte Carlo and experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirnov, V. V.; Hartog, D. J. Den; Duff, J.
2014-11-15
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/m_e c², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.
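For orientation, the expansion parameter τ above is easy to evaluate: at fusion-relevant temperatures it is a few percent, which is why the τ²-order terms set the precision floor. A small sketch (the 511 keV electron rest energy is standard; the function name is illustrative):

```python
ELECTRON_REST_ENERGY_KEV = 511.0  # m_e c^2

def relativistic_parameter(te_kev):
    """tau = T_e / (m_e c^2), the small parameter of the expansion;
    linear-in-tau models neglect terms of order tau**2."""
    return te_kev / ELECTRON_REST_ENERGY_KEV
```

At T_e = 25 keV, τ ≈ 0.049 and τ² ≈ 0.0024, so second-order corrections enter at the few-tenths-of-a-percent level.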
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data.
Catalogue identifier: AELO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1299
No. of bytes in distributed program, including test data, etc.: 11 348
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: All
Operating system: Any
RAM: 1 MB
Classification: 11.2, 11.4
Nature of problem: Simulation of radiative events in polarized ep-scattering.
Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation.
Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
OMPS Limb Profiler Instrument Performance Assessment
NASA Technical Reports Server (NTRS)
Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas
2014-01-01
Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray light supports software corrections that are accurate to within 1% in radiances up to 60 km for the wavelengths used in deriving ozone. Residual stray light errors at 1000 nm, which is useful in retrievals of stratospheric aerosols, currently exceed 10%. Height registration errors in the range of 1 km to 2 km have been observed that cannot be fully explained by known error sources. An unexpected thermal sensitivity of the sensor also causes wavelengths and pointing to shift each orbit in the northern hemisphere. Spectral shifts of as much as 0.5 nm in the ultraviolet and 5 nm in the visible, and up to 0.3 km shifts in registered height, must be corrected in ground processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn
Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.).
A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable with the authors’ dedicated cardiac SPECT camera using attenuation and scatter corrections. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
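A generic dual-energy-window scatter estimate, with the low-energy-tail modification condensed into a single hypothetical `tail_ratio` parameter, might look as follows. This is an illustration of the idea, not the authors' exact calibration:

```python
def dew_scatter_estimate(peak_counts, scatter_counts, k=0.5, tail_ratio=0.0):
    """Generic DEW correction: scatter in the photopeak window is taken
    as k times the counts in a lower scatter window.

    tail_ratio (hypothetical parameter) models the fraction of
    *unscattered* photopeak counts that leak into the scatter window on
    CZT detectors (the low-energy tail); removing them first avoids
    subtracting true counts. Returns the estimated primary counts."""
    corrected_window = max(scatter_counts - tail_ratio * peak_counts, 0.0)
    scatter_in_peak = k * corrected_window
    return max(peak_counts - scatter_in_peak, 0.0)
```

With tail_ratio = 0 this reduces to the standard DEW estimate used on NaI cameras; a nonzero tail_ratio raises the primary estimate, reflecting the extra unscattered counts a CZT detector records at reduced energy.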
Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria
2010-11-07
Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we deduce a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is developed. In this method, PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. Evaluation is performed on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI performs well in CB redundancy mining and therefore has further potential in CBCT studies.
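The core intuition of PC-VI, that blocked high-frequency content is better recovered from neighbouring angular views than from the current view, can be caricatured with a simple neighbour-view average. This is a deliberately simplified stand-in for the published algorithm, assuming aligned detector pixels across views:

```python
def restore_blocked_pixels(prev_view, curr_view, next_view, blocked_mask):
    """Restore detector pixels blocked by the BSA in the current view
    from the mean of the two neighbouring angular views, instead of
    interpolating spatially within the current view."""
    restored = list(curr_view)
    for i, blocked in enumerate(blocked_mask):
        if blocked:
            restored[i] = 0.5 * (prev_view[i] + next_view[i])
    return restored
```

Spatial interpolation would smooth across the blocked region and lose edges; borrowing the same pixel from adjacent views preserves high-frequency structure that varies slowly with projection angle.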
Eldib, Mootaz; Bini, Jason; Calcagno, Claudia; Robson, Philip M; Mani, Venkatesh; Fayad, Zahi A
2014-02-01
Attenuation correction for magnetic resonance (MR) coils is a new challenge that came about with the development of combined MR and positron emission tomography (PET) imaging. This task is difficult because such coils are not directly visible on either PET or MR acquisitions with current combined scanners and are therefore not easily localized in the field of view. This issue becomes more evident when trying to localize flexible MR coils (eg, cardiac or body matrix coil) that change position and shape from patient to patient and from one imaging session to another. In this study, we proposed a novel method to localize and correct for the attenuation and scatter of a flexible MR cardiac coil, using MR fiducial markers placed on the surface of the coil to allow for accurate registration of a template computed tomography (CT)-based attenuation map. To quantify the attenuation properties of the cardiac coil, a uniform cylindrical water phantom injected with 18F-fluorodeoxyglucose (18F-FDG) was imaged on a sequential MR/PET system with and without the flexible cardiac coil. After establishing the need to correct for the attenuation of the coil, we tested the feasibility of several methods to register a precomputed attenuation map to correct for the attenuation. To accomplish this, MR and CT visible markers were placed on the surface of the cardiac flexible coil. Using only the markers as a driver for registration, the CT image was registered to the reference image through a combination of rigid and deformable registration. The accuracy of several methods was compared for the deformable registration, including B-spline, thin-plate spline, elastic body spline, and volume spline. Finally, we validated our novel approach both in phantom and patient studies. The findings from the phantom experiments indicated that the presence of the coil resulted in a 10% reduction in measured 18F-FDG activity when compared with the phantom-only scan. 
Local underestimation reached 22% in regions of interest close to the coil. Various registration methods were tested, and the volume spline was deemed to be the most accurate, as measured by the Dice similarity metric. The results of our phantom experiments showed that the bias in the 18F-FDG quantification introduced by the presence of the coil could be reduced by using our registration method. An overestimation of only 1.9% of the overall activity for the phantom scan with the coil attenuation map was measured when compared with the baseline phantom scan without the coil. A local overestimation of less than 3% was observed in the ROI analysis when using the proposed method to correct for the attenuation of the flexible cardiac coil. Quantitative results from the patient study agreed well with the phantom findings. We presented and validated an accurate method to localize and register a CT-based attenuation map to correct for the attenuation and scatter of flexible MR coils. This method may be translated to clinical use to produce quantitatively accurate measurements with the use of flexible MR coils during MR/PET imaging.
Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.
Pourmoghaddas, Amir; Wells, R Glenn
2016-11-01
Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE-Discovery NM530c and its performance was compared to a dual energy window (DEW)-SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for 99mTc-SPECT (140 ± 14 keV) were validated with point-source measurements and 15 anthropomorphic cardiac-torso phantom experiments with varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC.
For comparison, DEW scatter projections (120 ± 6 keV) were also extracted from the acquired list-mode SPECT data. Either APD or DEW scatter projections were subtracted from corresponding 140 keV measured projections and then reconstructed with AC (APD-SC and DEW-SC). Quantitative accuracy of the activity measured in the heart for the APD-SC and DEW-SC images was assessed against dose calibrator measurements. The difference between modeled and acquired projections was measured as the root-mean-squared-error (RMSE). APD-modeled projections for a clinical cardiac study were also evaluated. APD-modeled projections showed good agreement with SPECT measurements and had reduced noise compared to DEW scatter estimates. APD-SC reduced the mean error in activity measurement compared to DEW-SC, and the reduction was statistically significant where the scatter fraction (SF) was large (mean SF = 28.5%, T-test p = 0.007). APD-SC reduced measurement uncertainties as well; however, the difference was not found to be statistically significant (F-test p > 0.5). RMSE comparisons showed that elevated levels of scatter did not significantly contribute to a change in RMSE (p > 0.2). Model-based APD scatter estimation is feasible for dedicated cardiac SPECT scanners with pinhole collimators. APD-SC images performed better than DEW-SC images and improved the accuracy of activity measurement in high-scatter scenarios.
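The RMSE figure of merit used above is the standard one; a minimal sketch:

```python
import math

def rmse(modeled, measured):
    """Root-mean-squared error between modeled and acquired projections."""
    if len(modeled) != len(measured):
        raise ValueError("projection arrays must have equal length")
    return math.sqrt(sum((m - a) ** 2 for m, a in zip(modeled, measured))
                     / len(measured))
```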
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used in previous studies; the present method differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first estimates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and increases image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
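The SKS step, estimating scatter by convolving the measured projection with a scatter kernel and subtracting it, can be sketched in 1-D. The kernel values below are arbitrary placeholders, not the self-adaptive kernel estimated from the SDB:

```python
def scatter_kernel_superposition(projection, kernel):
    """Estimate the scatter distribution as the convolution of the
    measured projection with a (1-D, zero-padded) scatter kernel and
    subtract it to obtain the primary estimate."""
    half = len(kernel) // 2
    scatter = []
    for i in range(len(projection)):
        s = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(projection):
                s += projection[idx] * kv
        scatter.append(s)
    primary = [max(p - s, 0.0) for p, s in zip(projection, scatter)]
    return primary, scatter
```

The self-adaptive element of the published method amounts to fitting the kernel's amplitude and width per acquisition from the signal measured behind the blocker, rather than using a fixed kernel.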
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
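The second stage of such a retracker reduces, for the linear parameters, to ordinary least squares. A sketch for amplitude and baseline only, assuming a fixed model shape f(t); the full algorithm also iterates on the nonlinear shape parameters of the combined surface-volume model:

```python
def retrack_linear(model_shape, waveform):
    """Fit amplitude A and baseline b of w(t) = A*f(t) + b to an
    altimeter waveform by linear least squares (closed form)."""
    n = len(waveform)
    sf = sum(model_shape)
    sw = sum(waveform)
    sff = sum(f * f for f in model_shape)
    sfw = sum(f * w for f, w in zip(model_shape, waveform))
    det = n * sff - sf * sf  # zero only if f(t) is constant
    amp = (n * sfw - sf * sw) / det
    base = (sw * sff - sf * sfw) / det
    return amp, base
```

In the full retracker this closed-form step would sit inside the iterative loop: initial estimates come from the filtered waveform, and each iteration refits the linear terms while updating the nonlinear ones.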
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors.
This may allow for quantitative SPECT without implementing scatter correction, reducing uncertainties introduced by scatter correction algorithms. Funding provided by NIH/NIBIB grant R01EB013677; Todd Peterson, Ph.D., has had a research contract with PHDs Co., Knoxville, TN.
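The Chang method referenced above computes, for each voxel, the reciprocal of the attenuation factor averaged over projection angles. A first-order sketch (the default μ is a placeholder roughly appropriate for water at 140 keV; path lengths per angle are assumed precomputed):

```python
import math

def chang_correction_factor(path_lengths_cm, mu_cm1=0.15):
    """First-order Chang attenuation correction: the multiplicative
    correction at a voxel is 1 over the mean, across projection angles,
    of exp(-mu * l) for the attenuating path length l at each angle."""
    atten = [math.exp(-mu_cm1 * l) for l in path_lengths_cm]
    return 1.0 / (sum(atten) / len(atten))
```

Each reconstructed voxel is then multiplied by its correction factor; with zero path length the factor is 1, and it grows toward the phantom centre where attenuation is strongest.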
NASA Astrophysics Data System (ADS)
D'Souza, Maximian Felix
1995-01-01
The purpose of the present study was to determine the changes in regional cerebral blood flow (rCBF) with a cognitive task of semantic word retrieval (verbal fluency) in patients with multiple sclerosis (MS) and compare them with the rCBF distribution of normal controls. Two groups of patients with low and high verbal fluency scores and two groups of normal controls were selected to determine a relationship between rCBF and verbal performance. A three-detector gamma camera (TRIAD 88) was used with the radiotracer Tc-99m HMPAO and single photon emission computed tomography (SPECT) to obtain 3D rCBF maps. The performance characteristics of the camera were comprehensively studied before it was utilized for clinical studies. In addition, technical improvements were implemented in the form of scatter correction and MRI-SPECT coregistration to potentially enhance the quantitative accuracy of the rCBF data. The performance analysis of the gamma camera showed remarkable consistency among the three detector heads and yielded results consistent with the manufacturer's specification. Measurements of physical objects also showed excellent image quality. The coregistration of SPECT and MRI images allowed more accurate anatomical localization for extraction of regional blood flow information. The validation of the scatter correction technique with physical phantoms indicated marked improvements in quantitative accuracy. There was a marked difference in activation patterns between patients and normal controls. Among normal controls, individual subjects showed either an increase or a decrease in blood flow to the left frontal and temporal regions; however, on average, there was no statistically significant change. The lack of significant change may suggest large variability among the subjects chosen, or that the individual changes were not large enough to be significant.
The results from MS patients showed several left cortical areas with statistically significant changes in blood flow after cognitive activation, especially in the low-fluency group, which showed decreased flow. Scatter-corrected data yielded mostly right-sided significant increases in blood flow. Further studies must be conducted to evaluate the scatter correction technique. Additional studies on MS patients should focus on correlating lesion volume, location, and number with the rCBF distribution.
A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao
2018-05-01
This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. The method first labels a carrier solution with a radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons released by positron annihilation are recorded by the γ-photon detector ring. Finally, two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme, and the 2D image slices are merged into the 3D image of the inner cavity. To eliminate artifacts in the reconstructed image due to scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single-scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of a gas-liquid multi-phase mixture inside the inner cavity. These two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.
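The ordered-subset expectation maximization scheme named above is built from the MLEM update; OSEM simply applies the same update over subsets of projections per pass. A minimal dense-matrix sketch (the system matrix here is a toy placeholder):

```python
def mlem_update(x, system, measured):
    """One MLEM iteration:
    x_j <- (x_j / s_j) * sum_i a_ij * y_i / (A x)_i, with s_j = sum_i a_ij.
    `system` is a list of projection rows a_i; `measured` holds y_i."""
    n_pix = len(x)
    forward = [sum(row[j] * x[j] for j in range(n_pix)) for row in system]
    new_x = []
    for j in range(n_pix):
        sens = sum(row[j] for row in system)
        backproj = sum(row[j] * y / f
                       for row, y, f in zip(system, measured, forward) if f > 0)
        new_x.append(x[j] * backproj / sens if sens > 0 else 0.0)
    return new_x
```

With an identity system matrix a single update already reproduces the measured data, which is a convenient sanity check; OSEM would split `system`/`measured` into ordered subsets and apply this update to each subset in turn.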
Multiple scattering calculations of relativistic electron energy loss spectra
NASA Astrophysics Data System (ADS)
Jorissen, K.; Rehr, J. J.; Verbeeck, J.
2010-04-01
A generalization of the real-space Green’s-function approach is presented for ab initio calculations of relativistic electron energy loss spectra (EELS) which are particularly important in anisotropic materials. The approach incorporates relativistic effects in terms of the transition tensor within the dipole-selection rule. In particular, the method accounts for relativistic corrections to the magic angle in orientation resolved EELS experiments. The approach is validated by a study of the graphite C K edge, for which we present an accurate magic angle measurement consistent with the predicted value.
What are the correct ρ0(770) meson mass and width values?
NASA Astrophysics Data System (ADS)
Bartoš, Erik; Dubnička, Stanislav; Liptaj, Andrej; Dubničková, Anna Zuzana; Kamiński, Robert
2017-12-01
The accuracy of the Gounaris-Sakurai pion electromagnetic form factor model in the elastic region, in which just the ρ0(770) resonance appears, is investigated by a detailed analysis of the most accurate P-wave isovector ππ scattering phase shift δ11(t) data, obtained by the Garcia-Martin-Kamiński-Peláez-Yndurain approach, and by an application of the Unitary & Analytic pion electromagnetic structure model to a description of the newest precise data on the e+e- → π+π- process.
NASA Astrophysics Data System (ADS)
Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.
2006-07-01
An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm-1, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.
Low dose scatter correction for digital chest tomosynthesis
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2015-03-01
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, the primary sampling apparatus (PSA), was then placed above the object, and a second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
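The PSA correction described above (scatter = full-field signal minus the sampled primary, interpolated to a full-field map) can be sketched on a synthetic 1D detector row; all signal shapes and the hole spacing below are illustrative, not the authors' data.

```python
import numpy as np

# 1D sketch of primary-sampling scatter estimation for one projection.
n = 256
u = np.arange(n)
primary = 100.0 * np.exp(-((u - 128) / 60.0) ** 2)   # attenuated primary beam
scatter_true = 20.0 + 5.0 * np.sin(u / 40.0)         # smooth, slowly varying scatter
full = primary + scatter_true                        # normal (open-field) scan

holes = np.unique(np.append(u[::16], n - 1))         # PSA hole positions (incl. edge)
# primary[holes] stands in for the scatter-free signal measured through the PSA
scatter_samples = full[holes] - primary[holes]
scatter_est = np.interp(u, holes, scatter_samples)   # full-field scatter map

corrected = full - scatter_est
print(float(np.max(np.abs(corrected - primary))))    # residual error is small
```

Because scatter varies slowly across the detector, sparse sampling plus interpolation recovers it accurately, which is what keeps the extra dose of the PSA scan low.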
NASA Astrophysics Data System (ADS)
Hori, Yuki; Hirano, Yoshiyuki; Koshino, Kazuhiro; Moriguchi, Tetsuaki; Iguchi, Satoshi; Yamamoto, Akihide; Enmi, Junichiro; Kawashima, Hidekazu; Zeniya, Tsutomu; Morita, Naomi; Nakagawara, Jyoji; Casey, Michael E.; Iida, Hidehiro
2014-09-01
Use of 15O labeled oxygen (15O2) and positron emission tomography (PET) allows quantitative assessment of the regional metabolic rate of oxygen (CMRO2) in vivo, which is essential to understanding the pathological status of patients with cerebrovascular and neurological disorders. The method has, however, been challenging when a 3D PET scanner is employed, largely owing to the presence of gaseous radioactivity in the trachea and the inhalation system, which results in a large number of scattered and random events in the PET assessment. The present study was intended to evaluate the adequacy of using a recently available commercial 3D PET scanner in the assessment of regional cerebral radioactivity distribution during inhalation of 15O2. Systematic experiments were carried out on a brain phantom. Experiments were also performed on a healthy volunteer following a recently developed protocol for simultaneous assessment of CMRO2 and cerebral blood flow, which involves sequential administration of 15O2 and C15O2. A particular intention was to evaluate the adequacy of the scatter-correction procedures. The phantom experiment demonstrated that errors were within 3% at the practical maximum radioactivity in the face mask, with the greatest radioactivity in the lung. The volunteer experiment demonstrated that the counting rate was at its peak during the 15O2 gas inhalation period and remained within the verified range. Tomographic images showed good quality over the entire FOV, including the lower part of the cerebral structures and the carotid artery regions. The scatter-correction procedures appeared to be important, particularly in the process of compensating for scatter originating outside the FOV. Reconstructed images changed dramatically if the correction was carried out using inappropriate procedures. This study demonstrated that accurate reconstruction could be obtained when the scatter compensation was appropriately carried out.
This study also suggested the feasibility of using a state-of-the-art 3D PET scanner in the quantitative PET imaging during inhalation of 15O labeled oxygen.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, Wayne R.; Howells, M. R.; Yashchuk, V. V.
2008-09-30
An implementation of the two-dimensional statistical scattering theory of Church and Takacs for the prediction of scattering from x-ray mirrors is presented with a graphical user interface. The process of this development has clarified several problems which are of significant interest to the synchrotron community. These problems have been addressed to some extent, for example, for large astronomical telescopes, and at the National Ignition Facility for normal incidence optics, but not in the synchrotron community for grazing incidence optics. Since the theory is based on the Power Spectral Density (PSD) to provide a description of the deviations from ideal shape of the surface, accurate prediction of the scattering requires an accurate estimation of the PSD. Specifically, the spatial frequency range of measurement must be the correct one for the geometry of use of the optic, including grazing incidence and coherence effects, and the modifications to the PSD by the Optical Transfer Functions (OTF) of the measuring instruments must be removed. A solution for removal of OTF effects has been presented previously, the Binary Pseudo-Random Grating. Typically, the frequency range of a single instrument does not cover the range of interest, requiring the stitching together of PSD estimations. This combination generates its own set of difficulties in two dimensions. Fitting smooth functions to two-dimensional PSDs, particularly in the case of spatial non-isotropy of the surface, which is often the case for optics in synchrotron beamlines, can be difficult. The convenient, and physically accurate, fractal for one dimension does not readily transfer to two dimensions. Finally, a completely statistical description of scattering must be integrated with a deterministic low spatial frequency component in order to completely model the intensity near the image. An outline for approaching these problems, and our proposed experimental program, is given.
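As a minimal illustration of the PSD estimation the abstract builds on, a one-dimensional periodogram of a synthetic height profile can be computed with the FFT; the profile, trace length, and normalization convention below are assumptions for the sketch, not the authors' metrology data.

```python
import numpy as np

# 1D PSD (periodogram) of a synthetic surface height profile.
L = 0.1                                  # profile length, m
n = 1024
x = np.linspace(0, L, n, endpoint=False)
h = 5e-9 * np.sin(2 * np.pi * 200 * x)   # 5 nm ripple at 200 cycles/m
dx = L / n

H = np.fft.rfft(h)
f = np.fft.rfftfreq(n, d=dx)             # spatial frequency axis, cycles/m
psd = (np.abs(H) ** 2) * dx / n          # periodogram (up to normalization convention)

print(float(f[np.argmax(psd[1:]) + 1]))  # peak sits at the ripple frequency
```

In practice one would window the data and average segments (Welch's method), and, as the abstract notes, deconvolve the instrument OTF before stitching ranges from different instruments.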
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, its average processing time is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.
Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-06-30
For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.
Reversal of photon-scattering errors in atomic qubits.
Akerman, N; Kotler, S; Glickman, Y; Ozeri, R
2012-09-07
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.
Active and Passive 3D Vector Radiative Transfer with Preferentially-Aligned Ice Particles
NASA Astrophysics Data System (ADS)
Adams, I. S.; Munchak, S. J.; Pelissier, C.; Kuo, K. S.; Heymsfield, G. M.
2017-12-01
To support the observation of clouds and precipitation using combinations of radars and radiometers, a forward model capable of representing diverse sensing geometries for active and passive instruments is necessary for correctly interpreting and consistently combining multi-sensor measurements from ground-based, airborne, and spaceborne platforms. As such, the Atmospheric Radiative Transfer Simulator (ARTS) uses Monte Carlo integration to produce radar reflectivities and radiometric brightness temperatures for three-dimensional cloud and precipitation input fields. This radiative transfer framework is capable of efficiently sampling Gaussian antenna beams and fully accounting for multiple scattering. By relying on common ray-tracing tools, gaseous absorption models, and scattering properties, the model reproduces accurate and consistent radar and radiometer observables. While such a framework is an important component for simulating remote sensing observables, the key driver for self-consistent radiative transfer calculations of clouds and precipitation is scattering data. Research over the past decade has demonstrated that spheroidal models of frozen hydrometeors cannot accurately reproduce all necessary scattering properties at all desired frequencies. The discrete dipole approximation offers flexibility in calculating scattering for arbitrary particle geometries, but at great computational expense. When considering scattering for certain pristine ice particles, the Extended Boundary Condition Method, or T-Matrix, is much more computationally efficient; however, convergence for T-Matrix calculations fails at large size parameters and high aspect ratios. To address these deficiencies, we implemented the Invariant Imbedding T-Matrix Method (IITM). A brief overview of ARTS and IITM will be given, including details for handling preferentially-aligned hydrometeors. 
Examples highlighting the performance of the model for simulating space-based and airborne measurements will be offered, and some case studies showing the response to particle type and orientation will be presented. Simulations of polarized radar (Z, LDR, ZDR) and radiometer (Stokes I and Q) quantities will be used to demonstrate the capabilities of the model.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003)
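Step 4 of the workflow above, interpolating scatter simulated at sparse view angles to all projection angles, can be sketched per detector pixel as periodic 1D interpolation over gantry angle; the angular scatter profile below is synthetic, not MC output.

```python
import numpy as np

# Interpolate sparse-angle scatter estimates to all 360 projection angles.
angles_all = np.linspace(0, 2 * np.pi, 360, endpoint=False)
angles_mc = angles_all[::12]                     # sparse MC angles (30 views)

def scatter_profile(theta):                      # smooth synthetic angular dependence
    return 50.0 + 10.0 * np.cos(theta) + 3.0 * np.sin(2 * theta)

s_mc = scatter_profile(angles_mc)
# periodic linear interpolation: wrap the first sample around at 2*pi
s_interp = np.interp(angles_all,
                     np.append(angles_mc, 2 * np.pi),
                     np.append(s_mc, s_mc[0]))

err = float(np.max(np.abs(s_interp - scatter_profile(angles_all))))
print(err)                                       # small, since scatter varies smoothly
```

The smoothness of scatter as a function of view angle is exactly what the paper's Fourier analysis exploits to show that 31 simulated angles suffice.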
Analysis of position-dependent Compton scatter in scintimammography with mild compression
NASA Astrophysics Data System (ADS)
Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.
2003-10-01
In breast scintigraphy using 99mTc-sestamibi the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton scattered gamma-rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although there is a very strong dependence on location within the breast of the scatter-to-primary ratio, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
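The per-ROI spectral fit described above, modeling each measured spectrum as a linear combination of a scatter-only and a scatter-free spectrum, is a two-parameter least-squares problem; the spectral shapes and weights below are synthetic stand-ins for the phantom and point-source measurements.

```python
import numpy as np

# Fit measured spectrum = a*scatter_only + b*scatter_free by least squares.
E = np.linspace(100, 160, 61)                          # keV bins around 140 keV
scatter_free = np.exp(-0.5 * ((E - 140) / 4.0) ** 2)   # photopeak-like shape
scatter_only = np.exp(-(E - 100) / 25.0)               # downscattered tail shape
a_true, b_true = 0.7, 1.3
measured = a_true * scatter_only + b_true * scatter_free

M = np.column_stack([scatter_only, scatter_free])
(a, b), *_ = np.linalg.lstsq(M, measured, rcond=None)
print(a, b)            # recovered weights; their ratio tracks scatter-to-primary
```

With real (noisy) spectra the same fit would be done per ROI, yielding the position-dependent correction factors the abstract describes.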
Scatter correction using a primary modulator on a clinical angiography C-arm CT system.
Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca
2017-09-01
Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.
Prospects for altimetry and scatterometry in the 90's. [satellite oceanography
NASA Technical Reports Server (NTRS)
Townsend, W. F.
1985-01-01
Current NASA plans for altimetry and scatterometry of the oceans using spaceborne instrumentation are outlined. The data of interest covers geostrophic and wind-driven circulation, heat content, the horizontal heat flux of the ocean, and the interactions between atmosphere and ocean and ocean and climate. A proposed TOPEX satellite is to be launched in 1991, carrying a radar altimeter to measure the ocean surface topography. Employing dual-wavelength operation would furnish ionospheric correction data. Multibeam instruments could also be flown on the multiple-instrument polar orbiting platforms comprising the Earth Observation System. A microwave radar scatterometer, which functions on the basis of Bragg scattering of microwave energy off of wavelets, would operate at various view angles and furnish wind speeds accurate to 1.5 m/sec and directions accurate to 20 deg.
Schoen, K; Snow, W M; Kaiser, H; Werner, S A
2005-01-01
The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10^-3 fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.
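For context, the Fermi-approximation index of refraction that these correction terms modify is n ≈ 1 - λ²Nb/(2π); the sketch below evaluates it with illustrative thermal-neutron numbers (a silicon-like density and scattering length), not values from the experiment.

```python
import math

# Fermi-approximation neutron index of refraction, n = 1 - lam^2 * N * b / (2*pi).
lam = 1.8e-10        # neutron wavelength, m (thermal)
N = 5.0e28           # nuclei per m^3 (order of magnitude for a solid)
b = 4.15e-15         # bound coherent scattering length, m (about 4.15 fm)

delta = lam**2 * N * b / (2.0 * math.pi)   # deviation of n from unity
n = 1.0 - delta
print(delta)         # of order 1e-6 for thermal neutrons in typical solids
```

The Nowak terms discussed in the abstract enter as corrections quadratic in b, i.e. far below this leading 1e-6 scale, which is why they matter only for precision scattering-length determinations.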
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, X; Zhang, Z; Xie, Y
Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan, stationary beam blocker we proposed previously is promising due to its simplicity and practicability. Although demonstrated effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design result in primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution that accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation in each projection. In the blocker design optimization, the quality of the final image is quantified by the number of voxels with primary data loss, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region, and image spatial non-uniformity is decreased from 27% to 5% after correction.
Conclusion: The proposed optimized blocker design is practical and attractive for CBCT-guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917)
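The measurement-based idea behind the blocker, that detector pixels in the blocker shadow record scatter only, which is then interpolated across the unblocked pixels and subtracted, can be sketched in 1D; the strip layout here is uniform and all signals are synthetic, unlike the optimized non-uniform design of the paper.

```python
import numpy as np

# 1D sketch: blocked pixels measure scatter only; interpolate and subtract.
n = 512
u = np.arange(n)
primary = 800.0 * np.exp(-((u - 256) / 120.0) ** 2)  # primary transmission profile
scatter = 60.0 + 15.0 * np.cos(u / 80.0)             # smooth scatter field

shadow = np.zeros(n, dtype=bool)
shadow[::64] = True       # uniform strip positions (the paper optimizes these)
shadow[-1] = True         # cover the detector edge for interpolation
meas = primary + scatter
meas[shadow] = scatter[shadow]                       # shadowed pixels: scatter only

scatter_est = np.interp(u, u[shadow], meas[shadow])  # full-field scatter estimate
corrected = meas - scatter_est                       # valid on unblocked pixels
print(float(np.max(np.abs(corrected[~shadow] - primary[~shadow]))))
```

On a clinical system the shadow positions shift with gantry wobble, which is why the paper must first track the blocker in every projection before this interpolation step.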
NASA Technical Reports Server (NTRS)
Bhatia, Anand K.
2008-01-01
Applications of the hybrid theory to the scattering of electrons from He+ and Li++ and resonances in these systems (A. K. Bhatia, NASA/Goddard Space Flight Center). The hybrid theory of electron-hydrogen elastic scattering [1] is applied to the S-wave scattering of electrons from He+ and Li++. In this method, both short-range and long-range correlations are included in the Schrodinger equation at the same time. Phase shifts obtained in this calculation have rigorous lower bounds to the exact phase shifts, and they are compared with those obtained using the Feshbach projection operator formalism [2], the close-coupling approach [3], and the Harris-Nesbet method [4]. The agreement among all the calculations is very good. These systems have doubly-excited, or Feshbach, resonances embedded in the continuum. The resonance parameters for the lowest 1S resonances in He and Li+ are calculated and compared with the results obtained using the Feshbach projection operator formalism [5,6]. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances and the continuum in which these resonances are embedded.
Laser pulsing in linear Compton scattering
Krafft, G. A.; Johnson, E.; Deitrick, K.; ...
2016-12-16
Previous work on calculating energy spectra from Compton scattering events has either neglected the pulsed structure of the incident laser beam or has calculated these effects in an approximate way subject to criticism. In this paper, this problem is reconsidered within a linear plane wave model for the incident laser beam. By performing the proper Lorentz transformation of the Klein-Nishina scattering cross section, a spectrum calculation can be created which allows the electron beam energy spread and emittance effects on the spectrum to be accurately calculated, essentially by summing over the emission of each individual electron. Such an approach has the obvious advantage that it is easily integrated with a particle distribution generated by particle tracking, allowing precise calculations of spectra for realistic particle distributions in collision. The method is used to predict the energy spectrum of radiation passing through an aperture for the proposed Old Dominion University inverse Compton source. In addition, as discussed in the body of the paper, many of the results allow easy scaling estimates to be made of the expected spectrum. A misconception in the literature on Compton scattering of circularly polarized beams is corrected and recorded.
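The single-scattering kinematics underlying any such Compton spectrum calculation is the Compton formula E' = E / (1 + (E/m_e c²)(1 - cos θ)); a minimal sketch with illustrative values:

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_scattered_energy(e_mev, theta):
    """Scattered photon energy from the Compton formula (theta in radians)."""
    return e_mev / (1.0 + (e_mev / ME_C2) * (1.0 - math.cos(theta)))

e = 1.0                                        # incident photon energy, MeV
e_back = compton_scattered_energy(e, math.pi)  # full backscatter
print(e_back, e - e_back)                      # scattered energy; Compton-edge electron energy
```

In an inverse Compton source the same kinematics is applied in the electron rest frame after a Lorentz boost, which is where the beam energy spread and emittance enter the spectrum.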
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for the modeling, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four existing haze removal methods.
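For reference, the standard single-scattering haze model that such atmospheric scattering models extend is I = J·t + A(1 - t) with transmission t = exp(-β·d); the sketch below synthesizes and inverts it on a toy depth map (all values assumed; the paper's nonuniform model is more elaborate).

```python
import numpy as np

# Synthesize a hazy image from the scattering model, then invert it.
rng = np.random.default_rng(1)
J = rng.uniform(0.2, 0.8, (4, 4))       # haze-free scene radiance
d = np.full((4, 4), 2.0)                # depth map, km (from UAV metadata in the paper)
beta, A = 0.4, 0.9                      # scattering coefficient and global airlight

t = np.exp(-beta * d)                   # per-pixel transmission
I = J * t + A * (1.0 - t)               # hazy observation
J_rec = (I - A * (1.0 - t)) / t         # model inversion recovers the scene

print(float(np.max(np.abs(J_rec - J))))
```

The practical difficulty is that t and A are unknown for real images; the paper's contribution is estimating depth (hence t) from flight metadata instead of image priors.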
Roy-Steiner-equation analysis of pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.
2016-04-01
We review the structure of Roy-Steiner equations for pion-nucleon scattering, the solution for the partial waves of the t-channel process ππ → N ¯ N, as well as the high-accuracy extraction of the pion-nucleon S-wave scattering lengths from data on pionic hydrogen and deuterium. We then proceed to construct solutions for the lowest partial waves of the s-channel process πN → πN and demonstrate that accurate solutions can be found if the scattering lengths are imposed as constraints. Detailed error estimates of all input quantities in the solution procedure are performed and explicit parameterizations for the resulting low-energy phase shifts as well as results for subthreshold parameters and higher threshold parameters are presented. Furthermore, we discuss the extraction of the pion-nucleon σ-term via the Cheng-Dashen low-energy theorem, including the role of isospin-breaking corrections, to obtain a precision determination consistent with all constraints from analyticity, unitarity, crossing symmetry, and pionic-atom data. We perform the matching to chiral perturbation theory in the subthreshold region and detail the consequences for the chiral convergence of the threshold parameters and the nucleon mass.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
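The regularization approach mentioned above can be sketched as a ridge-regularized least-squares estimate of discretized spectrum weights from attenuation measurements; the system below is synthetic, and the regularization parameter is fixed by hand rather than chosen by cross-validation as in the paper.

```python
import numpy as np

# Ridge-regularized estimate of spectrum weights from attenuation data.
rng = np.random.default_rng(2)
n_meas, n_bins = 30, 12
A = rng.random((n_meas, n_bins))          # per-bin attenuation basis (synthetic)
s_true = np.exp(-np.linspace(0, 2, n_bins))
s_true /= s_true.sum()                    # normalized "true" spectrum
m = A @ s_true                            # measured attenuation ratios

lam = 1e-3                                # regularization parameter (hand-picked)
s = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ m)
s /= s.sum()                              # renormalize the estimate
print(float(np.max(np.abs(s - s_true))))
```

Without the lam*I term the normal equations are ill conditioned for realistic wedge data, which is exactly the problem the paper addresses with dimension reduction or regularization.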
NASA Technical Reports Server (NTRS)
Gould, R. J.
1979-01-01
Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.
NASA Astrophysics Data System (ADS)
Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.
2013-03-01
In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal as a function of scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions due to confounding factors such as increased epithelial thickness and inflammation.
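One plausible form of such a correction factor can be sketched as follows. Both the exponential attenuation model and the constant `k` are illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np

# Hypothetical correction: the epithelial layer (thickness d from OCT,
# scatterer concentration c) is assumed to attenuate the AF signal as
# exp(-k*c*d); dividing by this factor restores the estimated loss.
def scatter_corrected_af(af_measured, thickness_um, concentration, k=1e-3):
    correction_factor = np.exp(-k * concentration * thickness_um)
    return af_measured / correction_factor

# A thicker, more scattering epithelium implies a larger restored AF signal
# for the same measured value.
af = scatter_corrected_af(af_measured=100.0, thickness_um=300.0, concentration=2.0)
```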
Dose measurement in heterogeneous phantoms with an extrapolation chamber
NASA Astrophysics Data System (ADS)
Deblois, Francois
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water(TM) and bone-equivalent material was used for determining absolute dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x-rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air mass through measurements of chamber capacitance. The air gaps used were between 2 and 3 mm, and the sensitive air volume of the extrapolation chamber was remotely controlled through the motion of a motorized piston with a precision of +/-0.0025 mm. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain dose data for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques, and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials (graphite, steel, and brass) in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with a full bone PEEC by 0.7% to ~2%, depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water(TM) PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC, even for very thin graphite electrodes (<0.0025 cm). 
The collecting electrode material in comparison with the polarizing electrode material has a larger effect on the electrode correction factor; the thickness of thin electrodes, on the other hand, has a negligible effect on dose determination. The uncalibrated hybrid PEEC is an accurate and absolute device for measuring the dose directly in bone material in conjunction with appropriate correction factors determined with Monte Carlo techniques.
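The standard two-voltage technique mentioned above can be sketched for the continuous-beam case (e.g. cobalt-60). The voltages and charge readings below are invented for illustration.

```python
# Two-voltage ionic recombination correction, continuous-beam form:
# V_h/V_l are the high and low polarizing voltages and M_h/M_l the
# corresponding collected charges.
def p_ion_continuous(v_h, v_l, m_h, m_l):
    r = (v_h / v_l) ** 2
    return (r - 1.0) / (r - m_h / m_l)

k_sat = p_ion_continuous(v_h=300.0, v_l=150.0, m_h=1.000, m_l=0.996)
corrected_charge = 1.000 * k_sat   # a ~0.13% recombination correction here
```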
Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo
2004-12-01
123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT acquisition was performed just before injection of TET using the energy window for 99mTc; these data represent the scatter and cross-talk component of the subsequent TET SPECT. The correction was performed by subtracting this scatter and cross-talk component from the TET SPECT data. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p < 0.05 was considered significant. The percentage of scatter and cross-talk relative to the corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after correction than before (p < 0.001). 
Scatter and cross-talk correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin SPECT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, A; Casares-Magaz, O; Elstroem, U
Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projections were subtracted from each other, Gaussian and median filtered, and then subtracted from the raw projections and finally reconstructed to yield the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than those achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. 
Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
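The projection-domain correction chain described in the Methods (forward project the registered pCT, subtract from the raw projection, Gaussian- and median-filter the difference, subtract from the raw projection) can be sketched on synthetic 2D data. The array sizes, filter widths, and the synthetic "scatter" below are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(1)
primary = rng.random((64, 64))                 # stand-in for scatter-free signal
true_scatter = gaussian_filter(primary, 8.0)   # scatter is smooth / low-frequency
raw = primary + true_scatter                   # measured cone-beam projection
# Forward projection of the registered planning CT approximates the
# scatter-free projection, up to registration/noise error.
forward_projected = primary + 0.02 * rng.standard_normal((64, 64))

# Difference between measurement and scatter-free prior ~ scatter estimate;
# smooth it (Gaussian + median, as in the abstract) before subtracting.
scatter_est = raw - forward_projected
scatter_est = median_filter(gaussian_filter(scatter_est, 4.0), size=5)
corrected = raw - scatter_est
```

The smoothing step is what makes the estimate robust: scatter is low-frequency, so high-frequency registration and noise errors in the difference image are filtered out before subtraction.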
Infrared weak corrections to strongly interacting gauge boson scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciafaloni, Paolo; Urbano, Alfredo
2010-04-15
We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.
Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich
2016-06-01
The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction can restore the image contrast of a non-grid image to a level comparable with an antiscatter grid.
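A minimal sketch of the CIF computation, with invented pixel intensities:

```python
# Contrast improvement factor (CIF): ratio of the disc contrast in the
# corrected image (grid or software) to the contrast without correction.
def contrast(i_background, i_disc):
    return (i_background - i_disc) / i_background

c_no_grid = contrast(1000.0, 950.0)   # scatter dilutes the contrast
c_grid = contrast(600.0, 510.0)       # grid removes scatter; contrast rises
cif = c_grid / c_no_grid
```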
NASA Astrophysics Data System (ADS)
Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.
2017-02-01
The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran (AEOI). This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
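Applying the reported factors is a simple multiplication; the uncorrected kerma value below is a hypothetical placeholder.

```python
# FAC-IR-300 correction factors from the abstract: ke corrects for electrons
# lost from the collecting volume, ksc for photons scattered into it.
ke, ksc = 1.0704, 0.9982
k_uncorrected = 1.000               # Gy, placeholder reading
k_air = k_uncorrected * ke * ksc    # net correction of about +6.8%
```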
Analytical model of diffuse reflectance spectrum of skin tissue
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.
2014-01-01
We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) with the same light-scattering parameters but different absorption coefficients in the two layers. Numerical experiments on the retrieval of skin biophysical parameters from diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentrations of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present examples of quantitative analysis of the experimental data, confirming the correctness of the estimates of the biophysical parameters of skin obtained using the derived analytical expressions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonis, Antonios; Zhang, Xiaoguang
2012-01-01
This is a comment on the paper by Aftab Alam, Brian G. Wilson, and D. D. Johnson [1], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling, charge densities. We point out that the problem considered by the authors can be simply avoided by performing certain integrals in a particular order, while their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple scattering theory. We also point out a flaw in their line of reasoning leading to the expression for the potential inside the bounding sphere of a cell that makes it inapplicable to certain geometries.
NASA Astrophysics Data System (ADS)
Gonis, A.; Zhang, X.-G.
2012-09-01
This is a Comment on the paper by Alam, Wilson, and Johnson [Phys. Rev. B 84, 205106 (2011)], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling charge densities. We point out that the problem considered by the authors can be simply avoided by performing certain integrals in a particular order, whereas their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple-scattering theory. We also point out a flaw in their line of reasoning leading to the expression for the potential inside the bounding sphere of a cell that makes it inapplicable for certain geometries.
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method for determining the spatial distribution of the scatter intensity with an object-size-dependent scaling of the scatter intensity distribution, using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
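The convolution-plus-scaling idea behind the PBSE variant can be sketched as follows. The Gaussian kernel width and the linear size-to-scale mapping are placeholders for the Monte Carlo-derived quantities used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Convolution-based scatter estimate with object-size-dependent scaling:
# the scatter distribution is modeled as a heavily smoothed version of the
# projection, scaled by a factor that grows with object size.
def estimate_scatter(projection, object_size_cm):
    smooth = gaussian_filter(projection, sigma=10.0)   # scatter is low-frequency
    scale = 0.1 + 0.02 * object_size_cm                # assumed size-to-scale map
    return scale * smooth

rng = np.random.default_rng(2)
proj = rng.random((128, 128))
corrected = proj - estimate_scatter(proj, object_size_cm=15.0)
```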
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
Investigating biomass burning aerosol morphology using a laser imaging nephelometer
NASA Astrophysics Data System (ADS)
Manfred, Katherine M.; Washenfelder, Rebecca A.; Wagner, Nicholas L.; Adler, Gabriela; Erdesz, Frank; Womack, Caroline C.; Lamb, Kara D.; Schwarz, Joshua P.; Franchin, Alessandro; Selimovic, Vanessa; Yokelson, Robert J.; Murphy, Daniel M.
2018-02-01
Particle morphology is an important parameter affecting aerosol optical properties that are relevant to climate and air quality, yet it is poorly constrained due to sparse in situ measurements. Biomass burning is a large source of aerosol that generates particles with different morphologies. Quantifying the optical contributions of non-spherical aerosol populations is critical for accurate radiative transfer models, and for correctly interpreting remote sensing data. We deployed a laser imaging nephelometer at the Missoula Fire Sciences Laboratory to sample biomass burning aerosol from controlled fires during the FIREX intensive laboratory study. The laser imaging nephelometer measures the unpolarized scattering phase function of an aerosol ensemble using diode lasers at 375 and 405 nm. Scattered light from the bulk aerosol in the instrument is imaged onto a charge-coupled device (CCD) using a wide-angle field-of-view lens, which allows for measurements at 4-175° scattering angle with ~0.5° angular resolution. Along with a suite of other instruments, the laser imaging nephelometer sampled fresh smoke emissions both directly and after removal of volatile components with a thermodenuder at 250 °C. The total integrated aerosol scattering signal agreed with both a cavity ring-down photoacoustic spectrometer system and a traditional integrating nephelometer within instrumental uncertainties. We compare the measured scattering phase functions at 405 nm to theoretical models for spherical (Mie) and fractal (Rayleigh-Debye-Gans) particle morphologies based on the size distribution reported by an optical particle counter. Results from representative fires demonstrate that particle morphology can vary dramatically for different fuel types. In some cases, the measured phase function cannot be described using Mie theory. 
This study demonstrates the capabilities of the laser imaging nephelometer instrument to provide real-time, in situ information about dominant particle morphology, which is vital for understanding remote sensing data and accurately describing the aerosol population in radiative transfer calculations.
NASA Astrophysics Data System (ADS)
Manfred, K.; Adler, G. A.; Erdesz, F.; Franchin, A.; Lamb, K. D.; Schwarz, J. P.; Wagner, N.; Washenfelder, R. A.; Womack, C.; Murphy, D. M.
2017-12-01
Particle morphology has important implications for light scattering and radiative transfer, but can be difficult to measure. Biomass burning and other important aerosol sources can generate a mixture of both spherical and non-spherical particle morphologies, and it is necessary to represent these populations correctly in models. We describe a laser imaging nephelometer that measures the unpolarized scattering phase function of bulk aerosol at 375 and 405 nm using a wide-angle lens and CCD. We deployed this instrument to the Missoula Fire Sciences Laboratory to measure biomass burning aerosol morphology from controlled fires during the recent FIREX intensive laboratory study. Total integrated scattering signal agreed with that determined by a cavity ring-down photoacoustic spectrometer system and a traditional integrating nephelometer within instrument uncertainties. We compared measured scattering phase functions at 405 nm to theoretical models for spherical (Mie) and fractal (Rayleigh-Debye-Gans) particle morphologies based on the size distribution reported by an optical particle counter. We show that particle morphology can vary dramatically for different fuel types, and present results for two representative fires (pine tree vs arid shrub). We find that Mie theory is inadequate to describe the actual behavior of realistic aerosols from biomass burning in some situations. This study demonstrates the capabilities of the laser imaging nephelometer instrument to provide real-time, in situ information about dominant particle morphology that is vital for accurate radiative transfer calculations.
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deterioration of image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the maximum amount of scatter events, while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. 
The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different sizes simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments in single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single-rat and simultaneous multiple-mouse and rat imaging studies, whereas its impact is insignificant in single-mouse imaging.
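The scatter fraction used above follows directly once the simulation separates scattered from unscattered coincidences. The raw counts below are invented so that the ratio reproduces the reported 1.70% out-of-FOV value at the 250 keV threshold.

```python
# Scatter fraction in percent: SF = S / (S + T), with S scattered and
# T true (unscattered) coincidence counts.
def scatter_fraction(scattered, trues):
    return 100.0 * scattered / (scattered + trues)

sf = scatter_fraction(scattered=170.0, trues=9830.0)
```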
Processing Raman Spectra of High-Pressure Hydrogen Flames
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2006-01-01
The Raman Code automates the analysis of laser-Raman-spectroscopy data for diagnosis of combustion at high pressure. On the basis of the theory of molecular spectroscopy, the software calculates the rovibrational and pure rotational Raman spectra of H2, O2, N2, and H2O in hydrogen/air flames at given temperatures and pressures. Given a set of Raman spectral data from measurements on a given flame and results from the aforementioned calculations, the software calculates the thermodynamic temperature and number densities of the aforementioned species. The software accounts for collisional spectral-line-broadening effects at pressures up to 60 bar (6 MPa). The line-broadening effects increase with pressure and thereby complicate the analysis. The software also corrects for spectral interference ("cross-talk") among the various chemical species. In the absence of such correction, the cross-talk is a significant source of error in temperatures and number densities. This is the first known comprehensive computer code that, when used in conjunction with a spectral calibration database, can process Raman-scattering spectral data from high-pressure hydrogen/air flames to obtain temperatures accurate to within 10 K and chemical-species number densities accurate to within 2 percent.
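One common way to implement a cross-talk correction of this kind (not necessarily the Raman Code's internal method) is a linear unmixing solve: if each measured spectral channel is a known mixture of the true species signals, inverting the mixing matrix recovers the species intensities. The mixing coefficients below are illustrative, not calibrated values.

```python
import numpy as np

# Rows: measured channels; columns: contributions from the true species
# signals (e.g. H2, O2, N2). Off-diagonal entries model cross-talk.
M = np.array([[1.00, 0.05, 0.02],   # response of channel 1
              [0.03, 1.00, 0.04],   # response of channel 2
              [0.01, 0.06, 1.00]])  # response of channel 3
measured = np.array([1.07, 1.07, 1.07])      # raw channel intensities
true_signals = np.linalg.solve(M, measured)  # cross-talk-corrected intensities
```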
NASA Astrophysics Data System (ADS)
Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela
1995-03-01
We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue-based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. 
Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Liu, T; Dong, X
Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study’s purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by a factor of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by a factor of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT images were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. 
This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.
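The CNR, SNR, and ISN figures above can be computed from simple VOI statistics. A minimal sketch, assuming common definitions (CNR as insert-to-background contrast over background noise, SNR as VOI mean over VOI standard deviation, ISN as the percent spread of VOI means across a uniform region); the study's exact ROI layout and definitions are not reproduced here:

```python
import numpy as np

def cnr(voi, background):
    """Contrast-to-noise ratio of an insert VOI against a background VOI."""
    return abs(voi.mean() - background.mean()) / background.std()

def snr(voi):
    """Signal-to-noise ratio: VOI mean over its own noise (one common definition)."""
    return voi.mean() / voi.std()

def isn(uniform_vois):
    """Image spatial nonuniformity (percent) over VOIs in a nominally uniform region."""
    means = np.array([v.mean() for v in uniform_vois])
    return 100.0 * (means.max() - means.min()) / means.mean()
```

With these definitions, improvements after scatter correction show up directly as a larger CNR/SNR and a smaller ISN for the same VOIs.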
Assessment of polarization effect on aerosol retrievals from MODIS
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2010-12-01
Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by the code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on look-up tables (LUTs). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam, where the z-axis lies along the solar beam direction. In this case, the MSH solution for the anisotropic part is nearly symmetric in azimuth and is computed analytically. In the scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for the analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205-247 (2010). 3. Budak, V.P., Korkin S.V. 
On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).
Modeling of scattering from ice surfaces
NASA Astrophysics Data System (ADS)
Dahlberg, Michael Ross
Theoretical research is proposed to study electromagnetic wave scattering from ice surfaces. A mathematical formulation is developed that is more representative of the electromagnetic scattering from ice, with volume mechanisms included, and capable of handling multiple scattering effects. This research is essential to advancing the field of environmental science and engineering by enabling more accurate inversion of remote sensing data. The results of this research contribute toward a more accurate representation of the scattering from ice surfaces that is computationally more efficient and that can be applied to many remote-sensing applications.
NASA Astrophysics Data System (ADS)
Dolgos, Gergely; Martins, J. Vanderlei; Remer, Lorraine A.; Correia, Alexandre L.; Tabacniks, Manfredo; Lima, Adriana R.
2010-02-01
Characterization of aerosol scattering and absorption properties is essential to accurate radiative transfer calculations in the atmosphere. Applications of this work include remote sensing of aerosols, corrections for aerosol distortions in satellite imagery of the surface, global climate models, and atmospheric beam propagation. Here we demonstrate successful instrument development at the Laboratory for Aerosols, Clouds and Optics at UMBC that better characterizes the aerosol scattering phase matrix using an imaging polar nephelometer (LACO-I-Neph) and enables measurement of spectral aerosol absorption from 200 nm to 2500 nm. The LACO-I-Neph measures the scattering phase function from 1.5° to 178.5° scattering angle with sufficient sensitivity to match theoretical expectations of Rayleigh scattering of various gases. Previous measurements either lack a sufficiently wide range of measured scattering angles, or their sensitivity is too low and the required sample amount is therefore prohibitively high for in situ measurements. The LACO-I-Neph also returns the expected characterization of the linear polarization signal of Rayleigh scattering. Previous work demonstrated the ability to measure spectral absorption of aerosol particles using a reflectance-technique characterization of aerosol samples collected on Nuclepore filters. This first-generation methodology yielded absorption measurements from 350 nm to 2500 nm. Here we demonstrate the possibility of extending this wavelength range into the deep UV, to 200 nm. This extended UV region holds much promise for identifying and characterizing aerosol types and species. The second-generation, deep-UV procedure requires careful choice of filter substrates. Here the choice of substrates is explored and preliminary results are provided.
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering,""Rayleigh lines,""Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light-scattering by gas molecules. (Author/JN)
He, Min; Hu, Yongxiang; Huang, Jian Ping; Stamnes, Knut
2016-12-26
There are considerable demands for accurate atmospheric correction of satellite observations of the sea surface or subsurface signal. Surface and sub-surface reflection under "clear" atmospheric conditions can be used to study atmospheric correction for the simplest possible situation. Here "clear" sky means a cloud-free atmosphere with sufficiently small aerosol particles. The "clear" aerosol concept is defined according to the spectral dependence of the scattering cross section on particle size. A 5-year combined CALIPSO and AMSR-E data set was used to derive the aerosol optical depth (AOD) from the lidar signal reflected from the sea surface. Compared with the traditional lidar-retrieved AOD, which relies on lidar backscattering measurements and an assumed lidar ratio, the AOD retrieved through the surface reflectance method depends on both scattering and absorption because it is based on two-way attenuation of the lidar signal transmitted to and then reflected from the surface. The results show that the clear sky AOD derived from the surface signal agrees with the clear sky AOD available in the CALIPSO level 2 database in the westerly wind belt located in the southern hemisphere, but yields significantly higher aerosol loadings in the tropics and in the northern hemisphere.
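The two-way attenuation retrieval described above amounts to comparing the measured surface return with the return predicted for an aerosol-free path. A minimal sketch, with the predicted clear-path signal treated as a hypothetical input (in practice it would come from wind-speed-based sea-surface reflectance models and the AMSR-E data, which are not modeled here):

```python
import math

def aod_from_surface_return(observed, predicted, mu=1.0):
    """AOD from two-way attenuation of a lidar surface return.

    observed:  measured surface-return signal
    predicted: signal expected for an aerosol-free path (hypothetical input,
               e.g. from a wind-speed model of sea-surface reflectance)
    mu:        cosine of the viewing zenith angle (1.0 for nadir viewing)

    The lidar pulse crosses the aerosol layer twice, so the measured return
    is attenuated by exp(-2*tau/mu); inverting that gives tau.
    """
    return -0.5 * mu * math.log(observed / predicted)
```

Because this estimate uses the round-trip transmission, it is sensitive to both scattering and absorption, unlike backscatter-based retrievals that must assume a lidar ratio.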
Guide-star-based computational adaptive optics for broadband interferometric tomography
Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.
2012-01-01
We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179
Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.
Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T
2017-01-01
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is the severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. 
Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, the utilization of two near-infrared (NIR) bands to estimate the aerosol optical properties has been adopted for the estimation of the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weight value directly to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance, and the inter-band relationship of multiple-scattering aerosol reflectances is likewise non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for selected aerosol models. Then it spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. 
To assess the performance of the algorithm regarding the errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to the GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
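The correction described above replaces the white-noise assumption with a residual covariance built from the estimated autocorrelation, truncated at a small number of lags. A batch-form sketch of that idea (the paper's recursive formulation is not reproduced; the `max_lag` truncation corresponds to the lag reduction mentioned in the last sentence):

```python
import numpy as np

def corrected_param_cov(X, residuals, max_lag=20):
    """Parameter covariance for least squares with colored residuals.

    Instead of assuming white residuals (cov = sigma^2 (X'X)^-1), build a
    banded Toeplitz residual covariance R from the sample autocorrelation
    truncated at max_lag, and propagate it through the sandwich formula
    (X'X)^-1 X' R X (X'X)^-1.
    """
    n = len(residuals)
    r = residuals - residuals.mean()
    # biased sample autocovariance, truncated at max_lag
    acov = np.array([np.dot(r[:n - k], r[k:]) / n for k in range(max_lag + 1)])
    R = np.zeros((n, n))
    for k in range(max_lag + 1):
        R += acov[k] * (np.eye(n, k=k) + (np.eye(n, k=-k) if k else 0))
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ X.T @ R @ X @ XtX_inv
```

With `max_lag=0` this reduces exactly to the conventional white-residual covariance; larger lags inflate (or deflate) the predicted uncertainty to match the residual coloring.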
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer’s Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject’s CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
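The Chang uniform approximation mentioned in (b) assigns each voxel a multiplicative factor equal to the inverse of its attenuation averaged over projection angles. A minimal first-order sketch, assuming a uniform attenuation coefficient (the 0.15/cm figure in the comment is a typical broad-beam value for Tc-99m in soft tissue, not a value from this study):

```python
import numpy as np

def chang_correction_factor(mu, path_lengths):
    """First-order Chang attenuation correction factor for one voxel.

    mu:           uniform linear attenuation coefficient (1/cm), e.g. ~0.15
                  for 140 keV Tc-99m photons in soft tissue (broad-beam)
    path_lengths: path length (cm) from the voxel to the body outline for
                  each projection angle used in the acquisition
    """
    transmissions = np.exp(-mu * np.asarray(path_lengths, dtype=float))
    return 1.0 / transmissions.mean()
```

Multiplying each reconstructed voxel by its factor compensates the average attenuation; a CT-based correction instead uses the measured, spatially varying attenuation map.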
NASA Technical Reports Server (NTRS)
Grund, Christian John; Eloranta, Edwin W.
1990-01-01
Cirrus clouds reflect incoming solar radiation and trap outgoing terrestrial radiation; therefore, accurate estimation of the global energy balance depends upon knowledge of the optical and physical properties of these clouds. Scattering and absorption by cirrus clouds affect measurements made by many satellite-borne and ground-based remote sensors. Scattering of ambient light by the cloud, and thermal emissions from the cloud, can increase measurement background noise. Multiple scattering processes can adversely affect the divergence of optical beams propagating through these clouds. Determination of the optical thickness and the vertical and horizontal extent of cirrus clouds is necessary for the evaluation of all of these effects. Lidar can be an effective tool for investigating these properties. During the FIRE cirrus IFO in Oct. to Nov. 1986, the High Spectral Resolution Lidar (HSRL) was operated from a rooftop site on the campus of the University of Wisconsin at Madison, Wisconsin. Approximately 124 hours of fall season data were acquired under a variety of cloud optical thickness conditions. Since the IFO, the HSRL data set has been expanded by more than 63.5 hours of additional data acquired during all seasons. Measurements are presented for the range in optical thickness and backscattering phase function of the cirrus clouds, as well as contour maps of extinction-corrected backscatter cross sections indicating cloud morphology. Color-enhanced range-time indicator (RTI) images displaying a variety of cirrus clouds with approximately 30 s time resolution are presented. The importance of extinction correction in the interpretation of cloud height and structure from lidar observations of optically thick cirrus is demonstrated.
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU when the proposed method is used.
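The Fourier-domain separation that primary modulation relies on can be illustrated in one dimension: a pixel-by-pixel alternating modulator shifts a copy of the (smooth) primary up to the Nyquist frequency, while scatter remains at low frequencies. This is an idealized sketch of the principle, not the authors' actual processing chain:

```python
import numpy as np

def separate_primary_scatter(q, t):
    """Recover primary and scatter from a 1D primary-modulated projection.

    q: measured projection, with an ideal modulator alternating transmission
       1 and t pixel by pixel (a stand-in for the real attenuating pattern)
    t: modulator transmission (0 < t < 1)

    Scatter is assumed smooth, so it survives only at low frequencies; the
    modulated primary also appears as a copy at the Nyquist carrier.
    """
    carrier = np.where(np.arange(len(q)) % 2 == 0, 1.0, -1.0)
    lowpass = lambda x: np.convolve(x, [0.5, 0.5], mode="valid")
    baseband = lowpass(q)            # contains p*(1+t)/2 + s
    sideband = lowpass(q * carrier)  # contains +/- p*(1-t)/2, no scatter
    primary = 2.0 * np.abs(sideband) / (1.0 - t)
    scatter = baseband - primary * (1.0 + t) / 2.0
    return primary, scatter
```

Demodulating the high-frequency copy yields the primary, and subtracting its baseband contribution leaves the scatter estimate; the 1D modulator in the paper applies this separation along the direction parallel to the ramp filter.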
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades the CBCT image quality by increasing the CT number inaccuracy and decreasing the image contrast, in addition to the shading artifacts caused by kV scatter. The artifacts were substantially reduced in the moving-blocker corrected CBCT images in both Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. 
This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).
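The core of the blocker-based estimation, per detector row, is interpolating the scatter-only signal measured under the lead strips into the unblocked regions and subtracting it. A simplified 1D sketch (the actual method also handles the blocker motion across projections and the iterative reconstruction, which are omitted here):

```python
import numpy as np

def blocker_scatter_correct(projection, blocked_mask):
    """Estimate scatter from blocked detector regions and subtract it.

    projection:   one 1D detector row (a 2D panel would be handled row by row)
    blocked_mask: True where lead strips shadow the detector, so the signal
                  there is scatter only (kV plus MV in the concurrent case)
    """
    x = np.arange(len(projection))
    scatter = np.interp(x, x[blocked_mask], projection[blocked_mask])
    corrected = projection - scatter
    corrected[blocked_mask] = 0.0   # blocked pixels carry no primary signal
    return corrected, scatter
```

Because the scatter distribution is smooth compared with the strip spacing, linear interpolation between the shadowed strips recovers it well across the open regions.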
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (θ0 ≥ 60°) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud top pressure is known to within ±100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. 
We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX), conducted near the Azores in June 1992, and compare these results to corresponding retrievals obtained using 0.88 μm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 μm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 μm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 μm.
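The proposed iterative correction can be read as a fixed-point loop: retrieve the optical thickness, evaluate the Rayleigh contribution for that estimate, subtract it from the measured reflectance, and re-retrieve. A schematic sketch with hypothetical callables standing in for the single-scattering Rayleigh model and the cloud look-up-table inversion (neither is specified in the abstract):

```python
def retrieve_cloud_tau(r_meas, rayleigh_of_tau, tau_from_reflectance,
                       n_iter=10):
    """Iteratively remove the Rayleigh contribution before LUT inversion.

    rayleigh_of_tau:      model of the Rayleigh-scattering contribution to
                          the measured reflectance, given the current cloud
                          optical thickness estimate (hypothetical callable)
    tau_from_reflectance: the usual cloud LUT inversion (hypothetical callable)
    """
    tau = tau_from_reflectance(r_meas)   # first guess: no correction
    for _ in range(n_iter):
        tau = tau_from_reflectance(r_meas - rayleigh_of_tau(tau))
    return tau
```

Because the Rayleigh contribution is a small, slowly varying perturbation of the cloud signal, the loop contracts quickly and a handful of iterations suffices.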
NASA Astrophysics Data System (ADS)
Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.
2018-02-01
We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.
Rayleigh, Compton and K-shell radiative resonant Raman scattering in 83Bi for 88.034 keV γ-rays
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sharma, Veena; Mehta, D.; Singh, Nirmal
2007-11-01
The Rayleigh, Compton and K-shell radiative resonant Raman scattering cross-sections for 88.034 keV γ-rays have been measured in 83Bi (K-shell binding energy = 90.526 keV). The measurements were performed at a 130° scattering angle using a reflection-mode geometrical arrangement involving the 109Cd radioisotope as photon source and an LEGe detector. Computer simulations were used to determine distributions of the incident and emission angles, which were further used in the evaluation of the absorption corrections for the incident and emitted photons in the target. The measured cross-sections for Rayleigh scattering are compared with the modified form-factors (MFs) corrected for the anomalous-scattering factors (ASFs) and with S-matrix calculations; those for Compton scattering are compared with the Klein-Nishina cross-sections corrected for the non-relativistic Hartree-Fock incoherent scattering function S(x, Z). The ratios of the measured KL2, KL3, KM and KN2,3 radiative resonant Raman scattering cross-sections are found to be in general agreement with those of the corresponding measured fluorescence transition probabilities.
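The Klein-Nishina cross-section referenced above has a closed form; multiplying it by the incoherent scattering function S(x, Z) gives the bound-electron comparison values used in such measurements. A straightforward implementation of the free-electron formula:

```python
import math

R_E = 2.8179403262e-15   # classical electron radius (m)
MEC2 = 510.99895         # electron rest energy (keV)

def klein_nishina(energy_kev, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega (m^2/sr)
    for Compton scattering of a photon off a free electron at angle theta.
    Multiplying by S(x, Z) approximates the bound-electron cross-section."""
    k = energy_kev / MEC2
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio
                                      - math.sin(theta)**2)
```

At low photon energies the expression reduces to the classical Thomson result, and at 88 keV the relativistic suppression at back angles is already appreciable.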
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prior, P; Timmins, R; Wells, R G
Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In111 (emission energies of 171 keV and 245 keV) and Tc99m (140 keV), quantification of Tc99m is degraded by cross talk from In111 photons that scatter and are detected at an energy corresponding to Tc99m. TEW uses counts recorded in two narrow windows surrounding the Tc99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate by un-scattered counts detected in the scatter windows. The contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated with a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc99m/In111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
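The plain and iterative TEW estimates described above can be sketched directly. The trapezoidal scatter estimate uses the two narrow side windows; the iterative variant repeatedly removes an assumed primary spill-over fraction from the side-window counts before re-estimating (the spill-over fractions here are illustrative parameters to be calibrated per system, not values from this study):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window (trapezoidal) scatter estimate for the main window."""
    return 0.5 * (c_lower / w_lower + c_upper / w_upper) * w_main

def iterative_tew_primary(c_main, c_lower, c_upper, w_lower, w_upper, w_main,
                          f_lower, f_upper, n_iter=10):
    """Iterative TEW: the side windows also catch fractions (f_lower, f_upper)
    of the un-scattered counts, biasing the plain TEW estimate. Iteratively
    subtract that spill-over, expressed as a fraction of the current
    scatter-corrected primary estimate, and re-estimate the scatter."""
    primary = c_main - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main)
    for _ in range(n_iter):
        s = tew_scatter(c_lower - f_lower * primary,
                        c_upper - f_upper * primary,
                        w_lower, w_upper, w_main)
        primary = c_main - s
    return primary
```

When the spill-over fractions are small, the iteration is a strongly contracting fixed-point loop, so a few iterations recover the unbiased primary counts.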
Library based x-ray scatter correction for dedicated cone beam breast CT
Shi, Linxi; Karellas, Andrew; Zhu, Lei
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the Geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method's performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as evaluation metrics. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library built on a simple breast model with only one input parameter, i.e., the breast diameter, is sufficient to guarantee improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. 
On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
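The core of the library-based correction, nearest-diameter lookup followed by translation and subtraction, reduces to a few lines. This is a minimal sketch under simplified assumptions: the library is a plain dictionary keyed by model diameter, and the spatial translation is a hypothetical integer column shift rather than the paper's registration step.

```python
import numpy as np

def correct_projection(measured, scatter_library, diameters,
                       breast_diameter, shift_px=0):
    """Library-based scatter correction (sketch): pick the precomputed
    scatter distribution whose model diameter is closest to the
    estimated breast diameter, translate it to align with the measured
    projection, and subtract it."""
    diameters = np.asarray(diameters, dtype=float)
    i = int(np.argmin(np.abs(diameters - breast_diameter)))
    scatter = np.roll(scatter_library[diameters[i]], shift_px, axis=1)
    return measured - scatter
```

For a breast estimated at 13.5 cm, the 14 cm library entry would be selected and subtracted from the first-pass projection.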
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the radiation dose delivered to tumors and organs at risk must be determined accurately. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to that object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. 
However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188 Re, although practical, yields only approximate estimates of the true scatter.
Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering
NASA Astrophysics Data System (ADS)
Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten
2015-04-01
The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q^2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces needed to perform these calculations. Finally, we briefly discuss the status of measuring α_s(M_Z), the charm quark mass m_c, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.
Lee, Ho; Fahimian, Benjamin P; Xing, Lei
2017-03-21
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
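The blocked-projection scatter estimation described above can be sketched as follows, assuming the shaded detector columns behind the lead strips record scatter only. The paper uses 1D B-spline interpolation/extrapolation; plain linear interpolation is substituted here for brevity.

```python
import numpy as np

def scatter_from_blocked(projection, blocked_cols):
    """BMB-style scatter estimation (sketch): in a blocked projection,
    detector columns behind the lead strips see (mostly) scatter only.
    Interpolate those shaded readings across each detector row to get a
    full scatter map, which is then subtracted from the unblocked data."""
    rows, n_cols = projection.shape
    x = np.arange(n_cols)
    blocked_cols = np.asarray(blocked_cols)
    scatter = np.empty_like(projection, dtype=float)
    for r in range(rows):
        scatter[r] = np.interp(x, blocked_cols,
                               projection[r, blocked_cols])
    return scatter
```

Because scatter is spatially smooth, a coarse set of shaded columns suffices: for a linearly varying scatter field the interpolated map is exact between the strips.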
Scatter correction method for x-ray CT using primary modulation: Phantom studies
Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. 
The scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm minor axis, 38 cm major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method was demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation with a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902
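A toy 1D version of primary modulation illustrates why a known modulation pattern lets primary and scatter be separated. Here even pixels are unattenuated and odd pixels are attenuated by a hypothetical transmission t, with primary and scatter assumed constant over each pixel pair; this is far simpler than the authors' frequency-domain analysis, but it captures the principle that modulation tags the primary while the low-frequency scatter is unaffected.

```python
import numpy as np

def demodulate(I, t):
    """Toy 1D primary-modulation recovery. Even pixels unattenuated,
    odd pixels attenuated by transmission t; assuming primary P and
    scatter S vary slowly over a pixel pair:
        I_even = P + S,   I_odd = t*P + S
    so  P = (I_even - I_odd) / (1 - t)  and  S = I_even - P."""
    I = np.asarray(I, dtype=float)
    I_even, I_odd = I[0::2], I[1::2]
    P = (I_even - I_odd) / (1.0 - t)
    S = I_even - P
    return P, S
```

With P = 100, S = 40 and t = 0.5, the detector reads alternate 140/90 and the demodulation returns the true components exactly.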
Characterization and correction of cupping effect artefacts in cone beam CT
Hunter, AK; McDavid, WD
2012-01-01
Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754
NASA Astrophysics Data System (ADS)
Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing
2012-06-01
We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. To achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first order scattering treatment using the exact scattering matrix of the medium. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by light reflected by and transmitted through the rough air-sea interface.
Cloud and Aerosol Measurements from the GLAS Polar Orbiting Lidar: First Year Results
NASA Technical Reports Server (NTRS)
Spinhirne, J. D.; Palm, S. P.; Hlavka, D. L.; Hart, W. D.; Mahesh, A.; Welton, E. J.
2004-01-01
The Geoscience Laser Altimeter System (GLAS), launched in 2003, is the first polar orbiting satellite lidar. The instrument was designed for high performance observations of the distribution and optical scattering cross sections of clouds and aerosol. GLAS is approaching six months of on-orbit data operation. These data from thousands of orbits illustrate the ability of space lidar to measure the height distribution of global cloud and aerosol to an unprecedented degree of accuracy. There were many intended science applications of the GLAS data, and significant results have already been realized. One application is the accurate height distribution and coverage of global cloud cover, with one goal being to define the limitations and inaccuracies of passive retrievals. Comparison to MODIS cloud retrievals shows notable discrepancies. Initial comparisons to NOAA-14 and NOAA-15 satellite cloud retrievals show basic similarity in overall cloud coverage, but important differences in height distribution. Because of the especially poor performance of passive cloud retrievals in polar regions, and partly because of high orbit track densities, the GLAS measurements are by far the most accurate measurement of Arctic and Antarctic cloud cover from space to date. Global aerosol height profiling is a fundamentally new measurement from space with multiple applications. A most important aerosol application is providing input to global aerosol generation and transport models. Another is improved measurement of aerosol optical depth. Oceanic surface energy flux derivation from PBL and LCL height measurements is another application of GLAS data that is being pursued. A special area of work for GLAS data is the correction and application of multiple scattering effects. Stretching of surface return pulses in excess of 40 m from cloud propagation effects and other interesting multiple scattering phenomena has been observed. 
As an EOS project instrument, GLAS data products are openly available to the science community. First year results from GLAS are summarized.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2015-08-01
Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devan, Joshua D.
2015-01-01
Neutrinos are nearly massless, neutral particles in the Standard Model that interact only via the weak interaction. Experimental confirmation of neutrino oscillations, in which a neutrino created as a particular type (electron, muon or tau) can be observed as a different type after propagating some distance, earned the 2015 Nobel Prize in Physics. Neutrino oscillation experiments rely on accurate measurements of neutrino interactions with matter, such as that presented here. Neutrinos also provide a unique probe of the nucleus, complementary to electron scattering experiments. This thesis presents a measurement of the charged-current inclusive cross section for muon neutrinos and antineutrinos in the energy range 2 to 50 GeV with the MINERvA detector. MINERvA is a neutrino scattering experiment in the NuMI neutrino beam at Fermilab, near Chicago. A cross section measures the probability of an interaction occurring, measured here as a function of neutrino energy. To extract a cross section from data, the observed rate of interactions is corrected for detector efficiency and divided by the number of scattering nucleons in the target and the flux of neutrinos in the beam. The neutrino flux is determined with the low-…
Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.
Tam, W G; Zardecki, A
1982-07-01
Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
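The qualitative effect the paper quantifies, that an open detector receives more power than the Beer-Lambert law predicts because it collects forward-scattered light, can be illustrated with a crude effective-extinction model. This is an illustrative approximation only; the paper derives a rigorous small-angle radiative-transfer solution, not this formula, and `fov_fraction` is a hypothetical parameter for the share of scattered light reaching the detector.

```python
import math

def beer_lambert(c_ext, path_m):
    """Beer-Lambert transmission for extinction coefficient c_ext (1/m)."""
    return math.exp(-c_ext * path_m)

def open_detector(c_abs, c_scat, path_m, fov_fraction):
    """Crude open-detector transmission (illustrative): a wide-FOV
    detector recovers a fraction `fov_fraction` of the scattered light,
    so only the remainder acts as effective extinction alongside
    absorption."""
    c_eff = c_abs + (1.0 - fov_fraction) * c_scat
    return math.exp(-c_eff * path_m)
```

With a nonzero field-of-view fraction the predicted received power always exceeds the Beer-Lambert value, which is the direction of the correction the paper computes for fog, cloud, and rain.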
NASA Astrophysics Data System (ADS)
Sahu, Sanjay Kumar; Shanmugam, Palanisamy
2018-02-01
Scattering by water molecules and particulate matter determines the path and distance of photon propagation in an underwater medium. Consequently, the photon scattering angle (given by the scattering phase function) must be considered, in addition to the extinction coefficient of the aquatic medium governed by the absorption and scattering coefficients, when characterizing the channel of an underwater wireless optical communication (UWOC) system. This study analyzes the received signal power and impulse response of the UWOC channel based on Monte Carlo simulations for different water types, link distances, link geometries and transceiver parameters. A newly developed scattering phase function (referred to as the SS phase function), which, like the Petzold phase function, represents real water types accurately, is used to quantify the channel characteristics along with the effects of the absorption and scattering coefficients. A comparison between results simulated using various phase function models and the experimental measurements of Petzold revealed that the SS phase function model predicts values closely matching the actual Petzold phase function, which further establishes the importance of using a correct scattering phase function model when estimating the channel capacity of a UWOC system in terms of received power and channel impulse response. The results further demonstrate the advantage of accounting for the nonzero probability of receiving scattered photons when estimating channel capacity, rather than considering only ballistic photons as in Beer's law, which severely underestimates the received power and limits the communication range, especially in scattering water columns. The received power computed with the Monte Carlo method for different receiver aperture sizes and fields of view in different water types is further analyzed and discussed. 
These results are essential for evaluating the underwater link budget and constructing different system and design parameters for an UWOC system.
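Monte Carlo channel simulations of this kind must draw scattering angles from the phase function at every interaction. Since the SS phase function's closed form is not given here, the sketch below samples the widely used Henyey-Greenstein phase function by inverse CDF as a stand-in; its asymmetry parameter g equals the mean cosine of the scattering angle.

```python
import numpy as np

def sample_hg(g, n, rng):
    """Inverse-CDF sampling of the Henyey-Greenstein phase function
    (a common stand-in for measured oceanic phase functions such as
    Petzold's). Returns n samples of cos(theta)."""
    u = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

A quick sanity check: for g = 0.9 (strongly forward-peaked, as in turbid water) the sample mean of cos(theta) converges to g.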
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point-model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
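The idea of tagging inter-scintillator scatter by arrival-time differences can be illustrated with a toy coincidence count. The window value and pulse times below are invented for illustration and do not reflect the authors' calibration or their full spectral analysis.

```python
import numpy as np

def crosstalk_fraction(t_a, t_b, window_ns=15.0):
    """Toy time-of-flight crosstalk estimate: the fraction of pulses in
    scintillator A whose nearest pulse in scintillator B falls inside a
    coincidence window consistent with a neutron scattering from one
    cell into the other."""
    t_a = np.asarray(t_a, dtype=float)
    t_b = np.asarray(t_b, dtype=float)
    dt = np.abs(t_a[:, None] - t_b[None, :])
    n_coinc = int(np.count_nonzero(dt.min(axis=1) <= window_ns))
    return n_coinc / len(t_a)
```

In a real measurement this fraction would be extracted per energy threshold and fed into the corrected point-model mass reconstruction.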
Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4
2013-11-15
Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within maximum frequencies of 0.1 cm^-1 and 7/(2π) rad^-1 for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). 
These data indicate spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10^6 photons resulted in an error reduction of greater than 85% for estimating the scatter in single and multiple projections. Analysis showed that the use of a compensator reduced the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low frequency domain. The frequency content is modulated both by object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
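The low-pass Butterworth denoising of a photon-starved MC scatter estimate might look like the following sketch, with the cutoff standing in for an SFW-derived value and the filter applied in the 2D Fourier domain.

```python
import numpy as np

def butterworth_lowpass(img, cutoff, order=4):
    """Low-pass Butterworth filter in the Fourier domain: keeps the
    low-frequency scatter signal and suppresses MC photon noise.
    `cutoff` is in cycles/pixel (tuned via the SFW metric in the paper)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.hypot(fy, fx)
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```

On a synthetic low-frequency scatter map corrupted with noise, the filtered estimate is much closer to the truth than the raw noisy map, mirroring the error reductions reported above.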
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Dong, Jing; Tearney, Guillermo J.; Pitris, Costas
2018-02-01
Catheter-based Optical Coherence Tomography (OCT) devices allow real-time and comprehensive imaging of the human esophagus. Hence, they provide the potential to overcome some of the limitations of endoscopy and biopsy, allowing earlier diagnosis and better prognosis for esophageal adenocarcinoma patients. However, the large number of images produced during every scan makes manual evaluation of the data exceedingly difficult. In this study, we propose a fully automated tissue characterization algorithm, capable of discriminating normal tissue from Barrett's Esophagus (BE) and dysplasia through entire three-dimensional (3D) data sets acquired in vivo. The method is based on both the estimation of the scatterer size of the esophageal epithelial cells, using the bandwidth of the correlation of the derivative (COD) method, and intensity-based characteristics. The COD method can effectively estimate the scatterer size of the esophageal epithelium cells in good agreement with the literature. As expected, both the mean scatterer size and its standard deviation increase with increasing severity of disease (i.e. from normal to BE to dysplasia). The differences in the distribution of scatterer size for each tissue type are statistically significant (p < 0.0001). However, the scatterer size by itself cannot be used to accurately classify the various tissues. With the addition of intensity-based statistics the correct classification rates for all three tissue types range from 83 to 100% depending on the lesion size.
Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.
Yang, Ching-Ching
2016-01-01
Scatter is a major artifact-causing factor in dental cone-beam CT (CBCT) and has a strong influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with two different sizes of axial field-of-view (FOV). The function for curve fitting was optimized using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned at 120 kVp and 5 mA with a 9-second scanning time, covering axial FOVs of 4 cm and 13 cm. The detectability in the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT improved in terms of target detectability, quantified as the CNR for the rod inserts in the cylindrical phantom. The calculations performed in this work may provide a route to a high level of diagnostic image quality in CBCT imaging of oral and maxillofacial structures while keeping patient dose as low as reasonably achievable, ultimately making CBCT scanning a reliable and safe tool in clinical practice.
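The CNR used above as the detectability metric is commonly computed as the mean signal difference over the background noise. A minimal sketch follows; the paper's exact ROI definitions may differ.

```python
import numpy as np

def cnr(roi_target, roi_background):
    """Contrast-to-noise ratio for insert detectability: absolute mean
    difference between target and background ROIs, divided by the
    standard deviation of the background ROI."""
    roi_target = np.asarray(roi_target, dtype=float)
    roi_background = np.asarray(roi_background, dtype=float)
    contrast = abs(roi_target.mean() - roi_background.mean())
    return contrast / roi_background.std()
```

For a rod insert 20 HU above a background with 5 HU noise, this yields a CNR near 4.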
Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.
2017-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
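The Kramers-Kronig phase retrieval at the heart of this processing chain rests on a Hilbert transform of half the log of the CARS-to-NRB intensity ratio. The sketch below implements the FFT-domain Hilbert transform on a periodic frequency axis; sign and normalization conventions vary across the literature, so treat it as illustrative rather than the authors' exact pipeline.

```python
import numpy as np

def hilbert_phase(ratio):
    """KK-style phase estimate (sketch): the retrieved phase is the
    Hilbert transform of 0.5*ln(I_CARS/I_NRB), computed here with the
    FFT-domain kernel -i*sgn(f) on a periodic axis."""
    spec = np.fft.fft(0.5 * np.log(ratio))
    f = np.fft.fftfreq(len(ratio))
    return np.real(np.fft.ifft(-1j * np.sign(f) * spec))
```

A quick check of the kernel: the Hilbert transform maps cos to sin, so a log-ratio of 2*cos(x) must yield a phase of sin(x) exactly on a periodic grid.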
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2013-01-01
Biological fluids fulfill key functionalities such as hydrating, protecting, and nourishing cells and tissues in various organ systems. They are capable of these versatile tasks owing to their distinct structural and viscoelastic properties. Characterizing the viscoelastic properties of bio-fluids is of pivotal importance for monitoring the development of certain pathologies as well as engineering synthetic replacements. Laser Speckle Rheology (LSR) is a novel optical technology that enables mechanical evaluation of tissue. In LSR, a coherent laser beam illuminates the tissue and temporal speckle intensity fluctuations are analyzed to evaluate mechanical properties. The rate of temporal speckle fluctuations is, however, influenced by both optical and mechanical properties of tissue. Therefore, in this paper, we develop and validate an approach to estimate and compensate for the contributions of light scattering to speckle dynamics and demonstrate the capability of LSR for the accurate extraction of viscoelastic moduli in phantom samples and biological fluids of varying optical and mechanical properties. PMID:23705028
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ0, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error introduced by the narrow-beam approximation and uses it to obtain a more accurate estimate of σ0. An exponential model was assumed to account for the variation of σ0 with incidence angle, and the model parameters are estimated from the measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ0 obtained with wide-beam antennas, and to be insensitive to the assumed σ0 model.
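The correction idea above can be sketched numerically: compute the apparent σ0 that a wide beam would measure by averaging the angular model over the (two-way) antenna pattern, and divide the measurement by the resulting bias. This is a sketch under stated assumptions, not the paper's exact formulation: a Gaussian one-way power pattern and the exponential model σ0(θ) = a·exp(-b·θ), with a and b assumed fitted beforehand:

```python
import numpy as np

def corrected_sigma0(measured, theta0_deg, beamwidth_deg, a, b):
    """Remove the narrow-beam bias from a wide-beam sigma0 measurement.
    theta0_deg: boresight incidence angle; beamwidth_deg: 3 dB beamwidth;
    a, b: parameters of the assumed model sigma0 = a * exp(-b * theta_rad)."""
    theta = np.linspace(theta0_deg - 3 * beamwidth_deg,
                        theta0_deg + 3 * beamwidth_deg, 2001)
    gain = np.exp(-4 * np.log(2) * ((theta - theta0_deg) / beamwidth_deg) ** 2)
    sigma_model = a * np.exp(-b * np.radians(theta))
    # Apparent sigma0 the radar sees: two-way-pattern-weighted average.
    weight = gain ** 2
    apparent = (weight * sigma_model).sum() / weight.sum()
    bias = apparent / (a * np.exp(-b * np.radians(theta0_deg)))
    return measured / bias
```

With a flat angular model (b = 0) the bias is unity and the measurement is returned unchanged; the faster σ0 falls off with angle, the larger the correction.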
Conjugate adaptive optics with remote focusing in multiphoton microscopy
NASA Astrophysics Data System (ADS)
Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel
2018-02-01
The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.
Interplay of threshold resummation and hadron mass corrections in deep inelastic processes
Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix
2015-02-01
We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering lN → l'X and semi-inclusive annihilation e+e- → hX processes, and provide a prescription for consistently combining these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken x_B. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections is shown to be large compared with the precision of world data. We therefore conclude that these theoretical corrections are relevant for global QCD fits, in order to extract precise parton distributions at large Bjorken x_B and fragmentation functions over the whole kinematic range.
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in the clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way, and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
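The Wiener-estimation core of the method above can be sketched as a linear estimator learned from training data: W = R_pc R_cc^(-1), built from the parameter-color cross-correlation and the color autocorrelation. This is a minimal sketch of plain WE only; the paper's sequential weighting of channels is omitted, and the linear forward model below is a hypothetical stand-in for real calibration data:

```python
import numpy as np

def wiener_matrix(train_params, train_colors):
    """Wiener estimator minimizing E||p - W c||^2 over a training set:
    W = R_pc @ R_cc^{-1}. Rows are samples; columns are parameters
    (train_params) or color channels (train_colors)."""
    r_pc = train_params.T @ train_colors   # parameter-color cross-correlation
    r_cc = train_colors.T @ train_colors   # color autocorrelation
    return r_pc @ np.linalg.pinv(r_cc)

# Synthetic check with a hypothetical noise-free linear forward model.
rng = np.random.default_rng(7)
true_map = np.array([[2.0, -1.0, 0.5]])   # one parameter, three channels
colors = rng.normal(size=(200, 3))
params = colors @ true_map.T
w = wiener_matrix(params, colors)          # recovers true_map exactly here
```

In the noise-free case the estimator recovers the generating linear map; with noisy measurements W instead trades bias against noise, which is what motivates weighting the more informative channels.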
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, Katherine E.; Van Dokkum, Pieter G.; Brammer, Gabriel
2010-08-20
With a complete, mass-selected sample of quiescent galaxies from the NEWFIRM Medium-Band Survey, we study the stellar populations of the oldest and most massive galaxies (>10^11 M_sun) to high redshift. The sample includes 570 quiescent galaxies selected based on their extinction-corrected U - V colors out to z = 2.2, with accurate photometric redshifts, σ_z/(1 + z) ≈ 2%, and rest-frame colors, σ_(U-V) ≈ 0.06 mag. We measure an increase in the intrinsic scatter of the rest-frame U - V colors of quiescent galaxies with redshift. This scatter in color arises from the spread in ages of the quiescent galaxies, where we see both relatively quiescent red, old galaxies and quiescent blue, younger galaxies toward higher redshift. The trends between color and age are consistent with the observed composite rest-frame spectral energy distributions (SEDs) of these galaxies. The composite SEDs of the reddest and bluest quiescent galaxies are fundamentally different, with remarkably well-defined 4000 Å and Balmer breaks, respectively. Some of the quiescent galaxies may be up to four times older than the average age and up to the age of the universe, if the assumption of solar metallicity is correct. By matching the scatter predicted by models that include growth of the red sequence by the transformation of blue galaxies to the observed intrinsic scatter, the data indicate that most early-type galaxies formed their stars at high redshift with a burst of star formation prior to migrating to the red sequence. The observed U - V color evolution with redshift is weaker than passive evolution predicts; possible mechanisms to slow the color evolution include increasing amounts of dust in quiescent galaxies toward higher redshift, red mergers at z ≲ 1, and a frosting of relatively young stars from star formation at later times.
Evaluation of a scattering correction method for high energy tomography
NASA Astrophysics Data System (ADS)
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, corresponding to an underestimation of absorption, and produces artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range for large objects, owing to the higher scatter-to-primary ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector. For the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions, giving users an additional advantage.
Moreover, since the approach requires no extra hardware, it is particularly advantageous in cases where experimental complexity must be avoided. The approach has been previously tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes contributing to the scattered radiation. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
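The kernel-superposition idea above can be sketched in one dimension: each detector pixel's primary signal spreads scatter through a kernel chosen by the local object thickness, and the contributions superpose. This is a hypothetical 1-D sketch with binned kernels; the paper's method uses 2-D kernels adapted continuously in thickness:

```python
import numpy as np

def sks_scatter_1d(primary, thickness, kernels, bin_edges):
    """Thickness-adapted scatter kernel superposition (1-D sketch).
    primary: primary signal per pixel; thickness: path thickness per pixel;
    kernels: one scatter kernel per thickness bin; bin_edges: bin boundaries."""
    scatter = np.zeros_like(primary, dtype=float)
    groups = np.digitize(thickness, bin_edges)
    for g, kernel in enumerate(kernels):
        # Pixels in this thickness bin spread scatter via this bin's kernel.
        masked = np.where(groups == g, primary, 0.0)
        scatter += np.convolve(masked, kernel, mode="same")
    return scatter
```

The corrected projection is then the measured projection minus the estimated scatter, typically iterated since the primary itself is initially unknown.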
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, Aaron
2004-01-01
The development of a practical method for accurately calculating the full scattering amplitude without making a partial-wave decomposition is continued. The method is developed in the context of electron-hydrogen scattering, and here exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations, which are solved numerically for the full scattering wave function. The scattering amplitude is most accurately calculated from an integral expression for the amplitude; that integral can be formally simplified and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial-wave results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. Correct accounting for crystal lattice effects influences the estimated values for the probabilities of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied.
Fully relativistic form factor for Thomson scattering.
Palastro, J P; Ross, J S; Pollock, B; Divol, L; Froula, D H; Glenzer, S H
2010-03-01
We derive a fully relativistic form factor for Thomson scattering in unmagnetized plasmas, valid to all orders in the normalized electron velocity β = v/c. The form factor is compared to a previously derived expression in which only the lowest-order corrections in β are included [J. Sheffield (Academic Press, New York, 1975)]. The β-expansion approach is sufficient for electrostatic waves with small phase velocities, such as ion-acoustic waves, but for electron-plasma waves the phase velocities can be near-luminal. At high phase velocities, the electron motion acquires relativistic corrections, including the effective electron mass, the relative motion of the electrons and the electromagnetic wave, and polarization rotation. These relativistic corrections alter the scattered emission of thermal plasma waves, manifesting as changes in both the peak power and the width of the observed Thomson-scattered spectra.
Radiative corrections to elastic proton-electron scattering measured in coincidence
NASA Astrophysics Data System (ADS)
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.
2017-05-01
The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of interaction. These model-independent radiative corrections arise due to emission of the virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup when both the final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.
Maslowski, Alexander; Wang, Adam; Sun, Mingshan; Wareing, Todd; Davis, Ian; Star-Lack, Josh
2018-05-01
To describe Acuros® CTS, a new software tool for rapidly and accurately estimating scatter in x-ray projection images by deterministically solving the linear Boltzmann transport equation (LBTE). The LBTE describes the behavior of particles as they interact with an object across spatial, energy, and directional (propagation) domains. Acuros CTS deterministically solves the LBTE by modeling photon transport associated with an x-ray projection in three main steps: (a) Ray tracing photons from the x-ray source into the object where they experience their first scattering event and form scattering sources. (b) Propagating photons from their first scattering sources across the object in all directions to form second scattering sources, then repeating this process until all high-order scattering sources are computed using the source iteration method. (c) Ray-tracing photons from scattering sources within the object to the detector, accounting for the detector's energy and anti-scatter grid responses. To make this process computationally tractable, a combination of analytical and discrete methods is applied. The three domains are discretized using the Linear Discontinuous Finite Elements, Multigroup, and Discrete Ordinates methods, respectively, which confer the ability to maintain the accuracy of a continuous solution. Furthermore, through the implementation in CUDA, we sought to exploit the parallel computing capabilities of graphics processing units (GPUs) to achieve the speeds required for clinical utilization. Acuros CTS was validated against Geant4 Monte Carlo simulations using two digital phantoms: (a) a water phantom containing lung, air, and bone inserts (WLAB phantom) and (b) a pelvis phantom derived from a clinical CT dataset. For these studies, we modeled the TrueBeam® (Varian Medical Systems, Palo Alto, CA) kV imaging system with a source energy of 125 kVp. 
The imager comprised a 600 μm-thick Cesium Iodide (CsI) scintillator and a 10:1 one-dimensional anti-scatter grid. For the WLAB studies, the full-fan geometry without a bowtie filter was used (with and without the anti-scatter grid). For the pelvis phantom studies, a half-fan geometry with bowtie was used (with the anti-scatter grid). Scattered and primary photon fluences and energies deposited in the detector were recorded. The Acuros CTS and Monte Carlo results demonstrated excellent agreement. For the WLAB studies, the average percent difference between the Monte Carlo- and Acuros-generated scattered photon fluences at the face of the detector was -0.7%. After including the detector response, the average percent differences between the Monte Carlo- and Acuros-generated scatter fractions (SF) were -0.1% without the grid and 0.6% with the grid. For the digital pelvis simulation, the Monte Carlo- and Acuros-generated SFs agreed to within 0.1% on average, despite the scatter-to-primary ratios (SPRs) being as high as 5.5. The Acuros CTS computation time for each scatter image was ~1 s using a single GPU. Acuros CTS enables a fast and accurate calculation of scatter images by deterministically solving the LBTE thus offering a computationally attractive alternative to Monte Carlo methods. Part II describes the application of Acuros CTS to scatter correction of CBCT scans on the TrueBeam system. © 2018 American Association of Physicists in Medicine.
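The source-iteration step described in the abstract (summing first, second, and higher scattering orders until convergence) can be sketched as a Neumann series over a discretized scattering operator. This is an illustrative matrix sketch, not the Acuros CTS implementation; `transfer` is a hypothetical stand-in for the discretized LBTE scattering operator, whose spectral radius must be below 1 for the series to converge:

```python
import numpy as np

def total_scatter(first_scatter, transfer, tol=1e-12, max_orders=500):
    """Source iteration: accumulate scattering orders s1, K s1, K^2 s1, ...
    first_scatter: first-scatter source from ray tracing (step a);
    transfer: discretized scattering operator K (step b)."""
    total = np.zeros_like(first_scatter, dtype=float)
    order = np.asarray(first_scatter, dtype=float)
    for _ in range(max_orders):
        total = total + order
        order = transfer @ order          # next scattering order
        if np.linalg.norm(order) < tol * max(np.linalg.norm(total), 1.0):
            break
    return total
```

The converged total equals (I - K)^(-1) s1; step (c) of the abstract then projects this volumetric scatter source onto the detector.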
Conformational and vibrational reassessment of solid paracetamol
NASA Astrophysics Data System (ADS)
Amado, Ana M.; Azevedo, Celeste; Ribeiro-Claro, Paulo J. A.
2017-08-01
This work addresses the need for a more detailed and accurate knowledge of the vibrational spectrum of the widely used analgesic/antipyretic drug commonly known as paracetamol. A comprehensive spectroscopic analysis - including infrared, Raman, and inelastic neutron scattering (INS) - is combined with a computational approach that accounts for the effects of intermolecular interactions in the solid state. This allows a full reassessment of the vibrational assignments for paracetamol, preventing the propagation of incorrect data analysis and misassignments already found in the literature. In particular, the vibrational modes involving the hydrogen-bonded N-H and O-H groups are correctly reallocated to bands shifted by up to 300 cm-1 relative to previous assignments.
Coastal High-resolution Observations and Remote Sensing of Ecosystems (C-HORSE)
NASA Technical Reports Server (NTRS)
Guild, Liane
2016-01-01
Coastal benthic marine ecosystems, such as coral reefs, seagrass beds, and kelp forests, are highly productive as well as ecologically and commercially important resources. These systems are vulnerable to degraded water quality due to coastal development, terrestrial run-off, and harmful algal blooms. Measurements of these features are important for understanding linkages with land-based sources of pollution and impacts on coastal ecosystems. Accurate remote sensing of coastal benthic (shallow-water) ecosystems and water quality is complicated by atmospheric scattering and absorption (approximately 80% or more of the signal), sun glint from the sea surface, and water-column scattering (e.g., turbidity). Further, sensor challenges related to signal-to-noise ratio (SNR) over optically dark targets, as well as insufficient radiometric calibration, limit the value of coastal remotely sensed data. Atmospheric correction of satellite and airborne remotely sensed radiance data is crucial for deriving accurate water-leaving radiance in coastal waters. C-HORSE seeks to optimize coastal remote sensing measurements by using a novel airborne instrument suite that will bridge calibration, validation, and research capabilities of bio-optical measurements from the sea to the high-altitude remote sensing platform. The primary goal of C-HORSE is to facilitate enhanced optical observations of coastal ecosystems using state-of-the-art portable microradiometers with 19 targeted spectral channels and flight planning to optimize measurements, further supporting current and future remote sensing missions.
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared against data taken by a Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that agreed best with the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The sizing-error correction algorithm developed by Lock and Hovenac (1989) was shown to be within 25 percent of the phase Doppler measurements at number densities as high as 3000/cc.
[Development of a Striatal and Skull Phantom for Quantitative 123I-FP-CIT SPECT].
Ishiguro, Masanobu; Uno, Masaki; Miyazaki, Takuma; Kataoka, Yumi; Toyama, Hiroshi; Ichihara, Takashi
123I-labelled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (123I-FP-CIT) single photon emission computed tomography (SPECT) images are used for the differential diagnosis of conditions such as Parkinson's disease (PD). The specific binding ratio (SBR) is affected by scattering and attenuation in SPECT imaging, because gender and age lead to changes in skull density. It is therefore necessary to clarify and correct this influence using a phantom simulating the skull. The purpose of this study was to develop phantoms for evaluating scattering and attenuation correction. Skull phantoms were prepared based on measurements of the average computed tomography (CT) value and average skull thickness of 12 males and 16 females. 123I-FP-CIT SPECT imaging of a striatal phantom was performed with these skull phantoms, reproducing both normal and PD conditions. SPECT images were reconstructed with scattering and attenuation correction. SBR with partial volume effect correction (SBRact) and conventional SBR (SBRBolt) were measured and compared. The striatal and skull phantoms with 123I-FP-CIT reproduced normal accumulation, the PD disease state, and the influence of skull density on SPECT imaging. The error rate relative to the true SBR was much smaller for SBRact than for SBRBolt. The effect of changing skull density on SBR could be corrected by scattering and attenuation correction in 123I-FP-CIT SPECT imaging. The combination of the triple energy window method and the CT-based attenuation correction method would be the best correction method for SBRact.
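The SBR quantity compared above can be sketched from mean count densities in the striatal and reference volumes of interest. This is a minimal sketch of the basic count-density definition only; the Bolt-style SBRBolt uses a whole-striatal VOI with a volume term, and the partial-volume-corrected SBRact adds further corrections not shown here:

```python
def specific_binding_ratio(striatal_density, reference_density):
    """Conventional SBR: (C_striatum - C_reference) / C_reference,
    where C are mean count densities in the striatal and reference
    (e.g., occipital or whole-brain non-striatal) volumes of interest."""
    return (striatal_density - reference_density) / reference_density
```

For example, a striatal count density five times the reference gives SBR = 4; scatter and attenuation bias both densities, which is why the correction scheme chosen changes the measured SBR.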
NASA Astrophysics Data System (ADS)
Qattan, I. A.
2017-06-01
I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form-factor discrepancy while remaining consistent with the known Q2 → 0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the underlying statistical assumptions remain valid.
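The median-filter detection step above can be sketched as follows: flag spectral points that deviate from a running median by more than a robust threshold, then replace them with the local median. The window length and the MAD-based threshold below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def despike(spectrum, window=5, k=5.0):
    """Flag points deviating from a running median by more than k robust
    sigmas (MAD-based) and replace them with the local median value."""
    spectrum = np.asarray(spectrum, dtype=float)
    half = window // 2
    padded = np.pad(spectrum, half, mode="edge")
    med = np.array([np.median(padded[i:i + window])
                    for i in range(len(spectrum))])
    resid = spectrum - med
    sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12   # MAD -> sigma
    bad = np.abs(resid) > k * sigma
    return np.where(bad, med, spectrum), bad
```

The median is used, rather than a mean, precisely because narrowband interference spikes would otherwise bias the baseline estimate they are being tested against.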
A Unified Treatment of the Acoustic and Elastic Scattered Waves from Fluid-Elastic Media
NASA Astrophysics Data System (ADS)
Denis, Max Fernand
In this thesis, contributions are made to the numerical modeling of the scattering fields from fluid-filled poroelastic materials. Of particular interest are highly porous materials that exhibit strong contrast to the saturating fluid. A Biot analysis of the porous medium serves as the starting point for the elastic-solid and pore-fluid governing equations of motion. The longitudinal scattering waves of the elastic-solid mode and the pore-fluid mode are modeled by the Kirchhoff-Helmholtz integral equation. The integral equation is evaluated using a series approximation describing the successive perturbation of the material contrasts. To extend the series' validity into larger domains, rational fraction extrapolation methods are employed. The local Padé approximant procedure is a technique that allows one to extrapolate from a scattered field of small contrast to larger values, using Padé approximants. To ensure the accuracy of the numerical model, comparisons are made with the exact solution of scattering from a fluid sphere. Mean absolute error analyses yield convergent and accurate results. In addition, the numerical model correctly predicts the Bragg peaks for a periodic lattice of fluid spheres. In the case of trabecular bones, the far-field scattering pressure attenuation is a superposition of the elastic-solid mode and pore-fluid mode generated waves from the surrounding fluid and poroelastic boundaries. The attenuation is linearly dependent on frequency between 0.2 and 0.6 MHz. The slope of the attenuation is nonlinear with porosity and does not reflect the mechanical properties of the trabecular bone. The attenuation shows the anisotropic effects of the trabecular structure. Thus, ultrasound can possibly be employed to non-invasively predict the principal structural orientation of trabecular bones.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
NASA Astrophysics Data System (ADS)
Hsu, Juno; Prather, Michael J.; Cameron-Smith, Philip; Veidenbaum, Alex; Nicolau, Alex
2017-07-01
Solar-J is a comprehensive radiative transfer model for the solar spectrum that addresses the needs of both solar heating and photochemistry in Earth system models. Solar-J is a spectral extension of Cloud-J, a standard in many chemical models that calculates photolysis rates in the 0.18-0.8 µm region. The Cloud-J core consists of an eight-stream scattering, plane-parallel radiative transfer solver with corrections for sphericity. Cloud-J uses cloud quadrature to accurately average over correlated cloud layers. It uses the scattering phase function of aerosols and clouds expanded to eighth order and thus avoids isotropic-equivalent approximations prevalent in most solar heating codes. The spectral extension from 0.8 to 12 µm enables calculation of both scattered and absorbed sunlight and thus aerosol direct radiative effects and heating rates throughout the Earth's atmosphere. The Solar-J extension adopts the correlated-k gas absorption bins, primarily water vapor, from the shortwave Rapid Radiative Transfer Model for general circulation model (GCM) applications (RRTMG-SW). Solar-J successfully matches RRTMG-SW's tropospheric heating profile in a clear-sky, aerosol-free, tropical atmosphere. We compare both codes in cloudy atmospheres with a liquid-water stratus cloud and an ice-crystal cirrus cloud. For the stratus cloud, both models use the same physical properties, and we find a systematic low bias of about 3 % in planetary albedo across all solar zenith angles caused by RRTMG-SW's two-stream scattering. 
Discrepancies with the cirrus cloud using any of RRTMG-SW's three different parameterizations are as large as about 20-40 % depending on the solar zenith angles and occur throughout the atmosphere. Effectively, Solar-J has combined the best components of RRTMG-SW and Cloud-J to build a high-fidelity module for the scattering and absorption of sunlight in the Earth's atmosphere, for which the three major components - wavelength integration, scattering, and averaging over cloud fields - all have comparably small errors. More accurate solutions with Solar-J come with increased computational costs, about 5 times that of RRTMG-SW for a single atmosphere. There are options for reduced costs or computational acceleration that would bring costs down while maintaining improved fidelity and balanced errors.
Hsu, Juno; Prather, Michael J.; Cameron-Smith, Philip; ...
2017-01-01
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to solve a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1–2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
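The fitting idea above, representing the slowly varying scatter distribution by a small number of sine and cosine terms so that a noisy, low-photon-count MC estimate can be denoised, can be sketched in one dimension as follows. The synthetic profile and function names are illustrative, not taken from the thesis:

```python
import numpy as np

def fourier_fit(x, y, n_terms=3):
    """Fit y(x) with a truncated Fourier series a0 + sum_k [a_k cos + b_k sin]
    via linear least squares; returns the smooth fit evaluated on x."""
    L = x.max() - x.min()
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        w = 2 * np.pi * k * (x - x.min()) / L
        cols.append(np.cos(w))
        cols.append(np.sin(w))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Synthetic "MC scatter profile": smooth low-frequency shape + noise standing
# in for the statistical error of a simulation with few photon tracks.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
truth = 100.0 + 40.0 * np.cos(2 * np.pi * x) + 15.0 * np.sin(4 * np.pi * x)
noisy = truth + rng.normal(0.0, 10.0, x.size)

smooth = fourier_fit(x, noisy, n_terms=3)
print(np.abs(smooth - truth).mean() < np.abs(noisy - truth).mean())  # fit is closer to truth
```

Because the basis has only a handful of low-frequency terms, the least-squares fit averages away most of the high-frequency Monte Carlo noise, which is what allows the photon-track count (and hence MC runtime) to be reduced so sharply.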
NASA Astrophysics Data System (ADS)
Abbaszadeh, Shiva; Chinn, Garry; Levin, Craig S.
2018-01-01
The kinematics of Compton scatter can be used to estimate the interaction sequence of inter-crystal scatter interactions in 3D position-sensitive cadmium zinc telluride (CZT) detectors. However, in the case of intra-crystal scatter in a 'cross-strip' CZT detector slab, multiple anode and cathode strips may be triggered, creating position ambiguity due to uncertainty in the possible combinations of anode-cathode pairings. As a consequence, methods such as energy-weighted centroid positioning are not applicable. In practice, since the event position is uncertain, these intra-crystal scatter events are discarded. In this work, we studied using Compton kinematics and a 'direction difference angle' to correctly identify the anode-cathode pair corresponding to the first interaction position in an intra-crystal scatter event. GATE simulation studies of a NEMA NU4 image quality phantom in a small-animal positron emission tomography (PET) system under development, composed of 192 CZT crystals of 40 mm × 40 mm × 5 mm, show that 47% of all multiple-interaction photon events (MIPEs) are intra-crystal scatter with a 100 keV lower energy threshold per interaction. The sensitivity of the system increases from 0.6 to 4.10 (using 10 keV as the system lower energy threshold) by including rather than discarding inter- and intra-crystal scatter. The contrast-to-noise ratio (CNR) also increases from 5.81 ± 0.3 to 12.53 ± 0.37. It was shown that a higher energy threshold limits the capability of the system to detect MIPEs and reduces CNR. Results indicate a sensitivity increase (4.1 to 5.88) when raising the lower energy threshold (10 keV to 100 keV) for the case of only two-interaction events. In order to detect MIPEs accurately, a low noise system capable of a low energy threshold (10 keV) per interaction is desired.
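The sequencing test described above can be sketched as follows: for a two-interaction event, each candidate first-hit position implies a geometric scattering angle, which is compared against the angle implied by Compton kinematics for the energy deposited there; the ordering with the smallest "direction difference" is kept. This is a toy sketch under simplified assumptions (perfect energy and position measurement, known incident direction); the function names and event geometry are illustrative, not the paper's implementation:

```python
import numpy as np

MEC2 = 511.0  # electron rest energy, keV

def compton_cos_theta(e0, e_dep):
    """Kinematic cos(theta) for a photon of energy e0 that deposits e_dep."""
    e_scat = e0 - e_dep
    return 1.0 - MEC2 * (1.0 / e_scat - 1.0 / e0)

def pick_first_interaction(e0, deposits, positions, source_dir):
    """For each candidate first hit of a two-interaction event, compare the
    geometric scattering angle with the Compton-kinematic one and return the
    index whose angles agree best (smallest direction difference)."""
    best, best_err = None, np.inf
    for i in range(2):
        j = 1 - i
        out = positions[j] - positions[i]
        out = out / np.linalg.norm(out)
        cos_geo = float(np.dot(source_dir, out))
        cos_kin = compton_cos_theta(e0, deposits[i])
        if abs(cos_kin) > 1.0:          # kinematically forbidden ordering
            continue
        err = abs(cos_geo - cos_kin)
        if err < best_err:
            best, best_err = i, err
    return best

# Toy event: a 511 keV photon travelling along +z scatters at the origin by 60 deg.
e0 = 511.0
theta = np.deg2rad(60.0)
e_dep1 = e0 - e0 / (1.0 + (e0 / MEC2) * (1.0 - np.cos(theta)))  # deposit for 60 deg
pos = np.array([[0.0, 0.0, 0.0], [np.sin(theta), 0.0, np.cos(theta)]])
first = pick_first_interaction(e0, [e_dep1, e0 - e_dep1], pos, np.array([0.0, 0.0, 1.0]))
print(first)  # → 0
```

For the correct ordering the geometric and kinematic angles agree, while the reversed ordering is either kinematically inconsistent or forbidden, which is what breaks the anode-cathode pairing ambiguity.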
Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism
NASA Astrophysics Data System (ADS)
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.
2014-11-01
The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr where phase shifts δ_{l>l_tr} are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_{l>l_tr} set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Vickie E.; Borreguero, Jose M.; Bhowmik, Debsindhu
Graphical abstract - Highlights:
• An automated workflow to optimize force-field parameters.
• Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA.
• The mechanism relies on molecular dynamics simulation and neutron scattering experimental data.
• The workflow can be generalized to any other experimental and simulation techniques.
Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D2O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; ...
2014-11-04
Non-cancellation of electroweak logarithms in high-energy scattering
Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...
2015-01-01
We study electroweak Sudakov corrections in high energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions involving the rates gg → tt̄, bb̄, tb̄W, tt̄Z, bb̄Z, tt̄H, bb̄H. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.
Modeling boundary measurements of scattered light using the corrected diffusion approximation
Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.
2012-01-01
We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102
Quadratic electroweak corrections for polarized Moller scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity-violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and for experiments at future high-energy electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Library based x-ray scatter correction for dedicated cone beam breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views.
On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors' approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.
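The library lookup and subtraction step can be sketched as follows. The library values, breast sizes, and 1-D projections here are placeholders; a real implementation would store full MC-simulated 2-D scatter distributions per breast model and include the spatial translation step described above:

```python
import numpy as np

# Hypothetical precomputed library: breast diameter (cm) -> scatter profile.
# Each entry stands in for a full Monte Carlo-simulated scatter distribution.
library = {
    10: np.full(8, 120.0),
    12: np.full(8, 150.0),
    14: np.full(8, 185.0),
}

def correct_projection(measured, diameter_cm):
    """Select the library scatter for the nearest modeled breast size and
    subtract it from the measured projection to recover the primary signal."""
    nearest = min(library, key=lambda d: abs(d - diameter_cm))
    primary = measured - library[nearest]
    return np.clip(primary, 0.0, None)   # primary fluence cannot be negative

measured = np.full(8, 500.0)             # toy measured projection (primary + scatter)
corrected = correct_projection(measured, diameter_cm=12.7)
print(corrected)                         # scatter of the nearest (12 cm) model removed
```

Because the expensive MC simulations are done once, offline, the per-scan cost reduces to a dictionary lookup and a subtraction, which is the source of the method's speed.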
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
2016-06-15
Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in coronal view and from 6.71% to 3.20% in sagittal view. The average CNR is improved by a factor of 1.38 in coronal view and 1.26 in sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches.
Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
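A minimal 1-D sketch of the estimation scheme described above: subtract a simulated scatter-free primary from the measurement, then keep only the trusted low-frequency Fourier content of the residual, since scatter is spatially smooth. The synthetic data and function names are illustrative, not from the abstract:

```python
import numpy as np

def estimate_scatter(measured, primary_sim, keep_freqs=4):
    """Deterministic scatter estimate: subtract the simulated scatter-free
    primary from the measurement, then keep only the lowest Fourier
    frequencies of the residual (high frequencies are untrusted noise)."""
    residual = measured - primary_sim
    F = np.fft.rfft(residual)
    F[keep_freqs:] = 0.0                 # discard untrusted high-frequency content
    return np.fft.irfft(F, n=measured.size)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
primary = 1000.0 + 200.0 * np.cos(x)                          # toy scatter-free primary
scatter = 300.0 + 100.0 * np.cos(x) + 50.0 * np.cos(2 * x)    # smooth, low frequency
measured = primary + scatter + rng.normal(0.0, 20.0, x.size)  # noisy measurement

est = estimate_scatter(measured, primary)
print(np.abs(est - scatter).mean() < np.abs((measured - primary) - scatter).mean())
```

The Fourier-domain refinement is what keeps the correction fast (an FFT per projection) and robust to segmentation noise in the forward-projected primary.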
NASA Technical Reports Server (NTRS)
Lin, Z.; Stamnes, S.; Jin, Z.; Laszlo, I.; Tsay, S. C.; Wiscombe, W. J.; Stamnes, K.
2015-01-01
A successor version 3 of DISORT (DISORT3) is presented with important upgrades that improve the accuracy, efficiency, and stability of the algorithm. Compared with version 2 (DISORT2, released in 2000), these upgrades include (a) a redesigned BRDF computation that improves both speed and accuracy, (b) a revised treatment of the single scattering correction, and (c) additional efficiency and stability upgrades for beam sources. In DISORT3 the BRDF computation is improved in the following three ways: (i) the Fourier decomposition is prepared "off-line", thus avoiding the repeated internal computations done in DISORT2; (ii) a large enough number of terms in the Fourier expansion of the BRDF is employed to guarantee accurate values of the expansion coefficients (default is 200 instead of 50 in DISORT2); (iii) in the post-processing step the reflection of the direct attenuated beam from the lower boundary is included, resulting in a more accurate single scattering correction. These improvements in the treatment of the BRDF have led to improved accuracy and a several-fold increase in speed. In addition, the stability of beam sources has been improved by removing a singularity occurring when the cosine of the incident beam angle is too close to the reciprocal of any of the eigenvalues. The efficiency for beam sources has been further improved by reducing the dimension of the linear system of equations that must be solved to obtain the particular solutions by a factor of 2 (compared to DISORT2), and by replacing the LINPACK routines used in DISORT2 with LAPACK 3.5 in DISORT3. These beam source upgrades bring enhanced stability and an additional 5–7% improvement in speed. Numerical results are provided to demonstrate and quantify the improvements in accuracy and efficiency of DISORT3 compared to DISORT2.
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport
NASA Technical Reports Server (NTRS)
Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.
2008-01-01
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.
Generalized model screening potentials for Fermi-Dirac plasmas
NASA Astrophysics Data System (ADS)
Akbari-Moghanjoughi, M.
2016-04-01
In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, structure factor, and Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are seemingly apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that the maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectrum region between hard X-rays and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to maximally Thomson-scatter at slightly higher wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Nathan L.; Blunden, Peter G.; Melnitchouk, Wally
2015-12-08
We examine the interference γZ box corrections to parity-violating elastic electron-proton scattering in the light of the recent observation of quark-hadron duality in parity-violating deep-inelastic scattering from the deuteron, and the approximate isospin independence of duality in the electromagnetic nucleon structure functions down to Q^2 ≈ 1 GeV^2. Assuming that a similar behavior also holds for the γZ proton structure functions, we find that duality constrains the γZ box correction to the proton's weak charge to be Re □_γZ^V = (5.4 ± 0.4) × 10^-3 at the kinematics of the Qweak experiment. Within the same model we also provide estimates of the γZ corrections for future parity-violating experiments, such as MOLLER at Jefferson Lab and MESA at Mainz.
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
NASA Astrophysics Data System (ADS)
Shipley, Heath; Papovich, Casey
2015-08-01
We provide a new robust star-formation rate (SFR) calibration using the luminosity from polycyclic aromatic hydrocarbon (PAH) molecules. The PAH features emit strongly in the mid-infrared (mid-IR; 3-19 μm), mitigating dust extinction, and they are very luminous, containing 5-10% of the total IR luminosity in galaxies. We derive the calibration of the PAH luminosity as an SFR indicator using a sample of 105 star-forming galaxies covering a range of total IR luminosity, LIR = L(8-1000 μm) = 10^9 - 10^12 L⊙, and redshift 0 < z < 0.6. The PAH luminosity correlates linearly with the SFR as measured by the dust-corrected Hα luminosity (using the sum of the Hα and rest-frame 24 μm luminosity from Kennicutt et al. 2009), with a tight scatter of ~0.15 dex, comparable to the scatter in the dust-corrected Hα SFRs and Paα SFRs. We show this relation is sensitive to galaxy metallicity: the PAH luminosity of galaxies with Z < 0.7 Z⊙ departs from the linear SFR relationship, but in a well-behaved manner, and we derive a correction for galaxies below solar metallicity. As a case study for observations with JWST, we apply the PAH SFR calibration to a sample of lensed galaxies at 1 < z < 3 with Spitzer Infrared Spectrograph (IRS) data, and we demonstrate the utility of PAHs to derive SFRs as accurate as those available from any other indicator. This new SFR indicator will be useful for probing the peak of the SFR density of the universe (1 < z < 3) and for studying the coevolution of star formation and supermassive black hole accretion in galaxies.
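The shape of such a calibration, linear in PAH luminosity with a correction applied below 0.7 Z⊙, can be sketched as follows. The scale factor `a` and correction exponent `gamma` here are illustrative placeholders, not the calibration values derived in the abstract:

```python
import numpy as np

def sfr_from_pah(l_pah, z_ratio, a=1.0e-42, gamma=2.0):
    """Toy PAH-based SFR: linear in L_PAH (erg/s), with a power-law metallicity
    correction applied below 0.7 Z_sun. The coefficients a and gamma are
    hypothetical placeholders, not values from the paper."""
    sfr = a * np.asarray(l_pah, dtype=float)
    z = np.asarray(z_ratio, dtype=float)
    corr = np.where(z < 0.7, (z / 0.7) ** gamma, 1.0)  # PAH deficit at low Z
    return sfr / corr

print(sfr_from_pah(1e43, 1.0))    # solar metallicity: no correction applied
print(sfr_from_pah(1e43, 0.35))   # sub-solar: PAH deficit corrected upward
```

The key property is that the correction only activates below the 0.7 Z⊙ threshold, leaving the linear relation untouched for metal-rich galaxies.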
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to partial volume effects in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Kato, Haruhisa; Nakamura, Ayako; Takahashi, Kayori; Kinugasa, Shinichi
2012-01-01
Accurate determination of the intensity-average diameter of polystyrene latex (PS-latex) by dynamic light scattering (DLS) was carried out through extrapolation of both the concentration of PS-latex and the observed scattering angle. Intensity-average diameter and size distribution were reliably determined by asymmetric flow field flow fractionation (AFFFF) using multi-angle light scattering (MALS) with consideration of band broadening in AFFFF separation. The intensity-average diameter determined by DLS and AFFFF-MALS agreed well within the estimated uncertainties, although the size distribution of PS-latex determined by DLS was less reliable in comparison with that determined by AFFFF-MALS. PMID:28348293
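The double extrapolation described above, taking the apparent DLS diameter to zero concentration and zero scattering angle (i.e., zero q^2), amounts to a joint linear fit whose intercept is the intensity-average diameter. A minimal sketch on synthetic, noise-free data (the numbers are illustrative, not measurement values from the paper):

```python
import numpy as np

# Toy apparent DLS diameters measured at several concentrations and scattering
# angles (expressed as q^2); both dependencies are assumed linear, so the true
# intensity-average diameter is the joint intercept at c -> 0 and q^2 -> 0.
c = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0])      # concentration, arb. units
q2 = np.array([1.0, 4.0, 1.0, 4.0, 1.0, 4.0])     # (scattering vector)^2, arb. units
d_app = 100.0 + 3.0 * c + 0.8 * q2                # synthetic apparent diameters, nm

# Least-squares fit of d_app = d0 + k_c * c + k_q * q^2
A = np.column_stack([np.ones_like(c), c, q2])
coef, *_ = np.linalg.lstsq(A, d_app, rcond=None)
d0 = coef[0]                                       # extrapolated diameter
print(round(d0, 3))  # → 100.0
```

With real data the fit residuals would also feed the uncertainty estimate on d0, which is what makes the DLS and AFFFF-MALS diameters comparable "within the estimated uncertainties".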
Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Fantini, Sergio; Fabiani, Monica; Gratton, Gabriele
2017-01-01
Abstract. Near infrared (NIR) light has been widely used for measuring changes in hemoglobin concentration in the human brain (functional NIR spectroscopy, fNIRS). fNIRS is based on the differential measurement and estimation of absorption perturbations, which, in turn, are based on correctly estimating the absolute parameters of light propagation. To do so, it is essential to accurately characterize the baseline optical properties of tissue (absorption and reduced scattering coefficients). However, because of the diffusive properties of the medium, separate determination of absorption and scattering across the head is challenging. The effective attenuation coefficient (EAC), which is proportional to the geometric mean of absorption and reduced scattering coefficients, can be estimated in a simpler fashion by multidistance light decay measurements. EAC mapping could be of interest for the scientific community because of its absolute information content, and because light propagation is governed by the EAC for source–detector distances exceeding 1 cm, which sense depths extending beyond the scalp and skull layers. Here, we report an EAC mapping procedure that can be applied to standard fNIRS recordings, yielding topographic maps with 2- to 3-cm resolution. Application to human data indicates the importance of venous sinuses in determining regional EAC variations, a factor often overlooked. PMID:28466026
Andrew, Rex K; Ganse, Andrew; White, Andrew W; Mercer, James A; Dzieciuch, Matthew A; Worcester, Peter F; Colosi, John A
2016-07-01
Observations of the spread of wander-corrected averaged pulses propagated over 510 km for 54 h in the Philippine Sea are compared to Monte Carlo predictions using a parabolic equation and path-integral predictions. Two simultaneous m-sequence signals are used, one centered at 200 Hz, the other at 300 Hz; both have a bandwidth of 50 Hz. The internal wave field is estimated at slightly less than unity Garrett-Munk strength. The observed spreads in all the early ray-like arrivals are very small, <1 ms (for pulse widths of 17 and 14 ms), which are on the order of the sampling period. Monte Carlo predictions show similar very small spreads. Pulse spread is one consequence of scattering, which is assumed to occur primarily at upper ocean depths where scattering processes are strongest and upward propagating rays refract downward. If scattering effects in early ray-like arrivals accumulate with increasing upper turning points, spread might show a similar dependence. Real and simulation results show no such dependence. Path-integral theory prediction of spread is accurate for the earliest ray-like arrivals, but appears to be increasingly biased high for later ray-like arrivals, which have more upper turning points.
NASA Astrophysics Data System (ADS)
Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K.; Ramanujam, Nirmala
2011-03-01
A significant challenge in detecting cervical pre-cancer in low-resource settings is the lack of effective screening facilities and trained personnel to detect the disease before it is advanced. Light based technologies, particularly quantitative optical spectroscopy, have the potential to provide an effective, low cost, and portable solution for cervical pre-cancer screening in these communities. We have developed and characterized a portable USB-powered optical spectroscopic system to quantify total hemoglobin content, hemoglobin saturation, and reduced scattering coefficient of cervical tissue in vivo. The system consists of a high-power LED as light source, a bifurcated fiber optic assembly, and two USB spectrometers for sample and calibration spectrum acquisition. The system was subsequently tested in Leogane, Haiti, where diffuse reflectance spectra from 33 colposcopically normal sites in 21 patients were acquired. Two different calibration methods, i.e., a post-study diffuse reflectance standard measurement and a real time self-calibration channel, were studied. Our results suggest that a self-calibration channel enabled more accurate extraction of scattering contrast through simultaneous real-time correction of intensity drifts in the system. A self-calibration system also minimizes operator bias and required training. Hence, future contact spectroscopy or imaging systems should incorporate a self-calibration channel to reliably extract scattering contrast.
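The benefit of a real-time self-calibration channel can be illustrated with a toy calculation: any source-intensity drift that multiplies both fiber channels equally cancels in the channel ratio. This is a minimal sketch under that assumption; the spectra, drift factor, and count levels below are hypothetical.

```python
import numpy as np

def self_calibrated(sample_counts, calib_counts):
    """Divide the sample channel by the concurrently acquired calibration
    channel; intensity drift common to both channels cancels."""
    return sample_counts / calib_counts

# Hypothetical spectra: an LED drift multiplies both fiber channels equally
wavelengths = np.linspace(450.0, 600.0, 151)                   # nm
true_reflectance = 0.4 + 0.2 * np.exp(-((wavelengths - 540.0) / 20.0) ** 2)
drift = 0.7                                                    # 30% source droop
sample = drift * true_reflectance * 1e4                        # sample counts
calib = drift * np.ones_like(wavelengths) * 1e4                # calibration counts
recovered = self_calibrated(sample, calib)
print(np.allclose(recovered, true_reflectance))                # drift removed
```

A post-study calibration measurement, by contrast, captures the system response at only one time point, which is why the abstract finds the real-time channel more accurate for scattering contrast.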
Neutron Polarization Analysis for Biphasic Solvent Extraction Systems
Motokawa, Ryuhei; Endo, Hitoshi; Nagao, Michihiro; ...
2016-06-16
Here we performed neutron polarization analysis (NPA) of extracted organic phases containing complexes, comprised of Zr(NO3)4 and tri-n-butyl phosphate, which enabled decomposition of the intensity distribution of small-angle neutron scattering (SANS) into coherent and incoherent scattering components. The coherent scattering intensity, containing structural information, and the incoherent scattering compete over a wide range of the scattering vector magnitude, q, specifically when q is larger than q* ≈ 1/Rg, where Rg is the radius of gyration of the scatterer. Therefore, it is important to determine the incoherent scattering intensity exactly to perform an accurate structural analysis from SANS data when Rg is small, as for the aforementioned extracted coordination species. Although NPA is the best method for evaluating the incoherent scattering component for accurately determining the coherent scattering in SANS, it is not used frequently in SANS data analysis because it is technically challenging. In this study, we successfully demonstrated that experimental determination of the incoherent scattering using NPA is suitable for sample systems containing a small scatterer with a weak coherent scattering intensity, such as extracted complexes in biphasic solvent extraction systems.
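The decomposition NPA provides can be sketched with the standard spin-channel relations: the spin-flip channel carries 2/3 of the spin-incoherent scattering, while the non-spin-flip channel carries the remaining 1/3 plus all coherent scattering. The q-dependence and intensity levels below are hypothetical, chosen to mimic a weak coherent signal over a flat incoherent floor.

```python
import numpy as np

def decompose_sans(i_nsf, i_sf):
    """Separate coherent and spin-incoherent SANS intensities from the
    non-spin-flip (nsf) and spin-flip (sf) channels:
        i_nsf = I_coh + (1/3) I_inc,   i_sf = (2/3) I_inc
    """
    i_inc = 1.5 * i_sf
    i_coh = i_nsf - 0.5 * i_sf
    return i_coh, i_inc

# Hypothetical data: Guinier-like coherent term over a flat incoherent floor
q = np.linspace(0.01, 0.5, 50)                  # A^-1
coh_true = 5.0 * np.exp(-(q * 30.0) ** 2 / 3.0)  # small scatterer, Rg = 30 A
inc_true = np.full_like(q, 0.8)                  # q-independent incoherent level
i_nsf = coh_true + inc_true / 3.0
i_sf = (2.0 / 3.0) * inc_true
coh, inc = decompose_sans(i_nsf, i_sf)
print(np.allclose(coh, coh_true) and np.allclose(inc, inc_true))
```

The sketch shows why the method matters when Rg is small: past q* ≈ 1/Rg the coherent term decays below the incoherent floor, so subtracting an inaccurate incoherent estimate corrupts exactly the region used for structural analysis.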
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan(c)600 phantom, an anthropomorphic chest phantom, and the Catphan(c)600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan(c)600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method.
The scatter-to-primary ratio estimation error on the Catphan(c)600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation, possessing a low transmission factor and a high modulation frequency, is preferred for high scatter correction accuracy.
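The core algebra behind primary modulation can be sketched in a deliberately simplified two-pixel form: if neighboring detector pixels sit behind modulator strips of known transmission, and scatter is assumed smooth enough to be equal at both pixels, primary and scatter separate by linear algebra. This toy model is an illustration of the principle only, not the authors' full frequency-domain method; the transmission factors and signal levels are hypothetical.

```python
def demodulate(m_high, m_low, t_high, t_low):
    """Recover primary p and scatter s from two neighboring pixels behind
    modulator strips of transmission t_high and t_low, assuming the scatter
    signal s is the same at both pixels:
        m_high = t_high * p + s
        m_low  = t_low  * p + s
    """
    p = (m_high - m_low) / (t_high - t_low)
    s = m_high - t_high * p
    return p, s

# Hypothetical strong (copper-like) modulator: low transmission factor
p_true, s_true = 1000.0, 400.0
t_high, t_low = 1.0, 0.4
m_high = t_high * p_true + s_true
m_low = t_low * p_true + s_true
p_est, s_est = demodulate(m_high, m_low, t_high, t_low)
print(round(p_est, 6), round(s_est, 6))   # recovers 1000.0 and 400.0
```

The toy model also makes the conclusion plausible: a stronger modulation (larger gap between t_high and t_low) makes the division better conditioned, which is why the copper modulator outperforms the aluminum one.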
Hwang, Dusun; Yoon, Dong-Jin; Kwon, Il-Bum; Seo, Dae-Cheol; Chung, Youngjoo
2010-05-10
A novel method for auto-correction of a fiber optic distributed temperature sensor using anti-Stokes Raman back-scattering and its reflected signal is presented. This method processes two parts of the measured signal: one part is the normal back-scattered anti-Stokes signal, and the other is the reflected signal, which eliminates not only the effect of local losses due to micro-bending or damage on the fiber but also the differential attenuation. Because beams of the same wavelength are used to cancel out local variations in the transmission medium, there is inherently no differential attenuation. The auto-correction concept was verified by bending experiments at different bending points. (c) 2010 Optical Society of America.
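The cancellation idea can be sketched with a double-ended model: the reflected trace is equivalent to probing the fiber from the far end, and the geometric mean of the two traces turns any localized bend loss into a position-independent constant that drops out on normalization. This is a simplified single-wavelength illustration, not the published signal chain; positions, loss values, and the "hot section" below are invented.

```python
import numpy as np

def auto_corrected(trace_direct, trace_reflected):
    """Geometric mean of the direct trace and its far-end (reflected)
    counterpart, normalized to the fiber start: a localized bend loss
    scales every position equally and cancels."""
    g = np.sqrt(trace_direct * trace_reflected)
    return g / g[0]

z = np.linspace(0.0, 1000.0, 201)            # position along fiber, m
signal = 1.0 + 0.05 * (z > 600)              # a 5% "hot" section past 600 m
alpha = 3e-4                                 # background loss per m
a_in = np.exp(-alpha * z) * np.where(z > 300, 0.5, 1.0)           # launch -> z
a_out = np.exp(-alpha * (z[-1] - z)) * np.where(z > 300, 1.0, 0.5)  # z -> far end
direct = signal * a_in**2                    # out-and-back, with 3 dB bend at 300 m
reflected = signal * a_out**2                # far-end equivalent trace
corrected = auto_corrected(direct, reflected)
print(np.allclose(corrected, signal))        # bend loss removed, hot section kept
```

In the uncorrected direct trace the bend at 300 m would masquerade as a temperature step; in the geometric-mean trace only the genuine hot section survives.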
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies and a reconstructed sky image from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
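At its core, gap fraction from a cover photo is the fraction of pixels classified as sky. A minimal sketch of that final step follows; the image, radiance ranges, and threshold are synthetic placeholders (the abstract's contribution is precisely how to make this classification objective via scattering correction and sky reconstruction, which is not reproduced here).

```python
import numpy as np

def gap_fraction(image, threshold):
    """Gap fraction = sky pixels / total pixels in an upward-facing cover
    photo; pixels brighter than the threshold are classified as sky."""
    return (image > threshold).mean()

# Synthetic 100x100 single-band image: bright sky gaps over dark foliage
rng = np.random.default_rng(0)
image = rng.uniform(0.05, 0.25, size=(100, 100))        # foliage radiance
image[:20, :] = rng.uniform(0.8, 1.0, size=(20, 100))   # 20% of pixels are sky
print(gap_fraction(image, threshold=0.5))               # -> 0.2
```

With well-separated foliage and sky radiances the result is insensitive to the exact threshold, which is the property the scattering-corrected raw image is designed to restore under real, variable sky conditions.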
An empirical correction for moderate multiple scattering in super-heterodyne light scattering.
Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas
2017-05-28
Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Bemler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body problem is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.
Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S
2006-04-21
In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays were measured with a Cd(0.9)Zn(0.1)Te (CZT) detector for scattering angles between 30° and 165°. Measurement and correction processes were evaluated by comparing the half-value layer (HVL) and air kerma calculated from the corrected spectra with values measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with spectra calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system were characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose were evaluated from the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent overestimates the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.
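Computing an HVL from a measured spectrum, as done above to validate the correction process, amounts to finding the absorber thickness that halves the attenuated air-kerma integral. A minimal bisection sketch follows; the single-bin spectrum, kerma weight, and attenuation coefficient are illustrative placeholders, chosen so the monoenergetic limit (HVL = ln 2 / μ) checks the code.

```python
import numpy as np

def hvl_mm(fluence, kerma_weight, mu_mm):
    """Half-value layer (mm of absorber) of a spectrum: the thickness that
    halves the air kerma, found by bisection on the attenuated kerma sum."""
    def kerma(t):
        return np.sum(fluence * kerma_weight * np.exp(-mu_mm * t))
    k0 = kerma(0.0)
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kerma(mid) > 0.5 * k0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Monoenergetic sanity check with an illustrative coefficient (mm^-1)
fluence = np.array([1.0])
weight = np.array([20.0])        # kerma per photon, arbitrary units
mu = np.array([1.386])
hvl = hvl_mm(fluence, weight, mu)
print(round(hvl, 3))             # ln(2)/1.386, about 0.5 mm
```

For a polychromatic corrected spectrum one simply supplies the per-bin fluence, kerma weights, and tabulated attenuation coefficients; beam hardening is then handled automatically by the energy sum.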
Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.
NASA Technical Reports Server (NTRS)
Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.
1972-01-01
The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.
NASA Astrophysics Data System (ADS)
Wang, Chao; Xiao, Jun; Luo, Xiaobing
2016-10-01
The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross section data were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
A weak-scattering model for turbine-tone haystacking
NASA Astrophysics Data System (ADS)
McAlpine, A.; Powles, C. J.; Tester, B. J.
2013-08-01
Noise and emissions are critical technical issues in the development of aircraft engines. This necessitates the development of accurate models to predict the noise radiated from aero-engines. Turbine tones radiated from the exhaust nozzle of a turbofan engine propagate through turbulent jet shear layers, which causes scattering of sound. In the far-field, measurements of the tones may exhibit spectral broadening, where owing to scattering, the tones are no longer narrow band peaks in the spectrum. This effect is known colloquially as 'haystacking'. In this article a comprehensive analytical model to predict spectral broadening for a tone radiated through a circular jet, for an observer in the far field, is presented. This model extends previous work by the authors which considered the prediction of spectral broadening at far-field observer locations outside the cone of silence. The modelling uses high-frequency asymptotic methods and a weak-scattering assumption. A realistic shear layer velocity profile and turbulence characteristics are included in the model. The mathematical formulation which details the spectral broadening, or haystacking, of a single-frequency, single azimuthal order turbine tone is outlined. In order to validate the model, predictions are compared with experimental results, albeit only at a polar angle equal to 90°. A range of source frequencies from 4 to 20 kHz, and jet velocities from 20 to 60 m s-1, are examined for validation purposes. The model correctly predicts how the spectral broadening is affected when the source frequency and jet velocity are varied.
Ionizing Collisions of Electrons with Radical Species OH, H2 O2 and HO2; Theoretical Calculations
NASA Astrophysics Data System (ADS)
Joshipura, K. N.; Pandya, S. H.; Vaishnav, B. G.; Patel, U. R.
2016-05-01
In this paper we present our calculated total ionization cross sections (TICS) for electron impact on the radical targets OH, H2O2 and HO2 at energies from threshold to 2000 eV. Reactive species such as these pose difficulties in measurements of electron scattering cross sections. No measured data have been reported in this regard except an isolated TICS measurement on the OH radical, and hence the present work on the title radicals holds significance. These radical species are present in environments in which water molecules undergo dissociation (neutral or ionic) in interactions with photons or electrons. The embedding environments can be quite diverse, ranging from our atmosphere to membranes of living cells. Ionization of OH, H2O2 or HO2 can give rise to further chemistry in the relevant bulk medium. Therefore, it is appropriate and meaningful to examine electron impact ionization of these radicals in comparison with that of water molecules, for which accurate data are available. For the OH target, single-centre scattering calculations are performed by starting with a 4-term complex potential that describes simultaneous elastic plus inelastic scattering. TICS are obtained from the total inelastic cross sections in the complex scattering potential - ionization contribution formalism, a well-established method. For the H2O2 and HO2 targets, we employ the additivity rule with overlap or screening corrections. Detailed results will be presented at the Conference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.
Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave-functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr where phase shifts δ_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_l for l > l_tr set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr + 1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max + 1)^2]. The augmented-KKR approach yields properly normalized wave-functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe and L1_0 CoPt, and present numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.
Improving Technology for Vascular Imaging
NASA Astrophysics Data System (ADS)
Rana, Raman
Neuro-endovascular image-guided interventions (neuro-EIGIs) are minimally invasive procedures that require microcatheters and endovascular devices to be inserted into the vasculature via an incision near the femoral artery and guided under low dose fluoroscopy to the vasculature of the head and neck. However, the endovascular devices used for this purpose are very small (stents are on the order of 50 μm to 100 μm), and the success of these EIGIs depends strongly on the accurate placement of these devices. To place the devices accurately inside the patient, the interventionalist must be able to see them clearly. Hence, high resolution capabilities are of immense importance in neuro-EIGIs. The high-resolution detectors, MAF-CCD and MAF-CMOS, at the Toshiba Stroke and Vascular Research Center at the University at Buffalo are capable of presenting improved images for better patient care. The focal spot of an x-ray tube plays an important role in the performance of these high resolution detectors. The finite size of the focal spot blurs the edges of the image of the object, reducing spatial resolution. Hence, accurate knowledge of the focal spot size of the x-ray tube is essential for evaluating total system performance. The importance of magnification and image detector blur deconvolution was demonstrated in order to carry out more accurate measurement of the x-ray focal spot using a pinhole camera. A 30 μm pinhole was used to obtain focal spot images with a flat panel detector (FPD), and different source-to-image distances (SIDs) were used to achieve different magnifications (3.16, 2.66 and 2.16). These focal spot images were deconvolved with a 2-D modulation transfer function (MTF), obtained using the noise response (NR) method, to remove the detector blur present in the images.
Using these corrected images, accurate sizes of all three focal spots were obtained, and it was also established that the effect of detector blur can be reduced significantly by using a higher magnification. As discussed earlier, interventionalists need higher resolution capabilities during EIGIs for more confident and successful treatment of the patient. An experimental MAF-CCD enabled with a Control, Acquisition, Processing, Image Display and Storage (CAPIDS) system was installed and aligned on a detector changer attached to the C-arm of a clinical angiographic unit. The CAPIDS system was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF, including fluoroscopy, roadmap, radiography, and digital-subtraction-angiography (DSA). Whenever higher resolution is needed, the MAF-CCD detector can be moved in front of the FPD. A particular set of steps was needed to deploy the MAF in front of the FPD and to transfer the controls to CAPIDS from the Toshiba systems. To minimize any possible negative impact of using two different detectors during a procedure, a well-designed workflow was developed that enables smooth deployment of the MAF at critical stages of clinical procedures. The images obtained using the MAF-CCD detector demonstrated the advantages high resolution imagers have over FPDs. Scatter is inevitable in x-ray imaging and reduces image quality. The benefit of removing the scatter is that it improves contrast and also increases the signal-to-noise ratio (SNR). There are various scatter reduction methods, such as air-gap techniques, collimation, moving anti-scatter grids, and stationary anti-scatter grids. Stationary anti-scatter grids are a preferred choice in dynamic imaging because of their compact design and ease of use.
However, when these anti-scatter grids are used with a high resolution detector, an anti-scatter grid-line pattern is present in the image as structured noise. Because of this grid-line artifact, the contrast-to-noise ratio (CNR) of the image decreases when a grid is used with a high resolution detector. To address this issue, a grid-line artifact minimization method for high resolution detectors was developed. (Abstract shortened by ProQuest.).
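The pinhole focal-spot measurement described above can be sketched with a simplified model: approximate the detector blur and focal-spot profiles as Gaussian, so the 2-D MTF deconvolution reduces to quadrature subtraction of widths, and divide by the pinhole magnification m = (SID - SOD)/SOD. This is a Gaussian stand-in for the full MTF deconvolution, and all distances and widths below are hypothetical.

```python
import numpy as np

def focal_spot_fwhm(image_fwhm_mm, detector_fwhm_mm, sid_mm, sod_mm):
    """Focal-spot FWHM from a pinhole image: remove detector blur in
    quadrature (Gaussian approximation of MTF deconvolution), then divide
    by the pinhole magnification m = (SID - SOD) / SOD."""
    m = (sid_mm - sod_mm) / sod_mm
    blur_free = np.sqrt(image_fwhm_mm**2 - detector_fwhm_mm**2)
    return blur_free / m

# Hypothetical geometry giving magnification M = SID/SOD = 3.16, so m = 2.16
fs = focal_spot_fwhm(0.70, 0.20, sid_mm=948.0, sod_mm=300.0)
print(round(fs, 3))   # about 0.311 mm
```

The model also shows why higher magnification helps, as the study establishes: the focal-spot image grows with m while the fixed detector blur does not, so the relative contribution of detector blur to the measured width shrinks.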
NASA Astrophysics Data System (ADS)
2017-11-01
To deal with these problems investigators usually rely on a calibration method that makes use of a substance with an accurately known set of interatomic distances. The procedure consists of carrying out a diffraction experiment on the chosen calibrating substance, determining the value of the distances with use of the nominal (meter) value of the voltage, and then correcting the nominal voltage by an amount that produces the distances in the calibration substance. Examples of gases that have been used for calibration are carbon dioxide, carbon tetrachloride, carbon disulfide, and benzene; solids such as zinc oxide smoke (powder) deposited on a screen or slit have also been used. The question implied by the use of any standard molecule is, how accurate are the interatomic distance values assigned to the standard? For example, a solid calibrant is subject to heating by the electron beam, possibly producing unknown changes in the lattice constants, and polyatomic gaseous molecules require corrections for vibrational averaging ("shrinkage") effects that are uncertain at best. It has lately been necessary for us to investigate this matter in connection with on-going studies of several molecules in which size is the most important issue. These studies indicated that our usual method for retrieval of data captured on film needed improvement. The following is an account of these two issues - the accuracy of the distances assigned to the chosen standard molecule, and the improvements in our methods of retrieving the scattered intensity data.
NASA Astrophysics Data System (ADS)
Chytyk-Praznik, Krista Joy
Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. 
The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.
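Under the parallel-kernel assumption examined in this thesis, the superposition of monoenergetic dose kernels reduces to a 2-D convolution of incident fluence with an EPID dose kernel. A minimal FFT-based sketch follows; the field size and the Gaussian kernel are hypothetical stand-ins for the Monte Carlo-generated a-Si EPID kernels.

```python
import numpy as np

def portal_dose(fluence, kernel):
    """Predicted portal dose: 2-D convolution of incident fluence with an
    EPID dose kernel (parallel-kernel assumption), computed via FFT."""
    F = np.fft.rfft2(fluence)
    K = np.fft.rfft2(np.fft.ifftshift(kernel))   # move kernel center to origin
    return np.fft.irfft2(F * K, s=fluence.shape)

# Hypothetical 64x64 open square field and a normalized Gaussian dose kernel
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
kernel /= kernel.sum()                           # unit-sum: dose is conserved
fluence = np.zeros((n, n))
fluence[16:48, 16:48] = 1.0
dose = portal_dose(fluence, kernel)
print(round(dose[32, 32], 3))                    # about 1.0 at the field center
```

Because the kernel is normalized, total signal is conserved and only the field edges are blurred, mirroring how lateral dose spread in the imager softens the fluence penumbra.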
NASA Astrophysics Data System (ADS)
Preissler, Natalie; Bierwagen, Oliver; Ramu, Ashok T.; Speck, James S.
2013-08-01
A comprehensive study of the room-temperature electrical and electrothermal transport of single-crystalline indium oxide (In2O3) and indium tin oxide (ITO) films over a wide range of electron concentrations is reported. We measured the room-temperature Hall mobility μH and Seebeck coefficient S of unintentionally doped and Sn-doped high-quality, plasma-assisted molecular-beam-epitaxy-grown In2O3 for volume Hall electron concentrations nH from 7×1016 cm-3 (unintentionally doped) to 1×1021 cm-3 (highly Sn-doped, ITO). The resulting empirical S(nH) relation can be directly used in other In2O3 samples to estimate the volume electron concentration from simple Seebeck coefficient measurements. The mobility and Seebeck coefficient were modeled by a numerical solution of the Boltzmann transport equation. Ionized impurity scattering and polar optical phonon scattering were found to be the dominant scattering mechanisms. Acoustic phonon scattering was found to be negligible. Fitting the temperature-dependent mobility above room temperature of an In2O3 film with high mobility allowed us to find the effective Debye temperature (ΘD=700 K) and number of phonon modes (NOPML=1.33) that best describe the polar optical phonon scattering. The modeling also yielded the Hall scattering factor rH as a function of electron concentration, which is not negligible (rH≈1.4) at nondegenerate electron concentrations. Fitting the Hall-scattering-factor corrected concentration-dependent Seebeck coefficient S(n) for nondegenerate samples to the numerical solution of the Boltzmann transport equation and to widely used, simplified equations allowed us to extract an effective electron mass of m*=(0.30±0.03)me (with free electron mass me). 
The modeled mobility and Seebeck coefficient based on polar optical phonon and ionized impurity scattering describes the experimental results very accurately up to electron concentrations of 1019 cm-3, and qualitatively explains a mobility plateau or local maximum around 1020 cm-3. Ionized impurity scattering with doubly charged donors best describes the mobility in our unintentionally doped films, consistent with oxygen vacancies as unintentional shallow donors, whereas singly charged donors best describe our Sn-doped films. Our modeling yields a (phonon-limited) maximum theoretical drift mobility and Hall mobility of μ=190 cm2/Vs and μH=270 cm2/Vs, respectively. Simplified equations for the Seebeck coefficient describe the measured values in the nondegenerate regime using a Seebeck scattering parameter of r=-0.55 (which is consistent with the determined Debye temperature), and provide an estimate of the Seebeck coefficient to lower electron concentrations. The simplified equations fail to describe the Seebeck coefficient around the Mott transition (nMott=5.5×1018 cm-3) from nondegenerate to degenerate electron concentrations, whereas the numerical modeling accurately describes this region.
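The "simplified equations" for the nondegenerate regime referred to above can be sketched with the standard Maxwell-Boltzmann expression S = -(kB/e)[r + 5/2 + ln(Nc/n)], using the effective mass m* = 0.30 me and scattering parameter r = -0.55 quoted in the abstract. The use of this particular closed form is an assumption of the sketch; as the abstract notes, it fails near and above the Mott transition.

```python
import numpy as np

kB = 1.380649e-23      # J/K
e = 1.602176634e-19    # C
h = 6.62607015e-34     # J s
me = 9.1093837015e-31  # kg

def seebeck_nondegenerate(n_cm3, m_eff=0.30 * me, T=300.0, r=-0.55):
    """Nondegenerate (Maxwell-Boltzmann) Seebeck coefficient in microvolt/K
    for n-type material: S = -(kB/e) * (r + 5/2 + ln(Nc/n))."""
    Nc = 2.0 * (2.0 * np.pi * m_eff * kB * T / h**2) ** 1.5   # m^-3
    n = n_cm3 * 1e6                                           # cm^-3 -> m^-3
    return -(kB / e) * (r + 2.5 + np.log(Nc / n)) * 1e6

s1 = seebeck_nondegenerate(1e17)
s2 = seebeck_nondegenerate(1e18)
print(round(s1, 1), round(s2, 1))   # |S| shrinks as n grows, S stays negative
```

Each decade of electron concentration changes S by (kB/e) ln 10, about 198 μV/K, which is the slope underlying the empirical S(nH) relation the study proposes for estimating n from Seebeck measurements.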
Spectral peculiarities of electromagnetic wave scattering by Veselago's cylinders
NASA Astrophysics Data System (ADS)
Sukhov, S. V.; Shevyakhov, N. S.
2006-03-01
The results of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material are presented. The inadequacy of previously developed scattering models is demonstrated. It is shown that a correct description of scattering requires separate consideration of both the electric and magnetic subsystems.
Spectral peculiarities of electromagnetic wave scattered by Veselago's cylinders
NASA Astrophysics Data System (ADS)
Sukhov, S. V.; Shevyakhov, N. S.
2005-09-01
The results of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material are presented. The inadequacy of previously developed scattering models is demonstrated. It is shown that a correct description of scattering requires separate consideration of both the electric and magnetic subsystems.
NASA Astrophysics Data System (ADS)
Ferrer, James
2004-10-01
The BONUS experiment at the Thomas Jefferson National Accelerator Facility aims to measure structure functions of the neutron via electron scattering. In order to overcome the unavailability of a neutron target, the BONUS collaboration will use a deuterium target. By detecting the recoil (spectator) proton in coincidence with the scattered electron, the kinematics of the electron-neutron interaction will be fully determined, thus overcoming theoretical complications that arise when extracting neutron cross sections. In order to detect low energy recoil protons, in the 70-100 MeV/c range, a (6 cm-radius) radial time projection chamber (RTPC) will be used. The BONUS RTPC is based on the gas electron multiplier (GEM) technology recently developed at CERN. One of the key components of this detector is the gas handling system, designed and built to deliver the correct mixture of gas to the detector safely, accurately, and reliably. The building and testing of this system is the major contribution of James Madison University to the BONUS collaboration. This poster provides a general overview of the BONUS detector, focusing on the gas handling system.
NASA Astrophysics Data System (ADS)
Berlich, R.; Harnisch, B.
2017-11-01
An accurate stray light analysis represents a crucial part of the early design phase of hyperspectral imaging systems, since scattering effects can severely limit the radiometric accuracy performance. In addition to conventional contributors, including ghost images and surface scattering caused by residual surface micro-roughness and particle contamination, diffraction effects can result in significant radiometric errors in the spatial and spectral domain of pushbroom scanners. In this paper, we present a mathematical approach that efficiently evaluates these diffraction effects based on a Fourier analysis. It is shown that considering only the conventional diffraction at the system's entrance pupil significantly overestimates the stray light contribution. In fact, a correct assessment necessitates taking into account the joint influence of the entrance pupil, the spectrometer slit and the dispersion element. We quantitatively investigate the corresponding impact on the Instrument Spectral Response Function (ISRF) of the Earth Explorer #8 Mission Candidate FLEX and analyse the expected radiometric error distribution for a typical earth observation scenario requirement.
NASA Astrophysics Data System (ADS)
Porter, J. M.; Jeffries, J. B.; Hanson, R. K.
2009-09-01
A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm^-1), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.
Fortmann, Carsten; Wierling, August; Röpke, Gerd
2010-02-01
The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of the impact of electron-ion collisions, as well as of electron-electron correlations due to degeneracy and short-range interaction, on the characteristics of the Thomson scattering signal. As an example, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to electron-electron correlations are observed with increasing Brueckner parameter r_s. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications of different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows non-invasive, in-depth molecular and conformational characterization of biological tissues. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensity by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Applied to our phantoms, this technique allowed their optical properties to be verified: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that steepens with increasing scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on that layer's own scattering property. Therefore, determining the optical properties of a biological sample, obtained with DRS for example, is crucial for properly correcting Raman depth profiles. A model inspired by S.L. Jacques's expression for Confocal Reflectance Microscopy, modified in several respects, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our reliable model, thus permitting quantitative studies for purposes of characterization or diagnosis.
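A minimal sketch of the kind of attenuation model described above: an exponential decline whose rate grows with the reduced scattering coefficient. The functional form and the constant b are illustrative assumptions, not the modified Jacques model actually fitted in the study:

```python
import numpy as np

def raman_depth_profile(z_mm, mu_s_prime_per_mm, i0=1.0, b=0.5):
    """Illustrative confocal Raman depth-profile attenuation: signal
    declines exponentially with depth, faster for larger reduced
    scattering coefficients. The shape and constant b are assumptions
    standing in for the modified Jacques expression of the abstract."""
    return i0 * np.exp(-b * mu_s_prime_per_mm * np.asarray(z_mm))
```

With a known reduced scattering coefficient, dividing a measured depth profile by such a model restores the in-depth peak intensities before quantitative comparison.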
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to account for the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
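The correction described above can be sketched as follows. The ~1% diffuse fraction is the magnitude quoted in the abstract; treating it as a constant multiplicative fraction of the measured signal is a simplification for illustration, and the function names are hypothetical:

```python
import numpy as np

def optical_depth(V, V0, airmass):
    """Bouguer (Beer-Lambert) optical depth from radiometer signal V,
    extraterrestrial calibration signal V0, and relative airmass."""
    return -np.log(V / V0) / airmass

def corrected_optical_depth(V, V0, airmass, diffuse_fraction=0.01):
    """Remove an assumed diffuse-sky contribution before applying
    Bouguer's law; ~1% corresponds to a small field of view (<5 deg)
    and optical depths below 0.4, per the abstract."""
    V_direct = V * (1.0 - diffuse_fraction)
    return -np.log(V_direct / V0) / airmass
```

Removing the diffuse contribution slightly increases the retrieved optical depth, since less of the measured signal is attributed to the direct beam.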
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector of the UoB gamma-ray tomography system.
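The library least-squares idea can be sketched with synthetic spectra: express a measured detector spectrum as a linear combination of pure-component library spectra and solve for the coefficients. The library shapes below are hypothetical stand-ins, not the UoB system's measured libraries:

```python
import numpy as np

rng = np.random.default_rng(0)
channels = np.arange(64)

# Hypothetical library spectra (counts per channel) for the pure
# components; real libraries would be measured or simulated for the
# actual tomograph geometry.
lib_transmission = np.exp(-0.5 * ((channels - 50) / 3.0) ** 2)  # photopeak
lib_scatter = np.exp(-channels / 20.0)                          # continuum

A = np.column_stack([lib_transmission, lib_scatter])

# A "measured" spectrum: 70% transmission + 30% scatter, plus noise
measured = 0.7 * lib_transmission + 0.3 * lib_scatter
measured += rng.normal(0.0, 0.005, measured.size)

# Least-squares estimate of each component's contribution
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
transmission_frac, scatter_frac = coeffs
```

The fitted coefficients recover the transmission and scatter amounts; subtracting the estimated scatter component then yields a corrected transmission measurement for reconstruction.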
NASA Astrophysics Data System (ADS)
Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu
2001-10-01
We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by the direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters of k_B T_e ≈ 15 keV, the ratio of the two contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution is safely neglected for the observed galaxy clusters.
Dynamical scattering in coherent hard x-ray nanobeam Bragg diffraction
NASA Astrophysics Data System (ADS)
Pateras, A.; Park, J.; Ahn, Y.; Tilka, J. A.; Holt, M. V.; Kim, H.; Mawst, L. J.; Evans, P. G.
2018-06-01
Unique intensity features arising from dynamical diffraction appear in coherent x-ray nanobeam diffraction patterns of crystals that are thicker than the x-ray extinction depth or that exhibit combinations of nanoscale and mesoscale features. We demonstrate that dynamical scattering effects can be accurately predicted using an optical model combined with the Darwin theory of dynamical x-ray diffraction. The model includes the highly divergent coherent x-ray nanobeams produced by Fresnel zone plate focusing optics and accounts for primary extinction, multiple scattering, and absorption. The simulation accurately reproduces the dynamical scattering features of experimental diffraction patterns acquired from a GaAs/AlGaAs epitaxial heterostructure on a GaAs (001) substrate.
NASA Astrophysics Data System (ADS)
Erhard, M.; Junghans, A. R.; Nair, C.; Schwengner, R.; Beyer, R.; Klug, J.; Kosev, K.; Wagner, A.; Grosse, E.
2010-03-01
Two methods based on bremsstrahlung were applied to the stable even Mo isotopes for the experimental determination of the photon strength function covering the high excitation energy range above 4 MeV with its increasing level density. Photon scattering was used up to the neutron separation energies Sn and data up to the maximum of the isovector giant dipole resonance (GDR) were obtained by photoactivation. After a proper correction for multistep processes the observed quasicontinuous spectra of scattered photons show a remarkably good match to the photon strengths derived from nuclear photoeffect data obtained previously by neutron detection and corrected in absolute scale by using the new activation results. The combined data form an excellent basis to derive a shape dependence of the E1 strength in the even Mo isotopes with increasing deviation from the N=50 neutron shell (i.e., with the impact of quadrupole deformation and triaxiality). The wide energy coverage of the data allows for a stringent assessment of the dipole sum rule and a test of a novel parametrization developed previously which is based on it. This parametrization for the electric dipole strength function in nuclei with A>80 deviates significantly from prescriptions generally used previously. In astrophysical network calculations it may help to quantify the role the p-process plays in cosmic nucleosynthesis. It also has impact on the accurate analysis of neutron capture data of importance for future nuclear energy systems and waste transmutation.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. 
A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
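A minimal sketch of the per-iteration intensity-mapping step: estimate a CBCT-to-CT intensity relationship from currently overlapping voxels and apply it before the next registration update. The published method estimates tissue-specific relationships inside the Demons loop; the simple histogram-mean mapping, names, and binning below are illustrative assumptions:

```python
import numpy as np

def estimate_intensity_map(ct, cbct, mask, n_bins=64):
    """Estimate a CBCT -> CT intensity mapping from overlapping voxels
    (simplified histogram-mean version of the iterative intensity
    matching idea; not the published tissue-class implementation)."""
    c, t = cbct[mask], ct[mask]
    edges = np.linspace(c.min(), c.max(), n_bins + 1)
    idx = np.clip(np.digitize(c, edges) - 1, 0, n_bins - 1)
    mapping = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            mapping[b] = t[sel].mean()   # mean CT value in this CBCT bin
    valid = np.flatnonzero(~np.isnan(mapping))
    mapping = np.interp(np.arange(n_bins), valid, mapping[valid])
    return edges, mapping

def apply_intensity_map(cbct, edges, mapping):
    """Map CBCT intensities into the CT intensity range."""
    idx = np.clip(np.digitize(cbct, edges) - 1, 0, mapping.size - 1)
    return mapping[idx]
```

In an iterative scheme, the mapping would be re-estimated each iteration as the overlap improves, so the intensity correction and the deformation estimate converge together.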
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Valenti, M.
2009-12-01
Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. 
Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process uses a point-spread function for each filter that determines the amount of scattered light expected at a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light from an image taken through a particular filter, the image is deconvolved with that filter's point-spread function: in the frequency domain, the Fourier transform of the image is divided by the Fourier transform of the point-spread function. The result is filtered for noise in the frequency domain and then transformed back to the spatial domain. This yields a version of the original image approximating what would have been recorded without the scattered light contribution. We will report on our initial results from this calibration.
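The frequency-domain removal step can be sketched as a regularized (Wiener-style) deconvolution. This illustrates the principle only and is not the actual Galileo SSI calibration code; the regularization constant and function name are assumptions:

```python
import numpy as np

def remove_scattered_light(image, psf, noise_reg=1e-3):
    """Frequency-domain removal of a per-filter scattered-light PSF.
    Illustrative Wiener-regularized deconvolution: divide by the
    transfer function while damping noise-dominated frequencies.
    psf must be centered and the same shape as image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_reg)
    return np.real(np.fft.ifft2(F))
```

The `noise_reg` term plays the role of the noise filtering mentioned in the abstract: a plain division by the transfer function would amplify noise wherever the PSF's spectrum is small.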
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. 
The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
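The quantities being tallied can be illustrated with a single-group analog Legendre-moment tally over sampled scattering cosines. NDPP/OpenMC instead tally pre-integrated outgoing distributions for every group-to-group transfer; this sketch only conveys what a scattering moment is, and the function name is illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def tally_scattering_moments(mu, weights, n_moments=4):
    """Analog estimate of Legendre scattering moments <P_l(mu)> from
    sampled scattering cosines mu with particle weights. A simplified
    single-group illustration of the quantities that scattering moment
    matrix tallies accumulate."""
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(mu, dtype=float)
    moments = np.empty(n_moments)
    for ell in range(n_moments):
        c = np.zeros(ell + 1)
        c[ell] = 1.0                      # select the P_ell polynomial
        moments[ell] = np.sum(w * legval(mu, c)) / w.sum()
    return moments
```

For isotropic scattering every moment above l = 0 averages to zero, but only slowly with sample count, which is the statistical inefficiency the pre-integrated tallying scheme addresses.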
Generalization of the Hartree-Fock approach to collision processes
NASA Astrophysics Data System (ADS)
Hahn, Yukap
1997-06-01
The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.
Topographic correction realization based on the CBERS-02B image
NASA Astrophysics Data System (ADS)
Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua
2011-08-01
The special topography of mountain terrain induces retrieval distortions within a single land-cover class and in surface spectral lines. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30-meter spatial resolution can be easily matched by DEM data obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as Quickbird and Ikonos, but there is little comparable research on the topographic correction of CBERS-02B images. In this study, mountain terrain in Liaoning was taken as the study area. The 15-meter original digital elevation model data were interpolated step by step down to 2.36 meters. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. For each method, scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were plotted, and the mean value, standard variance, slope of the scatter diagram, and a separation factor were calculated. The analysis shows that shadows are weakened in the corrected images relative to the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method gives the most effective result. These results demonstrate that the established correction methods can be successfully adapted to CBERS-02B images. 
The DEM data can be interpolated step by step to approximate the required spatial resolution when high-resolution elevation data are hard to obtain.
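The C correction evaluated in the study can be sketched as follows. The regression-based factor c = a/b and the correction formula are the standard C-correction form; the variable names are illustrative:

```python
import numpy as np

def c_correction(dn, cos_i, cos_sz):
    """C topographic correction:
        DN_corr = DN * (cos(sz) + c) / (cos(i) + c),
    where c = a/b from the linear regression DN = a + b*cos(i);
    cos_i is the cosine of the solar incidence angle with respect to
    the surface normal, cos_sz the cosine of the solar zenith angle
    (the flat-terrain reference)."""
    b, a = np.polyfit(cos_i, dn, 1)   # slope b, intercept a
    c = a / b
    return dn * (cos_sz + c) / (cos_i + c)
```

On a scene whose brightness is purely illumination-driven, the correction flattens the DN-versus-cos(i) scatter diagram, which is exactly the reduced slope used as an evaluation metric in the study.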
On the far-field computation of acoustic radiation forces.
Martin, P A
2017-10-01
It is known that the steady acoustic radiation force on a scatterer due to incident time-harmonic waves can be calculated by evaluating certain integrals of velocity potentials over a sphere surrounding the scatterer. The goal is to evaluate these integrals using far-field approximations and appropriate limits. Previous derivations are corrected, clarified, and generalized. Similar corrections are made to textbook derivations of optical theorems.
Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility
NASA Astrophysics Data System (ADS)
Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu
2013-11-01
Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be addressed to improve image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated with the Monte Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. The analysis results demonstrate the effectiveness of the two corrections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abelof, Gabriel; Boughezal, Radja; Liu, Xiaohui
2016-10-17
We compute the O(α_s^2) perturbative corrections to inclusive jet production in electron-nucleon collisions. This process is of particular interest to the physics program of a future Electron Ion Collider (EIC). We include all relevant partonic processes, including deep-inelastic scattering contributions, photon-initiated corrections, and parton-parton scattering terms that first appear at this order. Upon integration over the final-state hadronic phase space we validate our results for the deep-inelastic corrections against the known next-to-next-to-leading order (NNLO) structure functions. Our calculation uses the N-jettiness subtraction scheme for performing higher-order computations, and allows for a completely differential description of the deep-inelastic scattering process. We describe the application of this method to inclusive jet production in detail, and present phenomenological results for the proposed EIC. The NNLO corrections have a non-trivial dependence on the jet kinematics and arise from an intricate interplay between all contributing partonic channels.
[Atmospheric correction of HJ-1 CCD data for water imagery based on dark object model].
Zhou, Li-Guo; Ma, Wei-Chun; Gu, Wan-Hua; Huai, Hong-Yan
2011-08-01
The CCD multi-band data of HJ-1A has great potential for inland water quality monitoring, but accurate atmospheric correction is a prerequisite for its application. In this paper, a dark-pixel-based method for retrieving water-leaving radiance is proposed. Besides Rayleigh scattering, aerosol scattering is important to atmospheric correction; inland lakes are typically case II waters, for which the water-leaving radiance is not zero. Synchronous MODIS shortwave infrared data were therefore used to obtain the aerosol parameters, and by virtue of the relative stability of aerosol scattering at 560 nm, the water-leaving radiance for each visible and near-infrared band was retrieved and normalized, from which the remotely sensed reflectance of the water was computed. The results show that this atmospheric correction method based on the imagery itself is effective for the retrieval of water parameters from HJ-1A CCD data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawke, J.; Scannell, R.; Maslov, M.
2013-10-15
This work isolated the cause of the observed discrepancy between the electron temperature (Te) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions due to the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed Te, resulting in the partial if not complete removal of the observed discrepancy in the measured Te between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.
Multiple scattering in the remote sensing of natural surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wen-Hao; Weeks, R.; Gillespie, A.R.
1996-07-01
Radiosity models predict the amount of light scattered many times (multiple scattering) among scene elements in addition to light interacting with a surface only once (direct reflectance). Such models are little used in remote sensing studies because they require accurate digital terrain models and, typically, large amounts of computer time. We have developed a practical radiosity model that runs relatively quickly within suitable accuracy limits, and have used it to explore problems caused by multiple scattering in image calibration, terrain correction, and surface roughness estimation for optical images. We applied the radiosity model to real topographic surfaces sampled at two very different spatial scales: 30 m (rugged mountains) and 1 cm (cobbles and gravel on an alluvial fan). The magnitude of the multiple-scattering (MS) effect varies with solar illumination geometry, surface reflectivity, sky illumination and surface roughness. At the coarse scale, for typical illumination geometries, as much as 20% of the image can be significantly affected (>5%) by MS, which can account for as much as ~10% of the radiance from sunlit slopes, and much more for shadowed slopes, otherwise illuminated only by skylight. At the fine scale, radiance from as much as 30-40% of the scene can have a significant MS component, and the MS contribution is locally as high as ~70%, although integrating to the meter scale reduces this limit to ~10%. Because the amount of MS increases with reflectivity as well as roughness, MS effects will distort the shape of reflectance spectra as well as changing their overall amplitude. The change is proportional to surface roughness. Our results have significant implications for determining reflectivity and surface roughness in remote sensing.
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the over-determined problem of recovering each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method obtains more accurate amplitude information over a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Septal penetration correction in I-131 imaging following thyroid cancer treatment
NASA Astrophysics Data System (ADS)
Barrack, Fiona; Scuffham, James; McQuaid, Sarah
2018-04-01
Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: this work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
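The deconvolution step above can be sketched with the classic Richardson–Lucy iteration. This toy 1D version omits the damping parameter λ and uses a synthetic Gaussian PSF rather than the measured asymmetric PSF from the paper, so it illustrates only the multiplicative update itself:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=15, eps=1e-12):
    """Undamped Richardson-Lucy deconvolution (1D sketch).

    The paper's damped variant adds a damping parameter (lambda) to limit
    noise amplification; that refinement is omitted here for clarity."""
    psf = psf / psf.sum()                    # PSF must be normalised
    psf_mirror = psf[::-1]                   # flipped PSF for the correction step
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demonstration: a point source blurred by a Gaussian PSF is resharpened.
truth = np.zeros(64)
truth[32] = 100.0
x = np.arange(-4, 5)
psf = np.exp(-0.5 * (x / 1.5) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf, iterations=50)
```

Note that the multiplicative update preserves total counts, which is why the method suits quantitative imaging: the spoke counts are redistributed back toward the source rather than discarded.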
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Surface areas of fractally rough particles studied by scattering
NASA Astrophysics Data System (ADS)
Hurd, Alan J.; Schaefer, Dale W.; Smith, Douglas M.; Ross, Steven B.; Le Méhauté, Alain; Spooner, Steven
1989-05-01
The small-angle scattering from fractally rough surfaces has the potential to give information on the surface area at a given resolution. By use of quantitative neutron and x-ray scattering, a direct comparison of surface areas of fractally rough powders was made between scattering and adsorption techniques. This study supports a recently proposed correction to the theory for scattering from fractal surfaces. In addition, the scattering data provide an independent calibration of molecular adsorbate areas.
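For a surface fractal, the small-angle intensity falls off as a power law, I(q) ∝ q^−(6−Ds), with surface fractal dimension 2 ≤ Ds < 3, so Ds follows directly from a log-log slope. A sketch on noise-free synthetic data (the q range is an assumption for illustration, not a value from the paper):

```python
import numpy as np

# For a fractally rough surface, small-angle scattering follows a power law
# I(q) ~ q**-(6 - Ds), where Ds is the surface fractal dimension (2 <= Ds < 3).
# Sketch: recover Ds from synthetic intensity data by a log-log fit.
def surface_fractal_dimension(q, intensity):
    slope, _ = np.polyfit(np.log(q), np.log(intensity), 1)
    return 6.0 + slope            # slope = -(6 - Ds)

q = np.logspace(-2, -1, 50)       # probe wavevectors; assumed range
true_ds = 2.5
intensity = q ** -(6 - true_ds)   # noise-free synthetic power-law data
```

A smooth (Euclidean) surface gives Ds = 2 and recovers the familiar Porod q⁻⁴ law as a special case.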
NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because visible bands of shorter wavelength are highly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of the paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that at least a few pixels within an image should be black (0% reflectance); such dark objects are clear water bodies and shadows whose DN values are zero (0) or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas Improved Dark Object Subtraction corrects the haze in terms of atmospheric scattering and path radiance based on a power law for the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (Band 2), red band (Band 3) and NIR band (Band 4) are 40, 34 and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
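The simple-DOS step described above amounts to a per-band constant subtraction. A minimal sketch using the green-band haze value quoted in the abstract on a made-up 2×2 image:

```python
import numpy as np

# Simple Dark Object Subtraction: subtract the band's haze DN (taken from the
# darkest pixels, e.g. clear water or shadow) from every pixel, clipping at
# zero. The haze value 40 is the green-band figure from the abstract; the
# tiny "image" is invented for illustration.
def dark_object_subtract(band, haze_dn):
    return np.clip(band.astype(float) - haze_dn, 0.0, None)

green = np.array([[40, 100], [39, 255]])      # raw DNs; 39-40 ~ dark objects
corrected = dark_object_subtract(green, 40)   # dark pixels -> 0, rest shifted
```

The improved method differs only in how the per-band haze constants are chosen (predicted from one starting band rather than picked independently); the subtraction itself is identical.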
Sensitivity estimation in time-of-flight list-mode positron emission tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herraiz, J. L.; Sitek, A., E-mail: sarkadiu@gmail.com
Purpose: An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body, and possible variations of the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. Methods: The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. Results: The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. Conclusions: A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
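The light obscuration rescaling can be pictured as follows: the instrument infers a diameter from the extinction cross-section σ = Qext·π·d²/4 using the calibration beads' extinction efficiency, so a particle with a different Qext (e.g. a loose protein aggregate with refractive index close to water's) is mis-sized by a factor of √(Qext,cal/Qext,particle). This is a schematic reading of the approach, not the paper's scattering model, and the efficiency values below are invented:

```python
import math

# Hypothetical diameter correction, assuming the instrument maps extinction
# cross-section to diameter via the calibration-bead efficiency:
#   sigma = Q * pi * d**2 / 4  =>  d_true = d_reported * sqrt(Q_cal / Q_particle)
# Efficiency values are illustrative only, not from the paper.
def correct_diameter(d_reported_um, q_calibration, q_particle):
    return d_reported_um * math.sqrt(q_calibration / q_particle)

# A low-contrast protein aggregate scatters weakly (small Q), so light
# obscuration under-sizes it and the correction enlarges the diameter.
d_corr = correct_diameter(2.0, q_calibration=2.0, q_particle=0.5)
```

Since particle counts are steep functions of the diameter threshold, even a modest diameter rescaling of this kind can move reported concentrations by the large factors quoted in the abstract.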
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2017-03-01
Cone-beam CT (CBCT) images are routinely acquired to verify patient position in radiotherapy (RT), but are typically not calibrated in Hounsfield Units (HU) and feature non-uniformity due to X-ray scatter and detector persistence effects. This prevents direct use of CBCT for re-calculation of RT delivered dose. We previously developed a prior-image based correction method to restore HU values and improve uniformity of CBCT images. Here we validate the accuracy with which corrected CBCT can be used for dosimetric assessment of RT delivery, using CBCT images and RT plans for 45 patients including pelvis, lung and head sites. Dose distributions were calculated based on each patient's original RT plan and using CBCT image values for tissue heterogeneity correction. Clinically relevant dose metrics were calculated (e.g. median and minimum target dose, maximum organ at risk dose). Accuracy of CBCT based dose metrics was determined using an "override ratio" method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the image is assumed to be constant for each patient, allowing comparison to "gold standard" CT. For pelvis and head images the proportion of dose errors >2% was reduced from 40% to 1.3% after applying shading correction. For lung images the proportion of dose errors >3% was reduced from 66% to 2.2%. Application of shading correction to CBCT images greatly improves their utility for dosimetric assessment of RT delivery, allowing high confidence that CBCT dose calculations are accurate within 2-3%.
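The override-ratio check reduces to one line of arithmetic: compare the CBCT-to-override dose-metric ratio with the planning-CT-to-override ratio, which is assumed patient-constant. The dose numbers below are illustrative, not from the validation:

```python
# "Override ratio" validation sketch: the ratio of a dose metric computed on
# an image to the same metric on a bulk-density-assigned copy of that image is
# assumed constant per patient, so CBCT accuracy can be judged against the
# gold-standard CT without identical anatomy. Doses (Gy) are invented.
def override_ratio_error(dose_cbct, dose_cbct_override, dose_ct, dose_ct_override):
    cbct_ratio = dose_cbct / dose_cbct_override
    ct_ratio = dose_ct / dose_ct_override
    return (cbct_ratio / ct_ratio - 1.0) * 100.0   # % dose-metric error

err = override_ratio_error(50.4, 51.0, 49.9, 50.5)
```

In the paper's terms, a value of err within ±2% (pelvis/head) or ±3% (lung) would count the CBCT-based metric as accurate.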
Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki
2016-02-01
Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in 18F-fluorodeoxyglucose (18F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from the 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10 and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.
Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL
He, Lilin; Do, Changwoo; Qian, Shuo; ...
2014-11-25
Small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor were upgraded in area detectors from the large, single-volume crossed-wire detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that the approaches traditionally employed for data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed using the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will be increasingly common as instruments having higher incident flux are constructed at neutron scattering facilities around the world.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and a smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point of care, such as in emergency, ambulance, sports, and military settings. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
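The statistical backbone of PWLS, stripped of the imaging geometry and regularizer, is weighted least squares with inverse-variance weights; the point of the paper's noise modeling is that those variances must be updated after scatter and beam-hardening corrections (e.g. var(c·y) = c²·var(y) for a scaling correction), or noisy measurements are over-trusted. A toy 1D fit with invented numbers:

```python
import numpy as np

# Minimal PWLS illustration: weighted least squares with weights equal to
# inverse (post-correction) variances. Toy 1D line fit, not the paper's
# reconstruction; all numbers are invented.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # design matrix
y = np.array([1.0, 2.1, 2.9])                        # corrected measurements
var = np.array([0.1, 0.1, 0.4])                      # post-correction variances
W = np.diag(1.0 / var)                               # inverse-variance weights
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)        # PWLS estimate (no prior)
```

The third measurement, whose variance was amplified by the (hypothetical) correction, is down-weighted by a factor of four relative to the others, which is exactly the mechanism the paper exploits volumetrically.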
Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Wen, Guoyong
2014-01-01
Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.
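One simplified reading of the combined approach: if the cloud-adjacency enhancement approximately cancels in the two-wavelength reflectance ratio, then a known 3D correction at the shortest wavelength transfers multiplicatively to a longer one. The reflectance values below are invented, and this compresses the paper's modified ratio method to its bare arithmetic:

```python
# Ratio-method sketch: assume the adjacency enhancement affects the spectral
# ratio much less than the reflectances themselves, so
#   rho_true(long) ~= rho_meas(long) * rho_true(short) / rho_meas(short).
# Reflectance values are illustrative only.
def extend_correction(meas_short, corr_short, meas_long):
    return meas_long * (corr_short / meas_short)

rho_long_corrected = extend_correction(0.12, 0.10, 0.08)
```

The second stated result, that the long-wavelength correction is insensitive to unbiased random errors at the short wavelength, follows from this form: an error in corr_short propagates only linearly and averages out over many pixels.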
NASA Astrophysics Data System (ADS)
Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.
2017-12-01
The ice cloud single-scattering properties can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results for random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized for random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/(2π), where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed using the parallelized IITM and compared to the counterparts from the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over the complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation now reaches the geometric regime, the IITM and the PGOM can be employed together to efficiently and accurately compute the single-scattering properties of ice clouds over a wide spectral range.
Optical-model potential for electron and positron elastic scattering by atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvat, Francesc
2003-07-01
An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkur approximation and the Lindhard dielectric function to describe the binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from ~100 eV up to ~5 keV. At higher energies, correlation-polarization and absorption corrections are small and the usual static-exchange approximation is sufficiently accurate for most practical purposes.
NASA Astrophysics Data System (ADS)
Wuhrer, R.; Moran, K.
2014-03-01
Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably, with the ability to map minor and trace elements very accurately thanks to larger detector areas and higher count-rate detectors. Live X-ray imaging can now be performed, with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps, including elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back-scatter coefficient (η), and quantitative total maps from each pixel in the image, giving an image corresponding to each factor (for each element present). These images allow the user to predict and verify where problems are likely to occur in the images, and are especially helpful for examining possible interface artefacts. The post-processing techniques used to improve the quantitation of X-ray map data and to achieve improved characterisation are covered in this paper.
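Two of the simpler map products mentioned above, elemental ratio mapping and the point cloud behind an elemental-relationship scatter diagram, reduce to per-pixel arithmetic on the count maps. The elements and counts below are invented for illustration:

```python
import numpy as np

# Per-pixel map products on synthetic count maps. Element names and counts
# are illustrative, not from the paper.
fe = np.array([[120, 80], [60, 200]], dtype=float)   # Fe X-ray counts per pixel
ni = np.array([[40, 40], [30, 50]], dtype=float)     # Ni X-ray counts per pixel

ratio_map = fe / np.maximum(ni, 1)                   # elemental ratio mapping
scatter_pairs = np.column_stack([fe.ravel(), ni.ravel()])  # scatter-diagram points
```

Clusters in the (Fe, Ni) scatter plane correspond to chemical phases, which is the basis of the chemical phase mapping (CPM) mentioned in the abstract.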
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Precision determination of electron scattering angle by differential nuclear recoil energy method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liyanage, N.; Saenboonruang, K.
2015-12-01
The accurate determination of the scattered electron angle is crucial to electron scattering experiments, both with open-geometry large-acceptance spectrometers and ones with dipole-type magnetic spectrometers for electron detection. In particular, for small central-angle experiments using dipole-type magnetic spectrometers, in which surveys are used to measure the spectrometer angle with respect to the primary electron beam, the importance of the scattering angle determination is emphasized. However, given the complexities of large experiments and spectrometers, the accuracy of such surveys is limited and insufficient to meet demands of some experiments. In this article, we present a new technique for determination of the electron scattering angle based on an accurate measurement of the primary beam energy and the principle of differential nuclear recoil. This technique was used to determine the scattering angle for several experiments carried out at the Experimental Hall A, Jefferson Lab. Results have shown that the new technique greatly improved the accuracy of the angle determination compared to surveys.
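The principle can be made concrete with the elastic-scattering kinematics for an ultrarelativistic beam, E′ = E / (1 + (2E/M) sin²(θ/2)): two target nuclei of different mass recoil differently, so the measured difference in scattered-electron energy at the same angle pins down θ. This is a schematic of the idea with illustrative numbers, not the Hall A analysis:

```python
import math

# Elastic scattering off a nucleus of mass M (ultrarelativistic electron):
#   E' = E / (1 + (2E/M) * sin^2(theta/2))
# The scattered-energy *difference* between a light and a heavy target grows
# monotonically with theta, so it can be inverted for the angle. Values in MeV;
# the beam energy and target choice are illustrative.
def scattered_energy(E, M, theta):
    return E / (1.0 + (2.0 * E / M) * math.sin(theta / 2.0) ** 2)

def angle_from_energy_difference(E, M1, M2, delta_E, lo=1e-4, hi=math.pi / 2):
    # bisection: the difference increases monotonically with theta for M1 < M2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        d = scattered_energy(E, M2, mid) - scattered_energy(E, M1, mid)
        lo, hi = (mid, hi) if d < delta_E else (lo, mid)
    return 0.5 * (lo + hi)

E = 2000.0                    # beam energy, MeV (illustrative)
M_H, M_C = 938.3, 11178.0     # proton and carbon-12 masses, MeV
theta_true = math.radians(6.0)
dE = scattered_energy(E, M_C, theta_true) - scattered_energy(E, M_H, theta_true)
theta = angle_from_energy_difference(E, M_H, M_C, dE)
```

Because the method needs only the beam energy and two peak positions in the same spectrometer setting, systematic survey errors largely cancel, which is the source of the improved accuracy claimed above.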
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
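The improved dark-object subtraction described above can be sketched as follows. This is a minimal illustration, assuming a simple wavelength-power-law relative scattering model; the band wavelengths, gains, and offsets below are purely hypothetical, not the actual Landsat calibration values.

```python
import numpy as np

def predict_haze(start_band, start_haze, wavelengths, gains, offsets, power=-4.0):
    """Predict per-band haze DN values from a single starting-band haze value."""
    # Convert the starting-band haze DN to a relative radiance, removing
    # that band's sensor gain and offset.
    start_rad = (start_haze - offsets[start_band]) / gains[start_band]
    haze = {}
    for band, wl in wavelengths.items():
        # Relative scattering model: scattered radiance scales as wavelength**power
        # (power near -4 for a very clear, Rayleigh-dominated atmosphere).
        rel = (wl / wavelengths[start_band]) ** power
        # Back to DN using this band's own gain and offset.
        haze[band] = start_rad * rel * gains[band] + offsets[band]
    return haze

# Illustrative band centers (micrometres), unit gains, zero offsets.
wavelengths = {1: 0.485, 2: 0.56, 3: 0.66, 4: 0.83}
gains = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
offsets = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
haze = predict_haze(1, 40.0, wavelengths, gains, offsets)
```

Because all bands' haze values derive from one starting value through the same scattering model, the predictions stay mutually consistent instead of being picked independently per band.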
Effect of molecular anisotropy on beam scattering measurements
NASA Technical Reports Server (NTRS)
Goldflam, R.; Green, S.; Kouri, D. J.; Monchick, L.
1978-01-01
Within the energy sudden approximation, the total integral and total differential scattering cross sections are given by the angle average of scattering cross sections computed at fixed rotor orientations. Using this formalism the effect of molecular anisotropy on scattering of He by HCl and by CO is examined. Comparisons with accurate close coupling calculations indicate that this approximation is quite reliable, even at very low collision energies, for both of these systems. Comparisons are also made with predictions based on the spherical average of the interaction. For HCl the anisotropy is rather weak and its main effect is a slight quenching of the oscillations in the differential cross sections relative to predictions of the spherical averaged potential. For CO the anisotropy is much stronger, so that the oscillatory pattern is strongly quenched and somewhat shifted. It appears that the sudden approximation provides a simple yet accurate method for describing the effect of molecular anisotropy on scattering measurements.
New Treatment of Strongly Anisotropic Scattering Phase Functions: The Delta-M+ Method
NASA Astrophysics Data System (ADS)
Stamnes, K. H.; Lin, Z.; Chen, N.; Fan, Y.; Li, W.; Stamnes, S.
2017-12-01
The treatment of strongly anisotropic scattering phase functions is still a challenge for accurate radiance computations. The new Delta-M+ method resolves this problem by introducing a reliable, fast, accurate, and easy-to-use Legendre expansion of the scattering phase function with modified moments. Delta-M+ is an upgrade of the widely-used Delta-M method that truncates the forward scattering cone into a Dirac-delta-function (a direct beam), where the + symbol indicates that it essentially matches moments above the first 2M terms. Compared with the original Delta-M method, Delta-M+ has the same computational efficiency, but the accuracy has been increased dramatically. Tests show that the errors for strongly forward-peaked scattering phase functions are greatly reduced. Furthermore, the accuracy and stability of radiance computations are also significantly improved by applying the new Delta-M+ method.
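The classic delta-M moment scaling that Delta-M+ builds on can be sketched in a few lines. This reproduces only the original delta-M truncation (Wiscombe's scheme), not the modified moment matching of Delta-M+, and uses Henyey-Greenstein moments as an illustrative phase function.

```python
import numpy as np

def delta_m(moments, M):
    """Classic delta-M truncation of Legendre phase-function moments.

    moments[l] = chi_l with chi_0 = 1. The truncated forward peak is
    represented by a delta function of fraction f = chi_M, and the
    remaining moments are rescaled: chi'_l = (chi_l - f) / (1 - f).
    """
    f = moments[M]
    return (moments[:M] - f) / (1.0 - f)

# Henyey-Greenstein moments chi_l = g**l as a strongly forward-peaked example.
g = 0.9
moments = g ** np.arange(0, 33)
scaled = delta_m(moments, M=8)
```

Delta-M+ keeps this truncation idea but additionally matches moments beyond the first 2M terms, which is what restores accuracy for strongly forward-peaked phase functions.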
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for odd-bounce, double-bounce and volume scattering. We estimate the topographic phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the purely random volume scattering proposed by Freeman-Durden, to better fit the measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
Scattering of Femtosecond Laser Pulses on the Negative Hydrogen Ion
NASA Astrophysics Data System (ADS)
Astapenko, V. A.; Moroz, N. N.
2018-05-01
Elastic scattering of ultrashort laser pulses (USLPs) on the negative hydrogen ion is considered. Results of calculations of the USLP scattering probability are presented and analyzed for pulses of two types: the corrected Gaussian pulse and wavelet pulse without carrier frequency depending on the problem parameters.
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media have been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
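The interleaved grouping at the heart of the ISC method can be sketched as follows. For brevity, a per-segment candidate search stands in for the genetic algorithm, and the scattering medium is reduced to one unknown phase per SLM segment; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 64
n_groups = 4
# Unknown phase scrambling introduced by the medium, one value per segment.
true_phase = rng.uniform(0.0, 2.0 * np.pi, n_segments)

def focus_intensity(phase):
    """Intensity at the focus when all segment fields add coherently."""
    return abs(np.exp(1j * (phase - true_phase)).sum()) ** 2

phase = np.zeros(n_segments)
candidates = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)

for g in range(n_groups):
    group = np.arange(g, n_segments, n_groups)  # interleaved group of segments
    for i in group:
        # Candidate search (stand-in for the GA): keep the phase value that
        # maximizes the focus intensity with all other segments held fixed.
        trial = phase.copy()
        best, best_val = phase[i], focus_intensity(phase)
        for c in candidates:
            trial[i] = c
            val = focus_intensity(trial)
            if val > best_val:
                best, best_val = c, val
        phase[i] = best

final = focus_intensity(phase)
```

Optimizing the interleaved groups sequentially keeps each sub-problem small while the accumulated correction from earlier groups raises the signal, which is why the improvement factor keeps climbing instead of plateauing.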
Berger, Edmond L; Gao, Jun; Li, Chong Sheng; Liu, Ze Long; Zhu, Hua Xing
2016-05-27
We present a fully differential next-to-next-to-leading order calculation of charm-quark production in charged-current deep-inelastic scattering, with full charm-quark mass dependence. The next-to-next-to-leading order corrections in perturbative quantum chromodynamics are found to be comparable in size to the next-to-leading order corrections in certain kinematic regions. We compare our predictions with data on dimuon production in (anti)neutrino scattering from a heavy nucleus. Our results can be used to improve the extraction of the parton distribution function of a strange quark in the nucleon.
SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, B; Liu, S; Zhang, T
2016-06-15
Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning system (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion-chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by TPS and subtracted from the measured dose. Aperture scattered-dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scatter-dose from all the edges of aperture, a sum of weighted distance was used in the model based on the distance from calculation point to aperture edge. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with range of 23 cm and aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of scatter-dose decreased linearly with the depth increase. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% for 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth has improved from 78% to 94%. Conclusion: Using the simple analytical method the discrepancy between the measured and calculated dose has significantly improved.
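The analytical model above can be illustrated with a minimal sketch: a 1D Gaussian scatter horn centered at the aperture edge whose amplitude decreases linearly with depth. The edge position, Gaussian width, amplitude, and cutoff depth below are hypothetical placeholders, not the fitted beam data.

```python
import numpy as np

def aperture_scatter(x, depth, edge=10.0, sigma=0.5, amp0=0.08, depth_max=15.0):
    """Aperture-scattered dose (fraction of total dose) vs lateral position x [cm].

    Modeled as a 1D Gaussian at the aperture edge; the amplitude falls off
    linearly with depth and vanishes at depth_max.
    """
    amp = max(amp0 * (1.0 - depth / depth_max), 0.0)
    return amp * np.exp(-0.5 * ((x - edge) / sigma) ** 2)

x = np.linspace(0.0, 12.0, 241)
horn_4cm = aperture_scatter(x, depth=4.0)    # visible scatter horn
horn_15cm = aperture_scatter(x, depth=15.0)  # horn has diminished to zero
```

For a 2-D field, the paper sums such edge contributions with weights based on the distance from each calculation point to the aperture edge; the 1D profile above is the building block of that sum.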
Calculation method for laser radar cross sections of rotationally symmetric targets.
Cao, Yunhua; Du, Yongzhi; Bai, Lu; Wu, Zhensen; Li, Haiying; Li, Yanhui
2017-07-01
The laser radar cross section (LRCS) is a key parameter in the study of target scattering characteristics. In this paper, a practical method for calculating LRCSs of rotationally symmetric targets is presented. Monostatic LRCSs for four kinds of rotationally symmetric targets (cone, rotating ellipsoid, super ellipsoid, and blunt cone) are calculated, and the results verify the feasibility of the method. Compared with the results for the triangular patch method, the correctness of the method is verified, and several advantages of the method are highlighted. For instance, the method does not require geometric modeling and patch discretization. The method uses a generatrix model and double integral, and its calculation is concise and accurate. This work provides a theory analysis for the rapid calculation of LRCS for common basic targets.
High-speed rupture during the initiation of the 2015 Bonin Islands deep earthquake
NASA Astrophysics Data System (ADS)
Zhan, Z.; Ye, L.; Shearer, P. M.; Lay, T.; Kanamori, H.
2015-12-01
Among the long-standing questions on how deep earthquakes rupture, the nucleation phase of large deep events is one of the most puzzling parts. Resolving the rupture properties of the initiation phase is difficult to achieve with far-field data because of the need for accurate corrections for structural effects on the waveforms (e.g., attenuation, scattering, and site effects) and alignment errors. Here, taking the 2015 Mw 7.9 Bonin Islands earthquake (depth = 678 km) as an example, we jointly invert its far-field P waves at multiple stations for the average rupture speed during the first second of the event. We use waveforms from a closely located aftershock as empirical Green's functions, and correct for possible differences in focal mechanisms and waveform misalignments with an iterative approach. We find that the average initial rupture speed is over 5 km/s, significantly higher than the average rupture speed of 3 km/s later in the event. This contrast suggests that rupture speeds of deep earthquakes can be highly variable during individual events and may define different stages of rupture, potentially with different mechanisms.
Application of fracture toughness scaling models to the ductile-to- brittle transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, R.E.; Joyce, J.A.
1996-01-01
An experimental investigation of fracture toughness in the ductile-brittle transition range was conducted. A large number of ASTM A533, Grade B steel, bend and tension specimens with varying crack lengths were tested throughout the transition region. Cleavage fracture toughness scaling models were utilized to correct the data for the loss of constraint in short crack specimens and tension geometries. The toughness scaling models were effective in reducing the scatter in the data, but tended to over-correct the results for the short crack bend specimens. A proposed ASTM Test Practice for Fracture Toughness in the Transition Range, which employs a master curve concept, was applied to the results. The proposed master curve over predicted the fracture toughness in the mid-transition and a modified master curve was developed that more accurately modeled the transition behavior of the material. Finally, the modified master curve and the fracture toughness scaling models were combined to predict the as-measured fracture toughness of the short crack bend and the tension specimens. It was shown that when the scaling models over correct the data for loss of constraint, they can also lead to non-conservative estimates of the increase in toughness for low constraint geometries.
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason E.
2003-11-01
A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High frequency predictions of statistical acoustics and geometrical acoustics models and predictions of coupling apertures all agree with measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho; Xing Lei; Lee, Rena
2012-05-15
Purpose: X-ray scatter incurred to detectors degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-Spline interpolation/extrapolation is applied to derive patient specific scatter information by using the scatter distributions on strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. The experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. 
The root mean square error relative to values inside the regions of interest selected from a benchmark scatter free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
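The strip-based scatter estimation can be sketched as follows. The paper uses 1D cubic B-spline interpolation/extrapolation; plain linear interpolation stands in here for brevity, and the detector geometry, strip readings, and opposite-view projection are all synthetic.

```python
import numpy as np

def estimate_scatter(strip_pos, strip_vals, detector_x):
    """Interpolate/extrapolate scatter samples from the blocker strips
    over the full detector row (linear stand-in for cubic B-splines)."""
    return np.interp(detector_x, strip_pos, strip_vals)

detector_x = np.arange(0.0, 100.0, 1.0)
# Scatter sampled under the lead strips on the blocked half (synthetic).
strip_pos = np.array([2.0, 12.0, 22.0, 32.0, 42.0])
strip_vals = np.array([5.0, 5.5, 6.0, 6.2, 6.1])
# Projection acquired at the opposite view, 180 degrees away (synthetic).
opposite_proj = 50.0 + 0.05 * detector_x

scatter = estimate_scatter(strip_pos, strip_vals, detector_x)
corrected = opposite_proj - scatter  # scatter-corrected projection
```

The corrected projections then feed the cosine-weighted FDK reconstruction; the TV regularization step of the paper is a separate optimization not sketched here.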
Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji
2002-06-01
Improvements in image quality and quantitation measurement, and the addition of detailed anatomical structures are important topics for single-photon emission tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw projection data of SPECT, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which was originally uniform, had recovered after scatter and attenuation correction processing. The clinical study also showed good overlapping of cardiac and skin surface outlines on the fused SPECT and X-ray CT images. 
The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
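The triple energy window (TEW) scatter correction used above estimates the scatter in the main photopeak window as a trapezoid spanned by counts in two narrow side windows. A minimal sketch, with illustrative window widths and counts rather than values from any acquisition:

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_main):
    """Estimated scatter counts inside the main photopeak window.

    c_low, c_high: counts in the narrow windows below/above the photopeak.
    w_low, w_high, w_main: widths (keV) of the side and main windows.
    The two side-window count densities define a trapezoid whose area over
    the main window approximates the scatter contribution.
    """
    return (c_low / w_low + c_high / w_high) * w_main / 2.0

# Illustrative pixel: 1000 counts in a 20 keV main window,
# 100 and 60 counts in 4 keV side windows.
scatter = tew_scatter(100.0, 60.0, 4.0, 4.0, 20.0)
primary_estimate = 1000.0 - scatter
```

Applying this per projection pixel before reconstruction removes most of the scatter component, which is why the artifactual inferior-wall activity reduction improved after correction.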
Alterations to the relativistic Love-Franey model and their application to inelastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeile, J.R.
The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the {pi} and {rho} mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.
Realistic simulated MRI and SPECT databases. Application to SPECT/MRI registration evaluation.
Aubert-Broche, Berengere; Grova, Christophe; Reilhac, Anthonin; Evans, Alan C; Collins, D Louis
2006-01-01
This paper describes the construction of simulated SPECT and MRI databases that account for realistic anatomical and functional variability. The data is used as a gold-standard to evaluate four SPECT/MRI similarity-based registration methods. Simulation realism was accounted for using accurate physical models of data generation and acquisition. MRI and SPECT simulations were generated from three subjects to take into account inter-subject anatomical variability. Functional SPECT data were computed from six functional models of brain perfusion. Previous models of normal perfusion and ictal perfusion observed in Mesial Temporal Lobe Epilepsy (MTLE) were considered to generate functional variability. We studied the impact that noise and intensity non-uniformity in MRI simulations and SPECT scatter correction may have on registration accuracy. We quantified the amount of registration error caused by anatomical and functional variability. Registration involving ictal data was less accurate than registration involving normal data. MR intensity nonuniformity was the main factor decreasing registration accuracy. The proposed simulated database is promising to evaluate many functional neuroimaging methods, involving MRI and SPECT data.
Ocean color remote sensing using polarization properties of reflected sunlight
NASA Technical Reports Server (NTRS)
Frouin, R.; Pouliquen, E.; Breon, F.-M.
1994-01-01
The effects of the atmosphere and surface on sunlight backscattered to space by the ocean may be substantially reduced by using the unpolarized component of reflectance instead of total reflectance. At 450 nm, a wavelength of interest in ocean color remote sensing, and for typical conditions, 45% of the unpolarized reflectance may originate from the water body instead of 20% of the total reflectance, which represents a gain of a factor 2.2 in useful signal for water composition retrieval. The best viewing geometries are adjacent to the glitter region; they correspond to scattering angles around 100 deg, but they may change slightly depending on the polarization characteristics of the aerosols. As aerosol optical thickness increases, the atmosphere becomes less efficient at polarizing sunlight, and the enhancement of the water body contribution to unpolarized reflectance is reduced. Since the perturbing effects are smaller on unpolarized reflectance, at least for some viewing geometries, they may be more easily corrected, leading to a more accurate water-leaving signal and, therefore, more accurate estimates of phytoplankton pigment concentration.
Coherent beam control through inhomogeneous media in multi-photon microscopy
NASA Astrophysics Data System (ADS)
Paudel, Hari Prasad
Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. 
Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending imaging FOV due to sample aberrations.
NASA Astrophysics Data System (ADS)
Naserpour, Mahin; Zapata-Rodríguez, Carlos J.
2018-01-01
The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally we discuss how our procedure can be favorably implemented in scattering problems.
Electroweak radiative corrections to neutrino scattering at NuTeV
NASA Astrophysics Data System (ADS)
Park, Kwangwoo; Baur, Ulrich; Wackeroth, Doreen
2007-04-01
The W boson mass extracted by the NuTeV collaboration from the ratios of neutral and charged-current neutrino and anti-neutrino cross sections differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3 σ. Several possible sources for the observed difference have been discussed in the literature, including new physics beyond the Standard Model (SM). However, in order to be able to pin down the cause of this discrepancy and to interpret this result as a deviation to the SM, it is important to include the complete electroweak one-loop corrections when extracting the W boson mass from neutrino scattering cross sections. We will present results of a Monte Carlo program for νN (νN) scattering including the complete electroweak O(α) corrections, which will be used to study the effects of these corrections on the extracted values for the electroweak parameters. We will briefly introduce some of the newly developed computational tools for generating Feynman diagrams and corresponding analytic expressions for one-loop matrix elements.
NASA Astrophysics Data System (ADS)
Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel
2018-06-01
We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
Analytic Scattering and Refraction Models for Exoplanet Transit Spectra
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.
2017-12-01
Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
Dispersive approach to two-photon exchange in elastic electron-proton scattering
Blunden, P. G.; Melnitchouk, W.
2017-06-14
We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high-energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, the results are compared with recent measurements of e⁺p to e⁻p elastic cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine-angle spectral matching; (b) hit-quality-index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for each subspace. Root mean square error of prediction, computed on a one-third-holdout validation set, was used to evaluate the predictive performance of subspace and global models. The effect of spectral pretreatment was also tested, comparing first- and second-derivative Savitzky-Golay filtering, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. Nevertheless, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
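As an illustration of two of the pretreatments named above, the following minimal Python sketch (toy data; the function and variable names are our own, not from the paper) applies a Savitzky-Golay derivative followed by standard normal variate scaling to a small batch of spectra:

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_derivative(spectra, window=11, poly=2, deriv=1):
    """Savitzky-Golay smoothed derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window, polyorder=poly,
                         deriv=deriv, axis=1)

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# toy library: 5 spectra x 200 wavelength channels
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200)).cumsum(axis=1)  # smooth-ish toy baselines
Xp = snv(sg_derivative(X))
```

After this pretreatment each spectrum has zero mean and unit variance, which removes additive and multiplicative baseline effects before calibration modelling.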
Ultra-high resolution electron microscopy
Oxley, Mark P.; Lupini, Andrew R.; Pennycook, Stephen J.
2016-12-23
The last two decades have seen dramatic advances in the resolution of the electron microscope, brought about by the successful correction of lens aberrations that previously limited resolution for most of its history. Here we briefly review these advances, the achievement of sub-Ångstrom resolution and the ability to identify individual atoms, their bonding configurations and even their dynamics and diffusion pathways. We then present a review of the basic physics of electron scattering, lens aberrations and their correction, and an approximate imaging theory for thin crystals which provides physical insight into the various imaging modes. We then proceed to describe a more exact imaging theory, starting from Yoshioka's formulation and covering full image simulation methods using Bloch waves, the multislice formulation and the frozen phonon/quantum excitation of phonons models. Delocalization of inelastic scattering has become an important limiting factor at atomic resolution. We therefore discuss this issue extensively, showing how the full-width-half-maximum is the appropriate measure for predicting image contrast, but the diameter containing 50% of the excitation is an important measure of the range of the interaction. These two measures can differ by a factor of 5, are not a simple function of binding energy, and full image simulations are required to match to experiment. The Z-dependence of annular dark field images is also discussed extensively, both for single atoms and for crystals, and we show that temporal incoherence must be included accurately if atomic species are to be identified by matching experimental intensities to simulations. Finally we mention a few promising directions for future investigation.
Titus, L. J.; Nunes, Filomena M.
2014-03-12
Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
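The intensity-optimization step described above, steepest descent with a constant step size on a least-squared-error dose objective, can be sketched as follows. This is a toy problem with an assumed dose-deposition matrix; the leakage and head-scatter corrections applied at each iteration in the paper are omitted:

```python
import numpy as np

def optimize_intensity(D, d_target, step, iters):
    """Steepest descent with a constant step on the least-squared-error
    objective f(w) = 0.5 * ||D @ w - d_target||^2, clamping the incident
    intensities w to be nonnegative (fluence cannot be negative)."""
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - d_target)   # gradient of the LSE objective
        w = np.maximum(w - step * grad, 0.0)
    return w

# toy problem: 50 dose voxels, 10 beamlet weights
rng = np.random.default_rng(1)
D = rng.random((50, 10))                  # toy dose-deposition matrix
w_true = rng.random(10)
d_target = D @ w_true                     # prescribed dose
w_opt = optimize_intensity(D, d_target, step=5e-3, iters=4000)
```

The constant step must be smaller than 2 divided by the largest eigenvalue of D.T @ D for the iteration to converge, which is why a conservative value is used here.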
Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidars.
She, C Y
2001-09-20
It is well known that scattering lidars, i.e., Mie, aerosol-wind, Rayleigh, high-spectral-resolution, molecular-wind, rotational Raman, and vibrational Raman lidars, are workhorses for probing atmospheric properties, including the backscatter ratio, aerosol extinction coefficient, temperature, pressure, density, and winds. The spectral structure of molecular scattering (strength and bandwidth) and its constituent spectra associated with Rayleigh and vibrational Raman scattering are reviewed. By restoring the correct terminology, distinguishing Cabannes scattering from Rayleigh scattering, and by sharpening the definition of each component of the Rayleigh scattering spectrum, the review allows a systematic, logical, and useful comparison, in strength and bandwidth, between the scattering components, and a comparison of receiver bandwidths (for both nighttime and daytime operation) between the various scattering lidars for atmospheric sensing.
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
A hybrid HDRF model of GOMS and SAIL: GOSAIL
NASA Astrophysics Data System (ADS)
Dou, B.; Wu, S.; Wen, J.
2016-12-01
Understanding surface reflectance anisotropy, which describes how a land surface reflects solar radiation directionally, is key to interpreting land surface features from remotely sensed information. Most reflectance anisotropy models assume the surface is illuminated only by direct solar radiation, whereas diffuse skylight becomes dominant under overcast skies and over highly rugged terrain. Correcting for the effect of diffuse skylight on reflectance anisotropy, to obtain the intrinsic directional reflectance of the land surface, is highly desirable for remote sensing applications. This paper develops a hybrid HDRF model of GOMS and SAIL, called GOSAIL, for discrete canopies. Accurate area proportions of the four scene components are calculated by the GOMS model, and the spectral signatures of the scene components are provided by the SAIL model. Both the single-scattering contribution and the multiple-scattering contributions within and between the canopy and background, under clear and diffuse illumination conditions, are considered in the GOSAIL model. The HDRF simulated by the 3-D Discrete Anisotropic Radiative Transfer (DART) model and HDRF measurements over a 100 m × 100 m mature pine stand at Järvselja, Estonia, are used to validate and evaluate the performance of the proposed model. The comparison results indicate that GOSAIL accurately reproduces the angular features of a discrete canopy under both clear and overcast atmospheric conditions. The GOSAIL model is promising for retrieving land surface biophysical parameters (e.g., albedo, leaf area index) over heterogeneous terrain.
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)
2000-01-01
This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the δ-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results and for consistency where there are no published results. This careful attention to software design has been just as important to DISORT's popularity as its powerful algorithmic content.
Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.
2016-12-01
Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. At the same time, the RT models must be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made, owing to increases in computer speed and to improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple-scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, while being orders of magnitude faster. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information.
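The compression idea behind the PCA technique can be illustrated with a minimal sketch (synthetic data; all names are ours): the optical-property data set is reduced to a few principal components, so that the expensive multiple-scattering calculations need only be run at the mean profile and at perturbations along each retained component.

```python
import numpy as np

# synthetic optical-property data set: 3000 spectral points, 20 properties
# (e.g. layer absorption optical depths), built from 4 underlying modes so
# the data are strongly correlated across wavelength
rng = np.random.default_rng(2)
modes = rng.normal(size=(4, 20))
F = rng.normal(size=(3000, 4)) @ modes

# PCA via SVD of the mean-removed data
mu = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mu, full_matrices=False)
k = 4                                     # retained principal components
F_hat = mu + (U[:, :k] * s[:k]) @ Vt[:k]  # rank-k reconstruction
frac_var = (s[:k] ** 2).sum() / (s ** 2).sum()

# the expensive multiple-scattering RT model now needs to be evaluated only
# at the mean profile and at +/- perturbations along the k components
# (2k + 1 runs) instead of at all 3000 spectral points
```

Because the synthetic data are exactly rank 4, four components capture essentially all the variance; real optical-property sets typically need only a handful of components for sub-percent accuracy.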
The fully relativistic implementation of the convergent close-coupling method
NASA Astrophysics Data System (ADS)
Bostock, Christopher James
2011-04-01
The calculation of accurate excitation and ionization cross sections for electron collisions with atoms and ions plays a fundamental role in atomic and molecular physics, laser physics, x-ray spectroscopy, plasma physics and chemistry. Within the veil of plasma physics lie important research areas affiliated with the lighting industry, nuclear fusion and astrophysics. For high energy projectiles or targets with a large atomic number it is presently understood that a scattering formalism based on the Dirac equation is required to incorporate relativistic effects. This tutorial outlines the development of the relativistic convergent close-coupling (RCCC) method and highlights the following three main accomplishments. (i) The inclusion of the Breit interaction, a relativistic correction to the Coulomb potential, in the RCCC method. This led to calculations that resolved a discrepancy between theory and experiment for the polarization of x-rays emitted by highly charged hydrogen-like ions excited by electron impact (Bostock et al 2009 Phys. Rev. A 80 052708). (ii) The extension of the RCCC method to accommodate two-electron and quasi-two-electron targets. The method was applied to electron scattering from mercury. Accurate plasma physics modelling of mercury-based fluorescent lamps requires detailed information on a large number of electron impact excitation cross sections involving transitions between various states (Bostock et al 2010 Phys. Rev. A 82 022713). (iii) The third accomplishment outlined in this tutorial is the restructuring of the RCCC computer code to utilize a hybrid OpenMP-MPI parallelization scheme which now enables the RCCC code to run on the latest high performance supercomputer architectures.
Measurement of the main and critical parameters for optimal laser treatment of heart disease
NASA Astrophysics Data System (ADS)
Kabeya, FB; Abrahamse, H.; Karsten, AE
2017-10-01
Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and can present serious side effects. The true reason for laser treatment failure, or for the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (referred to here as heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, and the absorption and scattering coefficients of the target tissue. This paper therefore proposes accurate methods for measuring these critical parameters to determine the optimal laser treatment of heart disease with minimal risk of side effects. The measured absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computational technique is a Monte Carlo program that uses random sampling and probability statistics to track the propagation of photons through biological tissue.
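A minimal version of the Monte Carlo photon-tracking idea mentioned above might look like the following. This is a 1-D isotropic-scattering toy model with assumed coefficients, not the authors' code:

```python
import numpy as np

def mc_absorbed_fraction(mu_a, mu_s, n_photons=5000, seed=3):
    """1-D Monte Carlo random walk in semi-infinite tissue: photons enter
    at depth z=0 heading inward; path lengths are drawn from the total
    attenuation mu_t = mu_a + mu_s; at each interaction the photon is
    absorbed with probability mu_a/mu_t, otherwise scattered isotropically.
    Returns the fraction of photons absorbed (the rest escape at z<0)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0
        while True:
            z += cos_t * rng.exponential(1.0 / mu_t)   # free path
            if z < 0.0:                                # escaped the tissue
                break
            if rng.random() < mu_a / mu_t:             # absorption event
                absorbed += 1
                break
            cos_t = rng.uniform(-1.0, 1.0)             # isotropic scatter
    return absorbed / n_photons

frac = mc_absorbed_fraction(mu_a=0.1, mu_s=10.0)   # toy coefficients, cm^-1
```

Real tissue codes add anisotropic phase functions, refractive index boundaries and 3-D geometry, but the sampling structure is the same.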
SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W; Kappadath, S
Purpose: To compare projection-based versus global correction methods that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections that were individually corrected for deadtime losses; and (2) from the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss over 5 projections/detector. In both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated on the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn on the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to better than 99%. Conclusion: The projection-based and global corrections yield visually indistinguishable SPECT images.
The global correction, based on sparse sampling of projection losses, allows accurate SPECT deadtime loss correction while keeping the study duration reasonable.
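The two correction strategies compared in this abstract can be sketched numerically (synthetic projection data; the function names and values are ours):

```python
import numpy as np

def correct_projections(counts, losses):
    """Projection-based correction: divide each projection by its own
    live fraction (1 - fractional deadtime loss)."""
    return counts / (1.0 - losses)

def correct_global(counts, losses, sample_every):
    """Global correction: one scale factor from the average fractional
    loss of a sparse subset of projections (e.g. ~5 per detector)."""
    mean_loss = losses[::sample_every].mean()
    return counts / (1.0 - mean_loss)

# synthetic acquisition: 64 projections, ~10% deadtime loss each
rng = np.random.default_rng(4)
counts = rng.poisson(1e5, size=64).astype(float)
losses = rng.uniform(0.09, 0.11, size=64)

per_projection = correct_projections(counts, losses)
global_scaled = correct_global(counts, losses, sample_every=13)  # ~5 samples
```

When the loss fraction varies little across projections, as in this toy case, the sparsely sampled global factor recovers the total counts to within about a percent of the per-projection correction, mirroring the abstract's conclusion.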
A curvature-corrected Kirchhoff formulation for radar sea-return from the near vertical
NASA Technical Reports Server (NTRS)
Jackson, F. C.
1974-01-01
A new theoretical treatment of the problem of electromagnetic wave scattering from a randomly rough surface is given. A high-frequency correction to the Kirchhoff approximation is derived from a field integral equation for a perfectly conducting surface. The correction, which accounts for the effect of local surface curvature, is seen to be identical with an asymptotic form found by Fock (1945) for diffraction by a paraboloid. The corrected boundary values are substituted into the far-field Stratton-Chu integral, and average backscattered powers are computed assuming the scattering surface is a homogeneous Gaussian process. Preliminary calculations for a k^-4 ocean wave spectrum indicate a reasonable modelling of polarization effects near the vertical (theta < 45 deg). Correspondence with the results of small perturbation theory is shown.
Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.
Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan
2018-03-01
Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. 
Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy.
On the measurement of Rayleigh scattering by gases at 6328A
NASA Technical Reports Server (NTRS)
SHARDANAND; Gupta, S. K.
1973-01-01
The problem of laboratory measurements of Rayleigh scattering and depolarization ratio for atoms and molecules in the gaseous state is described. It is shown that, if the scattered radiation measurements are made at two angles, the normal depolarization ratio cannot be determined meaningfully. However, from scattering measurements, the Rayleigh scattering cross sections can be determined accurately. The measurements of Rayleigh scattering from He, H2, Ar, O2, and N2 for unpolarized radiation at 6328A are reported and compared with similar measurements at 6943 and 1215.7A.
Atmospheric correction for remote sensing image based on multi-spectral information
NASA Astrophysics Data System (ADS)
Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen
2018-03-01
The light collected by remote sensors in space must transit the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. To generate high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. A physics-based, detailed radiative transfer model such as 6SV requires considerable key ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used as input to the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
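As a hedged illustration of the DN-to-SR conversion step, the sketch below applies the standard three-coefficient atmospheric inversion that the 6S/6SV code reports per band; the calibration gain/offset and the coefficient values here are made-up placeholders, not values from the paper:

```python
def surface_reflectance(L_toa, xa, xb, xc):
    """Invert a TOA radiance to surface reflectance with the three
    per-band atmospheric correction coefficients (xa, xb, xc) that the
    6S/6SV code reports for a given AOD, water vapor and geometry:
    y = xa*L - xb;  SR = y / (1 + xc*y)."""
    y = xa * L_toa - xb
    return y / (1.0 + xc * y)

# made-up sensor calibration and coefficients for one band (placeholders)
gain, offset = 0.05, 1.2          # DN -> radiance (W m-2 sr-1 um-1)
xa, xb, xc = 0.0032, 0.12, 0.18   # assumed 6SV outputs
dn = 1500
rho = surface_reflectance(gain * dn + offset, xa, xb, xc)
```

Per-pixel correction amounts to evaluating this inversion with coefficients that vary with the AOD and TWV retrieved at each pixel, rather than one coefficient set per scene.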
Stationary table CT dosimetry and anomalous scanner-reported values of CTDIvol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Robert L., E-mail: rdixon@wfubmc.edu; Boone, John M.
2014-01-15
Purpose: Anomalous, scanner-reported values of CTDIvol for stationary phantom/table protocols (having elevated values of CTDIvol over 300% higher than the actual dose to the phantom) have been observed; these are well beyond the typical accuracy expected of CTDIvol as a phantom dose. Recognition of these outliers as “bad data” is important to users of CT dose index tracking systems (e.g., ACR DIR), and a method for recognition and correction is provided. Methods: Rigorous methods and equations are presented which describe the dose distributions for stationary-table CT. A comparison with formulae for scanner-reported values of CTDIvol clearly identifies the source of these anomalies. Results: For the stationary table, use of the CTDI100 formula (applicable to a moving phantom only) overestimates the dose due to extra scatter and also includes an overbeaming correction, both of which are nonexistent when the phantom (or patient) is held stationary. The reported DLP remains robust for the stationary phantom. Conclusions: The CTDI paradigm does not apply in the case of a stationary phantom, and simpler nonintegral equations suffice. A method of correcting the currently reported CTDIvol, using the approach-to-equilibrium formula H(a) and an overbeaming correction factor, serves to scale the reported CTDIvol values to more accurate levels for stationary-table CT, as well as serving as an indicator in the detection of “bad data.”
Mbaye, Moussa; Diaw, Pape Abdoulaye; Gaye-Saye, Diabou; Le Jeune, Bernard; Cavalin, Goulven; Denis, Lydie; Aaron, Jean-Jacques; Delmas, Roger; Giamarchi, Philippe
2018-03-05
Permanent online monitoring of water supply pollution by hydrocarbons is needed for various industrial plants, to serve as an alert when thresholds are exceeded. Fluorescence spectroscopy is a suitable technique for this purpose due to its sensitivity and moderate cost. However, fluorescence measurements can be disturbed by the presence of suspended organic matter, which induces beam scattering and absorption, leading to an underestimation of hydrocarbon content. To overcome this problem, we propose an original technique of fluorescence spectra correction, based on a measure of the excitation beam scattering caused by suspended organic matter on the left side of the Rayleigh scattering spectral line. This correction allowed us to obtain a statistically validated estimate of the naphthalene content (used as representative of the polyaromatic hydrocarbon contamination), regardless of the amount of suspended organic matter in the sample. Moreover, it thus becomes possible, based on this correction, to estimate the amount of suspended organic matter. By this approach, the online warning system remains operational even when suspended organic matter is present in the water supply.
Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki
2011-01-01
(123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter to assess regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio correlates between two different gamma camera systems and sought a formula for converting H/M ratios between them. Moreover, we assessed the feasibility of the (123)I dual window (IDW) method, a scatter correction method, and compared H/M ratios with and without it. The H/M ratio displayed a good correlation between the two gamma camera systems, and we were able to construct a new H/M conversion formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
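The IDW method in this abstract is a 123I-specific dual-window scatter correction; as a related, widely used illustration, the sketch below applies the triple-energy-window (TEW) scatter estimate to toy ROI counts (all count values and window widths are assumed, not from the study):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window estimate of the scatter counts inside the
    main photopeak window: trapezoidal interpolation between two narrow
    flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# toy ROI counts in the main window and the flanking windows (assumed)
heart, medi = 1200.0, 400.0
s_heart = tew_scatter(90.0, 30.0, w_lower=7.0, w_upper=7.0, w_main=28.0)
s_medi = tew_scatter(60.0, 20.0, w_lower=7.0, w_upper=7.0, w_main=28.0)

hm_raw = heart / medi                               # no scatter correction
hm_corrected = (heart - s_heart) / (medi - s_medi)  # scatter-corrected H/M
```

Because the mediastinal ROI contains proportionally more scatter than the heart ROI in this toy example, subtracting the scatter estimate raises the H/M ratio, illustrating why uncorrected and corrected ratios need separate normal ranges.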
The effect of precipitation on measuring sea surface salinity from space
NASA Astrophysics Data System (ADS)
Jin, Xuchen; Pan, Delu; He, Xianqiang; Wang, Difeng; Zhu, Qiankun; Gong, Fang
2017-10-01
The sea surface salinity (SSS) can be measured from space using L-band (1.4 GHz) microwave radiometers. The L-band has been chosen because its brightness temperature is sensitive to changes in salinity. Even so, SSS remote sensing remains challenging because that sensitivity is low: for vertical polarization it is about 0.4 to 0.8 K/psu, depending on incidence angle and sea surface temperature; for horizontal polarization it is about 0.2 to 0.6 K/psu. This means that radiometric measurements must be accurate to better than 1 K even at the best sensitivity. Therefore, in order to retrieve SSS, the brightness temperature measured at the top of atmosphere (TOA) needs to be corrected for many sources of error, one of the main geophysical sources being the atmosphere. Currently, the atmospheric effect at L-band is usually corrected with an absorption and emission model, which estimates the radiation absorbed and emitted by the atmosphere. However, such models neglect the radiation scattered by precipitation, which can be significant under heavy rain. In this paper, a vector radiative transfer model for a coupled atmosphere-ocean system with a rough surface is developed to simulate the brightness temperature at the TOA under different precipitation conditions. The model is based on the adding-doubling method and includes oceanic emission and reflection as well as atmospheric absorption and scattering. For the ocean surface, an empirical emission model established by Gabarro and the isotropic Cox-Munk wave model with a shadowing effect are used to simulate the emission and reflection of the sea surface.
The atmospheric attenuation is treated in two parts. For the rain layer, a Marshall-Palmer drop-size distribution is used and the scattering properties of the hydrometeors are calculated with Mie theory (the scattering hydrometeors are assumed to be spherical). For the other atmospheric layers, which are assumed to be clear sky, Liebe's millimeter-wave propagation model (MPM93) is used to calculate the absorption coefficients of oxygen, water vapor, and cloud droplets. To simulate the change of brightness temperature caused by different rain rates (0-50 mm/h), we assume a 26-layer precipitation structure corresponding to NCEP FNL data. Our radiative transfer simulations show that the brightness temperature at the TOA can be influenced significantly by heavy precipitation: at an incidence angle of 42.5°, precipitation introduces a positive bias, and when the rain rate rises to 50 mm/h the brightness temperature increases approach 0.6 K and 0.8 K for horizontally and vertically polarized brightness temperature, respectively. Thus, in the case of heavy precipitation, the current absorption and emission models are not accurate enough to correct the atmospheric effect, and a radiative transfer model that accounts for scattering by precipitation should be used.
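The sensitivity figures above translate directly into a radiometric accuracy requirement: the brightness-temperature change produced by a salinity change is the sensitivity times that change. A minimal sketch (the 0.2 psu retrieval target is an illustrative assumption, not from the text):

```python
def tb_change(sensitivity_k_per_psu, delta_sss_psu):
    """Brightness-temperature change produced by a salinity change."""
    return sensitivity_k_per_psu * delta_sss_psu

# Vertical polarization spans roughly 0.4-0.8 K/psu with incidence angle
# and sea surface temperature; resolving a (hypothetical) 0.2 psu change
# requires radiometric accuracy at the level of:
requirements = [tb_change(s, 0.2) for s in (0.4, 0.8)]  # in kelvin
```

Since a 50 mm/h rain rate alone shifts the TOA brightness temperature by 0.6-0.8 K, the rain-induced bias can exceed the signal of interest by several times.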
Reciprocal space mapping and single-crystal scattering rods.
Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao
2005-11-01
Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.
Quasi-elastic nuclear scattering at high energies
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.
1992-01-01
The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.
Concept of a Fast and Simple Atmospheric Radiative Transfer Model for Aerosol Retrieval
NASA Astrophysics Data System (ADS)
Seidel, Felix; Kokhanovsky, Alexander A.
2010-05-01
Radiative transfer modelling (RTM) is an indispensable tool for a number of applications, including astrophysics, climate studies and quantitative remote sensing. It simulates the attenuation of light through a translucent medium. Here, we look at the scattering and absorption of solar light on its way to the Earth's surface and back to space or into a remote sensing instrument. RTM is regularly used in the framework of the so-called atmospheric correction to find properties of the surface. Further, RTM can be inverted to retrieve features of the atmosphere, such as the aerosol optical depth (AOD). Present-day RTM codes, such as 6S, MODTRAN, SHARM, RT3, SCIATRAN or RTMOM, have errors of only a few percent, but they are rather slow and often not easy to use. We present here a concept for a fast and simple RTM in the visible spectral range. It uses a blend of different existing RTM approaches with a special emphasis on fast approximate analytical equations and parametrizations. This concept may be helpful for efficient retrieval algorithms that do not have to rely on the classic look-up-table (LUT) approach. For example, it can be used to retrieve AOD without complex inversion procedures involving multiple iterations. Naturally, there is always a trade-off between speed and modelling accuracy. The code can therefore be run in two different modes. The regular mode provides a reasonable ratio between speed and accuracy, while the optional mode is very fast but less accurate. The regular mode approximates the diffuse scattered light by calculating the first (single scattering) and second orders of scattering according to the classical method of successive orders of scattering. The very fast mode calculates only the single-scattering approximation, which does not need any slow numerical integration procedure, and uses a simple correction factor to account for multiple scattering.
This factor is a parametrization of MODTRAN results, which provide a typical ratio between singly and multiply scattered light. A comparison of the presented RTM concept with the widely accepted 6S RTM reveals errors of up to 10% in regular mode, which is acceptable for certain applications. The very fast mode may lead to errors of up to 30%, but it is still able to reproduce the results of 6S qualitatively. An experimental implementation of this RTM concept is written in the common IDL language. It is therefore very flexible and straightforward to implement into custom retrieval algorithms of the remote sensing community. The code might also be used to add an atmosphere on top of an existing vegetation-canopy or water RTM. Due to the ease of use of the RTM code and the comprehensibility of the internal equations, the concept might be useful for educational purposes as well. The very fast mode could be of interest for real-time applications, such as an in-flight instrument performance check for airborne optical sensors. In the future, the concept can be extended to account for scattering according to Mie theory, polarization and gaseous absorption. It is expected that this would reduce the model error to 5% or less.
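The very fast mode described above reduces to a closed-form expression: the single-scattering path reflectance scaled by a multiple-scattering correction factor. A sketch under stated assumptions (the Rayleigh phase function and the constant correction factor are illustrative stand-ins for the MODTRAN-derived parametrization):

```python
def rayleigh_phase(cos_scat):
    """Rayleigh phase function, normalized so its average over the sphere is 1."""
    return 0.75 * (1.0 + cos_scat ** 2)

def path_reflectance_fast(tau, omega0, mu0, mu, cos_scat, ms_factor=1.2):
    """Very fast mode: single-scattering path reflectance for optical depth
    tau, single-scattering albedo omega0, and sun/view cosines mu0, mu,
    multiplied by a multiple-scattering factor (placeholder value)."""
    rho_ss = omega0 * rayleigh_phase(cos_scat) * tau / (4.0 * mu0 * mu)
    return rho_ss * ms_factor

# Example: thin atmosphere, sun slightly off zenith, near-backscatter geometry
rho = path_reflectance_fast(tau=0.2, omega0=1.0, mu0=0.9, mu=1.0, cos_scat=-0.9)
```

The regular mode would add a second-order scattering term, computed by one angular integration over the first-order field, in place of the constant `ms_factor`.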
NASA Astrophysics Data System (ADS)
Freedman, A.; Onasch, T. B.; Renbaum-Wollf, L.; Lambe, A. T.; Davidovits, P.; Kebabian, P. L.
2015-12-01
Accurate, as compared to precise, measurement of aerosol absorption has always posed a significant problem for the particle radiative properties community. Filter-based instruments do not actually measure absorption but rather light transmission through the filter; absorption must be derived from this data using multiple corrections. The potential for matrix-induced effects is also great for organic-laden aerosols. The introduction of true in situ measurement instruments using photoacoustic or photothermal interferometric techniques represents a significant advance in the state-of-the-art. However, measurement artifacts caused by changes in humidity still represent a significant hurdle as does the lack of a good calibration standard at most measurement wavelengths. And, in the absence of any particle-based absorption standard, there is no way to demonstrate any real level of accuracy. We, along with others, have proposed that under the circumstance of low single scattering albedo (SSA), absorption is best determined by difference using measurement of total extinction and scattering. We discuss a robust, compact, field deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients and thus the single scattering albedo (SSA) on the same sample volume. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured using integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement of non-absorbing particles. For small particles and low SSA, absorption can be measured with an accuracy of 6-8% at absorption levels as low as a few Mm-1. We present new results of the measurement of the mass absorption coefficient (MAC) of soot generated by an inverted methane diffusion flame at 630 nm. 
A value of 6.60 ±0.2 m2 g-1 was determined where the uncertainty refers to the precision of the measurement. The overall accuracy of the measurement, traceable to the properties of polystyrene latex particles, is estimated to be better than ±10%.
Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance
NASA Astrophysics Data System (ADS)
Hugtenburg, Richard P.; Bradley, David A.
2004-01-01
The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data are useful for analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to ASF are given, and new total cross-section data, including those of the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed; this system enables determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several hundred eV of the edge. Measurements confirm the presence of below-edge bound-bound structure, and variation in that structure due to the ionic state, that are not currently included in tabulations.
Chiral symmetry constraints on resonant amplitudes
NASA Astrophysics Data System (ADS)
Bruns, Peter C.; Mai, Maxim
2018-03-01
We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.
Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA
2007-05-01
A method and system for solving the inverse acoustic scattering problem using an iterative approach that incorporates half-off-shell transition matrix (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, whereas the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.
Holographic corrections to meson scattering amplitudes
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-06-01
We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is reproducible in many such examples, in four-point as well as higher-point amplitudes.
Automated Selection of Metal-Poor Stars in the Galaxy
NASA Astrophysics Data System (ADS)
Rhee, Jaehyon
2000-08-01
In this thesis I have developed algorithms for the efficient reduction and analysis of a large set of objective-prism data, and for the reliable selection of extremely metal-poor candidate stars in the Galaxy. Automated computer scans of the 308 photographic plates in the HK objective-prism / interference-filter survey of Beers and colleagues have been carried out with the Automatic Plate Measuring (APM) machine in Cambridge, England. Highly automated software tools have been developed in order to identify useful spectra and remove unusable spectra, to locate the positions of the Ca II H (3969 Å) and K (3933 Å) absorption lines, and to construct approximate continua. Equivalent widths of the Ca II H and K lines were then measured directly from these reduced spectra. A subset of 294,039 spectra from 87 of the HK survey plates (located within approximately 30 degrees of the South Galactic Pole) was extracted. Of these, 221,670 (75.4%) proved to be useful for subsequent analysis. I have explored new methodology, making use of an Artificial Neural Network (ANN) analysis approach, in order to select extremely metal-poor star candidates with high efficiency. The ANNs were trained to predict metallicity, [Fe/H], and to classify stars into 6 groups separated by temperature and metal abundance, based on two accurately measured parameters -- the de-reddened broadband (B-V)0 color for known HK survey stars with available photometric information, and the equivalent width of the Ca II K line in an 18 Å band, the K18 index, as measured from follow-up medium-resolution spectroscopy taken during the course of the HK survey. When provided with accurate input data, the trained networks were able to estimate [Fe/H] and to determine the class with high accuracy (with a robust estimated one-sigma scatter of SBI = 0.13 dex, and an overall correct classification rate of 91%).
The ANN approach was then used to recover information on the K18 index and (B-V)0 color directly from the APM-extracted spectra. Trained networks fed with known colors, measured peak fluxes, and the raw fluxes of the low-resolution digital spectra were able to predict the K18 index with a one-sigma scatter in the range 1.2 < SBI < 1.4 Å, depending on the color and strength of the line. When fed calibrated, multiple-band photographic measurements of apparent magnitudes, peak fluxes, and the fluxes of estimated continua of the extracted APM spectra, the trained networks were able to estimate (B-V)0 colors with a scatter in the range 0.13 < SBI < 0.16 magnitudes. From an application of the ANN approach using the less accurate information obtained from the calibrated estimates of K18 and (B-V)0 colors, it still proved possible to obtain metal abundance estimates with a scatter of SBI = 0.78 dex, and to carry out classifications with an overall correct classification rate of 40%. By comparison with a large sample of known metal-poor stars, on the order of 60% of the candidates predicted to have a metallicity [Fe/H] < -2.0 indeed fell in this region of abundance (representing a three-fold improvement over the visual selection criteria previously employed in the HK survey). The recovery rate indicated that at least 30% of all such stars in our sample would be identified in a blind sampling, limited, for the most part, by the lack of accurate color information. Finally, we report 481 extremely metal-poor star candidates in 10 plates of the HK survey, selected by our newly developed methodology.
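The network described, two measured inputs mapped to a metallicity estimate, can be sketched as a small feed-forward pass. The weights below are random placeholders, since the trained weights are not given in the text; only the architecture (color and K18 index in, [Fe/H] out) follows the description.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # (B-V)0 and K18 -> 8 hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> [Fe/H] estimate

def predict_feh(bv0, k18):
    """Forward pass of a tiny MLP: inputs are the de-reddened color and the
    Ca II K-line index; output is an (untrained) [Fe/H] estimate."""
    x = np.array([bv0, k18])
    h = np.tanh(x @ W1 + b1)        # hidden layer, tanh activation
    return float(h @ W2 + b2)

feh = predict_feh(0.45, 2.0)        # placeholder inputs (mag, Angstrom)
```

Training against spectroscopic [Fe/H] labels (e.g. by backpropagation) would replace the random weights; the thesis additionally trains a classification head over six temperature/abundance groups.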
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelsen, Kelly; Krishnaswamy, Venkat; Pogue, Brian W.
2012-07-15
Purpose: Design optimization and phantom validation of an integrated digital breast tomosynthesis (DBT) and near-infrared spectral tomography (NIRST) system targeting improvement in sensitivity and specificity of breast cancer detection is presented. Factors affecting instrumentation design include minimization of cost, complexity, and examination time while maintaining high-fidelity NIRST measurements with sufficient information to recover accurate optical property maps. Methods: Reconstructed DBT slices from eight patients with abnormal mammograms provided anatomical information for the NIRST simulations. A limited frequency domain (FD) and extensive continuous wave (CW) NIRST system was modeled. The FD components provided tissue scattering estimates used in the reconstruction of the CW data. Scattering estimates were perturbed to study the effects on hemoglobin recovery. Breast-mimicking agar phantoms with inclusions were imaged using the combined DBT/NIRST system for comparison with simulation results. Results: Patient simulations derived from DBT images show successful reconstruction of both normal and malignant lesions in the breast. They also demonstrate the importance of accurately quantifying tissue scattering. Specifically, 20% errors in optical scattering resulted in 22.6% or 35.1% error in quantification of total hemoglobin concentration, depending on whether scattering was over- or underestimated, respectively. Limited frequency-domain optical signal sampling provided scattering estimates for two regions (fat and fibroglandular tissue), which reduced the hemoglobin quantification error in the tumor region by 31% relative to using a single estimate of optical scattering throughout the breast volume of interest. Acquiring frequency-domain data with six wavelengths instead of three did not significantly improve the hemoglobin concentration estimates.
Simulation results were confirmed through experiments in two-region breast-mimicking gelatin phantoms. Conclusions: Accurate characterization of scattering is necessary for quantification of hemoglobin. Based on this study, a system design is described that optimally combines breast tomosynthesis with NIRST.
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.
2012-07-01
Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and the myocardial wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% of the truth to within 5% for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied with different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
NASA Astrophysics Data System (ADS)
Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.
2015-05-01
Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and the receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by the aerosol and water vapor in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared with the standard atmospheric conditions usually assumed in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested: more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source and measures the strength of scattering processes, mainly caused by aerosol particles, in a small air volume. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver, and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments was developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation; that is, the attenuation is corrected to the actual, time-dependent solar spectrum reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented.
To optimize the Absorption and Broadband Correction (ABC) procedure, measurements from a nearby sun photometer are used as additional input to improve the description of the on-site atmosphere in the algorithm. Comparing both uncorrected and spectral- and absorption-corrected extinction data from one year of measurements at the Plataforma Solar de Almería, the mean difference between the scatterometer and the transmissometer is reduced from 4.4% to 0.6%. Applying the ABC procedure without the additional sun photometer input still reduces the difference between the two sensors to about 0.8%; an expert guess assuming a standard continental aerosol profile instead of sun photometer input results in a mean difference of 0.81%. Therefore, with this new correction method, both instruments can be used to determine the solar broadband extinction in tower plants with sufficient accuracy.
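The spectral step of the correction, extrapolating a monochromatic attenuation measurement to a solar-weighted broadband value, can be sketched as follows. The toy solar spectrum and the Ångström-type wavelength scaling are illustrative assumptions, not the published ABC procedure.

```python
import numpy as np

wavelengths = np.linspace(0.35, 2.5, 200)                  # micrometers
solar_weight = np.exp(-((wavelengths - 0.55) / 0.6) ** 2)  # toy solar spectrum
solar_weight /= solar_weight.sum()

def broadband_transmittance(ext_532_per_km, path_km, alpha=1.3):
    """Scale the 532 nm extinction coefficient with an Angstrom-type power
    law and average the path transmittance over the (toy) solar spectrum."""
    ext = ext_532_per_km * (wavelengths / 0.532) ** (-alpha)
    return float(np.sum(solar_weight * np.exp(-ext * path_km)))

t_broad = broadband_transmittance(ext_532_per_km=0.1, path_km=1.0)
```

In the published procedure the weighting spectrum is the collector-reflected solar spectrum, which varies with time; the scatterometer additionally receives an absorption correction before this step.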
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach.
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
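The per-iteration intensity-correction step can be sketched as a tissue-class mean match between currently overlapping voxels. The quantile-based class definition and the synthetic mismatch below are illustrative simplifications of the paper's tissue-specific estimator, and the Demons displacement update itself is omitted.

```python
import numpy as np

def intensity_match(ct, cbct, n_classes=3):
    """Shift CBCT intensities so each (CBCT-defined) tissue class matches
    the mean CT intensity of the same voxels."""
    edges = np.quantile(cbct, np.linspace(0, 1, n_classes + 1))[1:-1]
    labels = np.digitize(cbct, edges)       # class label per voxel
    out = cbct.astype(float).copy()
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            out[mask] += ct[mask].mean() - cbct[mask].mean()
    return out

rng = np.random.default_rng(1)
ct = rng.normal(100.0, 20.0, 1000)
cbct = 0.5 * ct + 30.0 + rng.normal(0.0, 5.0, 1000)  # simulated mismatch
matched = intensity_match(ct, cbct)
```

In the full algorithm this correction is re-estimated at every Demons iteration as the overlap improves, which is what makes it robust to spatially varying scatter-induced errors.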
Atmospheric monitoring in MAGIC and data corrections
NASA Astrophysics Data System (ADS)
Fruck, Christian; Gaug, Markus
2015-03-01
A method for analyzing the returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. The method allows the transmission through the atmospheric boundary layer, as well as through thin cloud layers, to be calculated by applying exponential fits to regions of the backscatter signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections extend the effective observation time of MAGIC by including data taken under adverse atmospheric conditions. In the future, they will also help reduce the systematic uncertainties of energy and flux.
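The fitting idea, exponential fits to Rayleigh-dominated stretches of the return, with the offset between the fits giving the layer transmission, can be sketched on a noise-free synthetic profile (all numbers are illustrative):

```python
import numpy as np

def log_fit_intercept(r, s):
    """Linear fit to log(signal) vs range; returns the intercept."""
    slope, intercept = np.polyfit(r, np.log(s), 1)
    return intercept

r = np.linspace(1.0, 8.0, 200)          # range, km
sigma_mol = 0.012                       # molecular extinction, km^-1
t2_layer = 0.7                          # true two-way layer transmission
s = np.exp(-2.0 * sigma_mol * r)        # range-corrected Rayleigh return
s[r > 4.0] *= t2_layer                  # thin cloud layer at 4 km

below = r < 3.5                         # Rayleigh-dominated, below the layer
above = r > 4.5                         # Rayleigh-dominated, above the layer
t2_est = np.exp(log_fit_intercept(r[above], s[above])
                - log_fit_intercept(r[below], s[below]))
```

With noise, the fit windows must be chosen where the signal is genuinely molecular, which is the part of the real analysis this sketch glosses over.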
NASA Technical Reports Server (NTRS)
Green, Sheldon; Boissoles, J.; Boulet, C.
1988-01-01
The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close-coupling (i.e., numerically exact) values were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines up to R(10). IOS values are less accurate but, owing to their simplicity, may nonetheless prove useful, as has recently been demonstrated.
Characterization of Lorenz number with Seebeck coefficient measurement
Kim, Hyun -Sik; Gibbs, Zachary M.; Tang, Yinglu; ...
2015-04-01
In analyzing zT improvements due to lattice thermal conductivity (κL) reduction, electrical conductivity (σ) and total thermal conductivity (κTotal) are often used to estimate the electronic component of the thermal conductivity (κE) and in turn κL from κL ≈ κTotal - LσT. The Wiedemann-Franz law, κE = LσT, where L is the Lorenz number, is widely used to estimate κE from σ measurements. It is common practice to treat L as a universal factor equal to 2.44 × 10⁻⁸ WΩK⁻² (the degenerate limit). However, significant deviations from the degenerate limit (approximately 40% or more for Kane bands) are known to occur for non-degenerate semiconductors, where L converges to 1.5 × 10⁻⁸ WΩK⁻² for acoustic phonon scattering. The decrease in L is correlated with an increase in thermopower (the absolute value of the Seebeck coefficient, S). Thus, a first-order correction to the degenerate limit of L can be based on the measured thermopower |S|, independent of temperature or doping. We propose the equation L = 1.5 + exp[-|S|/116] (where L is in 10⁻⁸ WΩK⁻² and S in μV/K) as a satisfactory approximation for L. This equation is accurate within 5% for the single parabolic band/acoustic phonon scattering assumption and within 20% for PbSe, PbS, PbTe and Si₀.₈Ge₀.₂, where more complexity is introduced, such as non-parabolic Kane bands, multiple bands, and/or alternate scattering mechanisms. The use of this equation for L, rather than a constant value, when the detailed band structure and scattering mechanism are not known will significantly improve the estimation of lattice thermal conductivity.
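The proposed approximation and its use in the Wiedemann-Franz estimate are easy to state in code (units as in the text; the example conductivity and temperature are illustrative):

```python
import math

def lorenz_number(seebeck_uv_per_k):
    """L = 1.5 + exp(-|S|/116), with L in 1e-8 W Ohm K^-2 and S in uV/K."""
    return 1.5 + math.exp(-abs(seebeck_uv_per_k) / 116.0)

def kappa_electronic(sigma_s_per_m, temperature_k, seebeck_uv_per_k):
    """Electronic thermal conductivity kappa_E = L*sigma*T, in W m^-1 K^-1."""
    return lorenz_number(seebeck_uv_per_k) * 1e-8 * sigma_s_per_m * temperature_k

# S -> 0 gives L = 2.5, close to the degenerate limit 2.44; large |S|
# (non-degenerate) approaches the acoustic-phonon limit 1.5
l_degenerate = lorenz_number(0.0)
l_nondegenerate = lorenz_number(400.0)
```

Subtracting `kappa_electronic` from a measured total thermal conductivity then gives the lattice contribution κL.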
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimation from four data sets: 1 arc sec National Elevation Data (NED), SRTM-derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m) with derived vegetation index (VI), and the NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces the distribution of scattering phase centers from SRTM-NED in three dominant forest types: evergreen conifer, deciduous, and mixed stands. The second fusion technique integrates USDA Forest Service Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm that transforms the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001, RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of the scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities
NASA Astrophysics Data System (ADS)
Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.
2012-03-01
A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft-tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. 
A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increased contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streak artifacts. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated yet practical approach to identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, S.; Park, S.; Makowski, L.
Small angle X-ray scattering (SAXS) is an increasingly powerful technique to characterize the structure of biomolecules in solution. We present a computational method for accurately and efficiently computing the solution scattering curve from a protein with dynamical fluctuations. The method is built upon a coarse-grained (CG) representation of the protein. This CG approach takes advantage of the low-resolution character of solution scattering. It allows rapid determination of the scattering pattern from conformations extracted from CG simulations, yielding a scattering characterization of the protein's conformational landscape. Important elements incorporated in the method include an effective residue-based structure factor for each amino acid, an explicit treatment of the hydration layer at the surface of the protein, and an ensemble average of scattering from all accessible conformations to account for macromolecular flexibility. The CG model is calibrated and shown to accurately reproduce the experimental scattering curve of hen egg white lysozyme. We then illustrate the computational method by calculating the solution scattering patterns of several representative protein folds and multiple conformational states. The results suggest that solution scattering data, when combined with a reliable computational method, have great potential for a better structural description of multi-domain complexes in different functional states, and for recognizing structural folds when sequence similarity to a protein of known structure is low. Possible applications of the method are discussed.
Combined Henyey-Greenstein and Rayleigh phase function.
Liu, Quanhua; Weng, Fuzhong
2006-10-01
The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essentially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity (scalar) radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to weakly asymmetric scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications in both intensity and polarimetric radiative transfer. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weakly asymmetric scattering are generally below 0.02 K when the HG-Rayleigh phase function is used. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
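The exact combined HG-Rayleigh form is given in the paper; as a companion sketch (function names and the quadrature check are my own, not the paper's), the code below implements only the two ingredient phase functions and verifies numerically that HG preserves its asymmetry factor g while Rayleigh has zero asymmetry:

```python
import numpy as np

def hg_phase(mu, g):
    """Henyey-Greenstein phase function of mu = cos(theta), normalized
    so that 0.5 * integral of P over mu in [-1, 1] equals 1."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def rayleigh_phase(mu):
    """Rayleigh (dipole) phase function, same normalization."""
    return 0.75 * (1.0 + mu**2)

def asymmetry_factor(phase, n=20001):
    """First moment <cos theta> by trapezoidal quadrature over mu."""
    mu = np.linspace(-1.0, 1.0, n)
    w = np.full(n, 2.0 / (n - 1))   # trapezoid weights on a uniform grid
    w[0] *= 0.5
    w[-1] *= 0.5
    p = phase(mu)
    norm = 0.5 * np.sum(w * p)
    return 0.5 * np.sum(w * mu * p) / norm
```

A combined function must reproduce this behavior: the correct first moment of HG together with the dipole angular shape of Rayleigh in the small-particle limit.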
NASA Technical Reports Server (NTRS)
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1), and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is required is not only a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in the case of a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, S; Meredith, R; Azure, M
Purpose: To support the phase I trial for toxicity, biodistribution and pharmacokinetics of intra-peritoneal (IP) ²¹²Pb-TCMC-trastuzumab in patients with HER-2 expressing malignancy, a whole body gamma camera imaging method was developed for estimating the amount of ²¹²Pb-TCMC-trastuzumab remaining in the peritoneal cavity. Methods: ²¹²Pb decays to ²¹²Bi via beta emission, and ²¹²Bi emits an alpha particle with an average energy of 6.1 MeV. The 238.6 keV gamma ray, with a 43.6% yield, can be exploited for imaging. An initial phantom was made of saline bags containing ²¹²Pb. Images were collected in a 238.6 keV window with a medium energy general purpose collimator. There are other high energy gamma emissions (e.g. 511 keV, 8%; 583 keV, 31%) that penetrate the septa of the collimator and contribute scatter into the 238.6 keV window. An upper scatter window was used to correct for these high energy gammas. Results: A small source containing ²¹²Pb can be easily visualized. Scatter correction on images of a small ²¹²Pb source resulted in a ~50% reduction in the full width at tenth maximum (FWTM), while the change in full width at half maximum (FWHM) was <10%. For photopeak images, substantial scatter around the phantom source extended to >5 cm outside it; scatter correction improved image contrast by removing this scatter around the sources. Patient imaging in the first cohort (n=3) showed little redistribution of ²¹²Pb-TCMC-trastuzumab out of the peritoneal cavity. Compared to the early post-treatment images, the 18-hour post-injection images illustrated a shift to a more uniform anterior/posterior abdominal distribution and a loss of intensity due to radioactive decay. Conclusion: Use of a medium energy collimator, a 15% window around the 238.6 keV photopeak, and a 7.5% upper scatter window is adequate for quantification of ²¹²Pb radioactivity inside the peritoneal cavity for alpha radioimmunotherapy of ovarian cancer. Research Support: AREVA Med, NIH 1UL1RR025777-01.
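The upper-window correction described here is, in outline, a window-width-scaled subtraction of the scatter-window image from the photopeak image. The sketch below is a generic implementation of that idea (the scaling factor k and the function name are assumptions, not values from the abstract):

```python
import numpy as np

def upper_window_scatter_correct(peak_img, upper_img, w_peak_keV, w_upper_keV, k=1.0):
    """Subtract a scatter estimate from the photopeak image.

    Counts in the upper scatter window are scaled by the ratio of window
    widths to estimate septal-penetration/scatter counts that fall inside
    the photopeak window; k is a hypothetical tuning factor. Negative
    corrected counts are clipped to zero."""
    scatter_est = k * np.asarray(upper_img, float) * (w_peak_keV / w_upper_keV)
    return np.clip(np.asarray(peak_img, float) - scatter_est, 0.0, None)
```

With the windows quoted in the abstract (15% of 238.6 keV for the photopeak, 7.5% for the upper window) the width ratio is 2, so each scatter-window count removes two photopeak counts before clipping.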
Extending generalized Kubelka-Munk to three-dimensional radiative transfer.
Sandoval, Christopher; Kim, Arnold D
2015-08-10
The generalized Kubelka-Munk (gKM) approximation is a linear transformation of the double spherical harmonics of order one (DP1) approximation of the radiative transfer equation. Here, we extend the gKM approximation to study problems in three-dimensional radiative transfer. In particular, we derive the gKM approximation for the problem of collimated beam propagation and scattering in a plane-parallel slab composed of a uniform absorbing and scattering medium. The result is an 8×8 system of partial differential equations that is much easier to solve than the radiative transfer equation. We compare the solutions of the gKM approximation with Monte Carlo simulations of the radiative transfer equation to identify the range of validity for this approximation. We find that the gKM approximation is accurate for isotropic scattering media that are sufficiently thick and much less accurate for anisotropic, forward-peaked scattering media.
Evaluation of atmospheric correction algorithms for processing SeaWiFS data
NASA Astrophysics Data System (ADS)
Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent
2005-08-01
To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (CoastWatch and NOS) evaluated various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering to be negligible (an invalid assumption in coastal waters), the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation showed that the near-infrared correction developed by Stumpf et al. (2003) results in the overall minimum error for U.S. waters. As of July 2004, NASA (SEADAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.
NASA Astrophysics Data System (ADS)
Dong, Fang
1999-09-01
The research described in this dissertation concerns characterization of tissue microstructure using a system-independent spatial autocorrelation function (SAF). The function was determined using a reference phantom method, which employed a well-defined ``point-scatterer'' reference phantom to account for instrumental factors. SAFs were estimated for several tissue-mimicking (TM) phantoms and fresh dog livers. Both phantom tests and in vitro dog liver measurements showed that the reference phantom method is relatively simple and fairly accurate, provided the bandwidth of the measurement system is sufficient for the size of the scatterers involved in the scattering process. Implementation of this method in a clinical scanner requires that distortions from the patient's body wall be properly accounted for. SAFs were estimated for two phantoms with body-wall-like distortions. The experimental results demonstrated that body wall distortions have little effect if echo data are acquired from a large scattering volume. One interesting application of the SAF is to form a ``scatterer size image''. The scatterer size image may help provide diagnostic tools for diseases in which the tissue microstructure differs from normal. Another method, the BSC method, utilizes information contained in the frequency dependence of the backscatter coefficient to estimate the scatterer size. The SAF technique produced accurate scatterer size images of homogeneous TM phantoms, and the BSC method was capable of generating accurate size images for heterogeneous phantoms. In the scatterer size image of dog kidneys, the contrast-to-noise ratio (CNR) between renal cortex and medulla was improved dramatically compared to the gray-scale image. The effect of nonlinear propagation was investigated by using a custom-designed phantom with an overlying TM fat layer. The results showed that the correlation length decreased when the transmitting power increased.
The measurement results support the assumption that nonlinear propagation generates harmonic energy and causes underestimation of scatterer diameters. Nonlinear propagation can be further enhanced by materials with a high B/A value, a parameter which characterizes the degree of nonlinearity. Nine versions of TM fat and non-fat materials were measured for their B/A values using a new measurement technique, the ``simplified finite amplitude insertion substitution'' (SFAIS) method.
SU-G-TeP3-02: Determination of Geometry-Specific Backscatter Factors for Radiobiology Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viscariello, N; Culberson, W; Lawless, M
2016-06-15
Purpose: Radiation biology research relies on an accurate radiation dose delivered to the biological target. Large field irradiations in a cabinet irradiator may use the AAPM TG-61 protocol, which relies on an air-kerma measurement and conversion to absorbed dose to water (Dw) on the surface of a water phantom using the backscatter factors provided by the protocol. Cell or small animal studies differ significantly from this reference geometry. This study aims to determine the impact of the lack of full scatter conditions in four representative geometries that may be used in radiobiology studies. Methods: MCNP6 was used to model Dw on the surface of a full scatter phantom in a validated orthovoltage x-ray reference beam. Dw in a cylindrical mouse, a 100 mm Petri dish, and 6-well and 96-well cell culture dishes was simulated and compared to this full scatter geometry. A reference dose rate was measured using the TG-61 protocol in a cabinet irradiator. This nominal dose rate was used to irradiate TLDs in each phantom to a given dose. Doses were obtained based on TLDs calibrated in a NIST-traceable beam. Results: Compared to full scattering conditions, the simulated dose to water in the representative geometries was lower by 12-26%. The discrepancy was smallest for the cylindrical mouse geometry, which most closely approximates adequate lateral scatter and backscatter. TLDs irradiated in the mouse and Petri dish phantoms using the TG-61-determined dose rate showed similarly lower values of Dw. When corrected for this discrepancy, they agreed with the predicted Dw within 5%. Conclusion: Using the TG-61 in-air protocol and the provided backscatter factors to determine a reference dose rate in a biological irradiator may not be appropriate given the difference in scattering conditions between irradiation and calibration. Without accounting for this, the dose rate is overestimated and is dependent on irradiation geometry.
Transient radiative transfer in a scattering slab considering polarization.
Yi, Hongliang; Ben, Xun; Tan, Heping
2013-11-04
The characteristics of transient behavior and polarization must both be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. In order to improve computational efficiency and accuracy, a time shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, and excellent agreement is observed, which validates the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.
NASA Astrophysics Data System (ADS)
Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.
2017-08-01
Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Atmospheric Effect on Remote Sensing of the Earth's Surface
NASA Technical Reports Server (NTRS)
Fraser, R. S.; Kaufman, Y. J. (Principal Investigator)
1985-01-01
Radiative transfer theory (RT) for an atmosphere with a nonuniform surface is the basis for understanding and correcting for the atmospheric effect on remote sensing of surface properties. In the present work the theory is generalized and tested successfully against laboratory and field measurements. There is still a need to generalize the RT approximation for off-nadir directions and to take into account anisotropic reflectance at the surface. The adjacency effect results in a significant modification of spectral signatures of the surface, and therefore in modification of classifications, of the separability of field classes, and of spatial resolution. For example, the 30 m resolution of the Thematic Mapper is reduced to 100 m by a hazy atmosphere. The adjacency effect depends on several optical parameters of aerosols: optical thickness, depth of the aerosol layer, scattering phase function, and absorption. Remote sensing in general depends on these parameters, not just through adjacency effects, but they are not known well enough for making accurate atmospheric corrections. It is important to establish methods for estimating these parameters in order to develop correction methods for atmospheric effects. Such estimates can be based on climatological data (which are not yet available), on correlations between the optical parameters and meteorological data, and on the same satellite measurements of radiances that are used for estimating surface properties. Knowledge of the atmospheric parameters important for remote sensing is being enlarged by current measurements.
Analysis of the electromagnetic scattering from an inlet geometry with lossy walls
NASA Technical Reports Server (NTRS)
Myung, N. H.; Pathak, P. H.; Chunang, C. D.
1985-01-01
One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating first-order scatter in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from the scatter simulation were compared with those from an actual scatter measurement. Two phantoms, with homogeneous and heterogeneous material distributions, were used in the scatter simulation and measurement. The simulated scatter profile was found to be in agreement with the measured result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles, and image quality improved significantly.
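The core of such a deterministic scatter model is a ray-traced attenuation line integral through the object. The sketch below is a simple uniform-sampling stand-in for an exact Siddon-style tracer (all names and the 2-D geometry are illustrative, not from the paper); it computes exp(-∫μ dl) for the primary beam on a 2-D attenuation map:

```python
import numpy as np

def ray_line_integral(mu_map, voxel_size, src, dst, n_samples=200):
    """Approximate integral of the linear attenuation coefficient (1/mm)
    along the segment src -> dst (physical (y, x) coordinates in mm)
    through a 2-D voxel map, by midpoint sampling at n_samples points.
    Samples falling outside the map contribute zero attenuation."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = src[None, :] + ts[:, None] * (dst - src)[None, :]
    idx = np.floor(pts / voxel_size).astype(int)
    ny, nx = mu_map.shape
    inside = (idx[:, 0] >= 0) & (idx[:, 0] < ny) & (idx[:, 1] >= 0) & (idx[:, 1] < nx)
    mu = np.where(inside,
                  mu_map[np.clip(idx[:, 0], 0, ny - 1), np.clip(idx[:, 1], 0, nx - 1)],
                  0.0)
    seg_len = np.linalg.norm(dst - src) / n_samples
    return float(np.sum(mu) * seg_len)

def primary_transmission(mu_map, voxel_size, src, dst):
    """Beer-Lambert primary transmission exp(-integral of mu dl)."""
    return float(np.exp(-ray_line_integral(mu_map, voxel_size, src, dst)))
```

A first-order scatter estimate then follows by summing, over each voxel, the primary fluence reaching that voxel, the differential scattering cross section toward the detector, and the attenuation along the voxel-to-detector path.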
Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery.
Engelsman, M; Lu, H M; Herrup, D; Bussiere, M; Kooy, H M
2009-06-01
Proton radiotherapy centers that currently use passively scattered proton beams perform field-specific calibrations for a non-negligible fraction of treatment fields, which is time- and resource-consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that require only one hour per week. This manuscript reports on a protocol for the generation of desired SOBPs and prediction of dose output.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar
2011-01-01
A precise knowledge of the interior structure of asteroids, comets, and Near Earth Objects (NEOs) is important to assess the consequences of their impacts with the Earth and to develop efficient mitigation strategies. Knowledge of their interior structure also provides opportunities for extraction of raw materials for future space activities. Low frequency radio sounding is often proposed for investigating the interior structures of asteroids and NEOs. For designing and optimizing a radio sounding instrument it is advantageous to have an accurate and efficient numerical simulation model of radio reflection from and transmission through large bodies of asteroid-like shape. In this presentation we present an electromagnetic (EM) scattering analysis of electrically large asteroids using (1) a weak form formulation and (2) a more accurate hybrid finite element method/method of moments (FEM/MoM) to help estimate their internal structures. Assuming an internal structure with known electrical properties for a sample asteroid, we first develop its forward EM scattering model. From the knowledge of EM scattering as a function of frequency and look angle we then present the inverse scattering procedure used to extract an image of its interior structure. The validity of the inverse scattering procedure is demonstrated through a few simulation examples.
NASA Astrophysics Data System (ADS)
Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert
2018-04-01
Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2013-12-01
Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high accuracy, wide coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large-baseline data. Here we present a new physically based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We apply the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network.
Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.
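The maximum likelihood SCR estimator described in the abstract is not reproduced here, but the classical amplitude-dispersion criterion it improves upon (Ferretti et al.) can be sketched in a few lines: for high-SCR pixels the phase noise standard deviation is approximately the amplitude dispersion D_A = sigma_A / mu_A, so thresholding D_A flags stable scatterers. The threshold value 0.25 is a commonly quoted choice, not one taken from this paper.

```python
import numpy as np

def select_ps_amplitude_dispersion(amp_stack, threshold=0.25):
    """Select persistent-scatterer (PS) candidates from a stack of SAR
    amplitude images using the amplitude-dispersion index
    D_A = sigma_A / mu_A. For high signal-to-clutter ratio (SCR), phase
    noise std is approximately D_A, so a low D_A flags stable scatterers.

    amp_stack : (n_scenes, rows, cols) array of amplitudes
    returns   : boolean PS mask and the D_A map
    """
    mu = amp_stack.mean(axis=0)
    sigma = amp_stack.std(axis=0)
    # Guard against zero-mean pixels (no signal): mark them as non-PS.
    d_a = np.where(mu > 0, sigma / np.where(mu > 0, mu, 1.0), np.inf)
    return d_a < threshold, d_a
```

A pixel with constant amplitude across scenes has D_A = 0 and is selected; a strongly fluctuating clutter pixel is rejected.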
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
A Phenomenological Determination of the Pion-Nucleon Scattering Lengths from Pionic Hydrogen
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Wycech, S.
A model-independent expression for the electromagnetic corrections to a phenomenological hadronic pion-nucleon (πN) scattering length a^h, extracted from pionic hydrogen, is obtained. In a non-relativistic approach and using an extended charge distribution, these corrections are derived up to terms of order α² log α in the limit of a short-range hadronic interaction. We infer a^h(π⁻p) = 0.0870(5) m_π⁻¹, which through the GMO relation gives the πNN coupling g²(π±pn)/(4π) = 14.04(17).
Corneal topometry by fringe projection: limits and possibilities
NASA Astrophysics Data System (ADS)
Windecker, Robert; Tiziani, Hans J.; Thiel, H.; Jean, Benedikt J.
1996-01-01
A fast and accurate measurement of corneal topography is an important task, especially since laser-induced corneal reshaping has come into use for the correction of ametropia. The classical measuring system uses Placido rings for the measurement and calculation of the topography or local curvatures. Another approach is the projection of a known fringe map imaged onto the surface under a certain angle of incidence. We present a set-up using telecentric illumination and detection units. With a special grating we obtain a synthetic wavelength with a nearly sinusoidal profile. In combination with very fast data acquisition, the topography can be evaluated using a special self-normalizing phase evaluation algorithm. It calculates local Fourier coefficients and corrects errors caused by imperfect illumination or inhomogeneous scattering by fringe normalization. The topography can be determined over 700 × 256 pixels. The set-up is suitable for measuring optically rough silicone replicas of the human cornea as well as the cornea in vivo over a field of 8 mm and more. The resolution is mainly limited by noise and is better than two micrometers. We discuss the principal benefits and drawbacks compared with the standard Placido technique.
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
NASA Astrophysics Data System (ADS)
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menegotti, L.; Delana, A.; Martignano, A.
Film dosimetry is an attractive tool for dose distribution verification in intensity-modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel-value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used for both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled-device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. Comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse-pyramid dose pattern show an increase in the percentage of points passing the gamma analysis (tolerance parameters of 3% and 3 mm): from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 open field, and from 76% and 75% to 91% for the inverse-pyramid pattern. Application to an IMRT beam also shows better gamma index results, from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and the dose range considered for correction and calibration appear to be appropriate for use in IMRT verification. The method proved to be fast, corrected the nonuniformity properly, and has been adopted for routine clinical IMRT dose verification.
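The gamma analysis mentioned above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1-D global-gamma sketch (not the authors' implementation; real film comparisons are 2-D and typically interpolate the evaluated distribution) looks like this:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma analysis (3%/3 mm by default).
    ref_dose, eval_dose : dose profiles sampled on the same grid `positions` (mm)
    Dose difference is normalised to the reference maximum (global gamma).
    Returns the fraction of reference points with gamma <= 1.
    """
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    positions = np.asarray(positions, float)
    d_norm = dose_tol * ref_dose.max()
    gammas = []
    for i, x in enumerate(positions):
        # Gamma at a reference point is the minimum, over all evaluated
        # points, of the combined dose-difference / DTA metric.
        dd = (eval_dose - ref_dose[i]) / d_norm
        dx = (positions - x) / dist_tol
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float(np.mean(np.asarray(gammas) <= 1.0))
```

Identical profiles give a 100% pass rate; a uniform 100% dose error fails everywhere.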
Development of PET projection data correction algorithm
NASA Astrophysics Data System (ADS)
Bazhanov, P. V.; Kotina, E. D.
2017-12-01
Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and the function of internal organs, enabling diagnosis at an early stage of disease. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, as well as an algorithm for modelling the acquisition of PET projection data used to verify the corrections.
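One standard input to random-coincidence correction, shown here as a sketch rather than the algorithm of this paper, is the singles-based estimate of the randoms rate for a detector pair: R_ij = 2·τ·S_i·S_j, where τ is the coincidence timing window and S_i, S_j the singles rates.

```python
def randoms_rate(singles_i, singles_j, coinc_window_ns):
    """Singles-based estimate of the random-coincidence rate (counts/s)
    for a PET detector pair: R_ij = 2 * tau * S_i * S_j, with tau the
    coincidence timing window and S_i, S_j the singles rates (counts/s).
    """
    tau_s = coinc_window_ns * 1e-9  # convert ns to seconds
    return 2.0 * tau_s * singles_i * singles_j
```

For example, two detectors each seeing 1e5 singles/s inside a 6 ns window accumulate about 120 random coincidences per second.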
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Early Calibration Results of CYGNSS Mission
NASA Astrophysics Data System (ADS)
Balasubramaniam, R.; Ruf, C. S.; McKague, D. S.; Clarizia, M. P.; Gleason, S.
2017-12-01
The first complete GNSS-R orbital mission of its kind, CYGNSS, was successfully launched on 15 December 2016. The goal of the mission is to accurately forecast the intensification of tropical cyclones by modelling their inner core. The eight CYGNSS micro-observatories each carry a passive instrument called the Delay Doppler Mapping Instrument (DDMI). The DDMIs form a 2D representation of the forward-scattered power signal called the Delay-Doppler Map (DDM). Each DDMI outputs 4 DDMs per second, which are compressed and sent to the ground, resulting in a total of 32 sea-surface measurements produced by the CYGNSS constellation per second. These are subsequently used in the Level-2 wind retrieval algorithm to extract wind speed information. In this paper, we perform calibration and validation of CYGNSS measurements for accurate extraction of wind speed information. The calibration stage involves identifying and correcting for the dependence of the CYGNSS observables, namely the Normalised Bistatic Radar Cross Section and the Leading Edge Slope of the Integrated Delay Waveform, on instrument parameters, geometry, etc. The validation stage involves training the Geophysical Model Function over a multitude of ground-truth sources during the Atlantic hurricane season, as well as refined validation of high-wind-speed data products.
NASA Astrophysics Data System (ADS)
Saquet, E.; Emelyanov, N.; Robert, V.; Arlot, J.-E.; Anbazhagan, P.; Baillié, K.; Bardecker, J.; Berezhnoy, A. A.; Bretton, M.; Campos, F.; Capannoli, L.; Carry, B.; Castet, M.; Charbonnier, Y.; Chernikov, M. M.; Christou, A.; Colas, F.; Coliac, J.-F.; Dangl, G.; Dechambre, O.; Delcroix, M.; Dias-Oliveira, A.; Drillaud, C.; Duchemin, Y.; Dunford, R.; Dupouy, P.; Ellington, C.; Fabre, P.; Filippov, V. A.; Finnegan, J.; Foglia, S.; Font, D.; Gaillard, B.; Galli, G.; Garlitz, J.; Gasmi, A.; Gaspar, H. S.; Gault, D.; Gazeas, K.; George, T.; Gorda, S. Y.; Gorshanov, D. L.; Gualdoni, C.; Guhl, K.; Halir, K.; Hanna, W.; Henry, X.; Herald, D.; Houdin, G.; Ito, Y.; Izmailov, I. S.; Jacobsen, J.; Jones, A.; Kamoun, S.; Kardasis, E.; Karimov, A. M.; Khovritchev, M. Y.; Kulikova, A. M.; Laborde, J.; Lainey, V.; Lavayssiere, M.; Le Guen, P.; Leroy, A.; Loader, B.; Lopez, O. C.; Lyashenko, A. Y.; Lyssenko, P. G.; Machado, D. I.; Maigurova, N.; Manek, J.; Marchini, A.; Midavaine, T.; Montier, J.; Morgado, B. E.; Naumov, K. N.; Nedelcu, A.; Newman, J.; Ohlert, J. M.; Oksanen, A.; Pavlov, H.; Petrescu, E.; Pomazan, A.; Popescu, M.; Pratt, A.; Raskhozhev, V. N.; Resch, J.-M.; Robilliard, D.; Roschina, E.; Rothenberg, E.; Rottenborn, M.; Rusov, S. A.; Saby, F.; Saya, L. F.; Selvakumar, G.; Signoret, F.; Slesarenko, V. Y.; Sokov, E. N.; Soldateschi, J.; Sonka, A.; Soulie, G.; Talbot, J.; Tejfel, V. G.; Thuillot, W.; Timerson, B.; Toma, R.; Torsellini, S.; Trabuco, L. L.; Traverse, P.; Tsamis, V.; Unwin, M.; Abbeel, F. Van Den; Vandenbruaene, H.; Vasundhara, R.; Velikodsky, Y. I.; Vienne, A.; Vilar, J.; Vugnon, J.-M.; Wuensche, N.; Zeleny, P.
2018-03-01
During the 2014-2015 mutual events season, the Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE), Paris, France, and the Sternberg Astronomical Institute (SAI), Moscow, Russia, led an international observation campaign to record ground-based photometric observations of Galilean moon mutual occultations and eclipses. We focused on processing the complete photometric observation database to compute new accurate astrometric positions. We used our method to derive astrometric positions from the light curves of the events. We developed an accurate photometric model of mutual occultations and eclipses, correcting for the satellite albedos, Hapke's light-scattering law, the phase effect, and the limb darkening. We processed 609 light curves, and we compared the observed positions of the satellites with the theoretical positions from the IMCCE NOE-5-2010-GAL satellite ephemerides and the INPOP13c planetary ephemeris. The standard deviation after fitting the light curves in equatorial positions is ±24 mas, or 75 km at Jupiter. The rms (O-C) in equatorial positions is ±50 mas, or 150 km at Jupiter.
NASA Astrophysics Data System (ADS)
Giap, Huan Bosco
Accurate calculation of absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty in obtaining an accurate patient-specific 3-D activity map in vivo and calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with an I-131 dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to the true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0% to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for liver, 13.7% for spleen, and 0.9% for the tumor. Good agreement (percent differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements. 
More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to tumor without exceeding the toxicity limits of normal tissues.
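The core numerical step of the methodology, convolving the activity map with a dose-point kernel via a 3-D FFT, can be sketched as below. The kernel values themselves (Gy per decay as a function of distance) are not reproduced; the arrays are assumed to share one voxel grid, with the kernel centred in its array.

```python
import numpy as np

def absorbed_dose_map(activity, kernel):
    """Convolve a 3-D cumulated-activity map with a dose-point kernel
    using FFTs. The kernel is assumed centred in its array, so it is
    shifted to the origin before transforming; the product of transforms
    is then inverted to give the (circular) convolution.
    """
    k = np.fft.ifftshift(kernel)  # move kernel centre to the array origin
    dose = np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(k)).real
    return dose
```

A point source of activity reproduces the kernel scaled by the source strength, which is a convenient sanity check of the shift convention. Note that FFT convolution is circular; in practice the arrays are zero-padded to avoid wrap-around dose.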
Structural Significance of Lipid Diversity as Studied by Small Angle Neutron and X-ray Scattering
Kučerka, Norbert; Heberle, Frederick A.; Pan, Jianjun; ...
2015-09-21
In this paper, we review recent developments in the rapidly growing field of membrane biophysics, with a focus on the structural properties of single lipid bilayers determined by different scattering techniques, namely neutron and X-ray scattering. The need for accurate lipid structural properties is emphasized by the sometimes conflicting results found in the literature, even in the case of the most studied lipid bilayers. Increasingly, accurate and detailed structural models require more experimental data, such as those from contrast varied neutron scattering and X-ray scattering experiments that are jointly refined with molecular dynamics simulations. This experimental and computational approach produces robust bilayer structural parameters that enable insights, for example, into the interplay between collective membrane properties and its components (e.g., hydrocarbon chain length and unsaturation, and lipid headgroup composition). Finally, from model studies such as these, one is better able to appreciate how a real biological membrane can be tuned by balancing the contributions from the lipid’s different moieties (e.g., acyl chains, headgroups, backbones, etc.).
The generalized scattering coefficient method for plane wave scattering in layered structures
NASA Astrophysics Data System (ADS)
Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song
2017-02-01
The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.
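For a concrete baseline, the textbook characteristic-matrix (transfer-matrix) calculation for normal-incidence plane waves through a homogeneous layer stack can be sketched as follows. This is the conventional method the abstract compares against, not the GSC formulation itself, which is constructed specifically for better numerical stability.

```python
import numpy as np

def reflectance(n_layers, d_layers, n_in, n_out, wavelength):
    """Normal-incidence reflectance of a stack of homogeneous layers via
    the characteristic-matrix method.
    n_layers, d_layers : refractive indices and thicknesses of the layers
    n_in, n_out        : semi-infinite incidence and exit media
    """
    k0 = 2 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d  # phase thickness of one layer
        # Characteristic matrix of a single homogeneous layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    num = n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
    return abs(num / den) ** 2
```

With no layers this reduces to the Fresnel result ((n_in − n_out)/(n_in + n_out))², and a quarter-wave layer with n = sqrt(n_in·n_out) gives zero reflectance, both handy checks.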
Kaniu, M I; Angeyo, K H; Mwala, A K; Mwangi, F K
2012-08-30
Soil quality assessment (SQA) calls for rapid, simple and affordable but accurate analysis of soil quality indicators (SQIs). Routine methods of soil analysis are tedious and expensive. Energy-dispersive X-ray fluorescence and scattering (EDXRFS) spectrometry in conjunction with chemometrics is a potentially powerful method for rapid SQA. In this study, a 25 mCi ¹⁰⁹Cd isotope-source XRF spectrometer was used to realize EDXRFS spectrometry of soils. Glycerol (a simulant of "organic" soil solution) and kaolin (a model clay soil) doped with soil micro (Fe, Cu, Zn) and macro (NO₃⁻, SO₄²⁻, H₂PO₄⁻) nutrients were used to train multivariate chemometric calibration models for direct (non-invasive) analysis of SQIs based on partial least squares (PLS) and artificial neural networks (ANN). The techniques were compared for each SQI with respect to speed, robustness, correction ability for matrix effects, and resolution of spectral overlap. The method was then applied to perform direct rapid analysis of SQIs in field soils. A one-way ANOVA test showed no statistical difference at the 95% confidence interval between PLS and ANN results compared to reference soil nutrients. PLS was more accurate in analyzing C, N, Na, P and Zn (R² > 0.9, with low SEP of 0.05%, 0.01%, 0.01%, and 1.98 μg g⁻¹, respectively), while ANN was better suited for the analysis of Mg, Cu and Fe (R² > 0.9 and SEP of 0.08%, 4.02 μg g⁻¹, and 0.88 μg g⁻¹, respectively). Copyright © 2012 Elsevier B.V. All rights reserved.
Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru
2018-06-01
This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of ¹²³I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered-subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We bundled DaTView (Southampton method) with the CSF-Mask processing software for SBR. We assessed SBR using various coefficients (f factors) of the CSF-Mask. SBRs of 1, 2, 3, 4, and 5 in the SDB phantom simulations served as true values. Measured SBRs were underestimated by more than 50% as the EI increased relative to the true SBR, and this trend was most pronounced at low SBR. The CSF-Mask improved the 20% underestimation and brought the measured SBR closer to the true values at an f factor of 1.0 despite the increase in EI. We connected the EI and the f factor through the linear regression function y = -3.53x + 1.95 (r = 0.95), obtained using the root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.
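The SBR itself is a simple ratio of striatal to nondisplaceable reference uptake. A minimal sketch follows; note that the Southampton method used in the study additionally counts all activity inside a large VOI around the striatum to compensate for partial-volume effects, which this sketch omits.

```python
import numpy as np

def specific_binding_ratio(striatal_counts, reference_counts):
    """Specific binding ratio as used in dopamine-transporter SPECT:
    SBR = (mean striatal concentration / mean reference concentration) - 1,
    where the reference region represents nondisplaceable uptake.
    """
    c_str = float(np.mean(striatal_counts))
    c_ref = float(np.mean(reference_counts))
    return c_str / c_ref - 1.0
```

A striatal VOI averaging three times the reference concentration yields an SBR of 2, one of the true values simulated in the phantom study.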
Extraction of Profile Information from Cloud Contaminated Radiances. Appendixes 2
NASA Technical Reports Server (NTRS)
Smith, W. L.; Zhou, D. K.; Huang, H.-L.; Li, Jun; Liu, X.; Larar, A. M.
2003-01-01
Clouds act to reduce the signal level and may produce noise, depending on the complexity of the cloud properties and the manner in which they are treated in the profile retrieval process. There are essentially three ways to extract profile information from cloud-contaminated radiances: (1) cloud-clearing using spatially adjacent cloud-contaminated radiance measurements, (2) retrieval based upon the assumption of opaque cloud conditions, and (3) retrieval or radiance assimilation using a physically correct cloud radiative transfer model which accounts for the absorption and scattering of the radiance observed. Cloud-clearing extracts the radiance arising from the clear-air portion of partly clouded fields of view, permitting soundings to the surface or the assimilation of radiances as in the clear field-of-view case. However, the accuracy of the clear-air radiance signal depends upon the uniformity of cloud height and optical properties across the two fields of view used in the cloud-clearing process. The assumption of opaque clouds within the field of view permits relatively accurate profiles to be retrieved down to near cloud-top levels, the accuracy near the cloud top being dependent upon the actual microphysical properties of the cloud. The use of a physically correct cloud radiative transfer model enables accurate retrievals down to cloud-top levels and below semi-transparent cloud layers (e.g., cirrus). It should also be possible to assimilate cloudy radiances directly into the model given a physically correct cloud radiative transfer model, using geometric and microphysical cloud parameters retrieved from the radiance spectra as initial cloud variables in the radiance assimilation process. This presentation reviews the above three ways to extract profile information from cloud-contaminated radiances. 
NPOESS Airborne Sounder Testbed-Interferometer radiance spectra and Aqua satellite AIRS radiance spectra are used to illustrate how cloudy radiances can be used in the profile retrieval process.
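The two-FOV cloud-clearing idea in option (1) can be written in one line under the idealised assumptions the abstract itself flags (the same clear and cloudy radiances in both fields of view, differing only in cloud fraction): with cloud fractions a1 < a2 and gamma = a1/a2, the clear-column radiance is R_clear = (R1 − gamma·R2)/(1 − gamma). This is a sketch of the principle, not the operational algorithm, and gamma must be estimated in practice (e.g. from opaque channels).

```python
import numpy as np

def cloud_clear(r1, r2, gamma):
    """Two-FOV cloud clearing: if adjacent fields of view share the same
    clear and cloudy radiances but have cloud fractions a1 < a2, with
    gamma = a1/a2, then R_clear = (R1 - gamma*R2) / (1 - gamma).
    Works per-channel on arrays of radiances.
    """
    r1 = np.asarray(r1, float)
    r2 = np.asarray(r2, float)
    return (r1 - gamma * r2) / (1.0 - gamma)
```

Building synthetic radiances from a known clear value recovers it exactly, e.g. cloud fractions 0.2 and 0.5 give gamma = 0.4.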
Electron-phonon interactions in semiconductor nanostructures
NASA Astrophysics Data System (ADS)
Yu, Segi
In this dissertation, electron-phonon interactions are studied theoretically in semiconductor nanoscale heterostructures. Interactions of electrons with interface optical phonons dominate over other electron-phonon interactions in narrow-width heterostructures. Hence, a transfer matrix method is used to establish a formalism for determining the dispersion relations and electrostatic potentials of the interface phonons for multiple-interface heterostructures within the macroscopic dielectric continuum model. This method facilitates systematic calculations for complex structures where the conventional method is difficult to implement. Several specific cases are treated to illustrate the advantages of the formalism. Electrophonon resonance (EPR) is studied in cylindrical quantum wires using the confined/interface optical phonon representation and the bulk phonon representation. It has been found that the interface phonon contribution to EPR is small compared with that of confined phonons. Different selection rules for bulk phonons and confined phonons result in different EPR behaviors as the radius of the cylindrical wire changes. An experiment is suggested to test which phonon representation is appropriate for EPR. The effect of phonon confinement on electron-acoustic-phonon scattering is studied in cylindrical and rectangular quantum wires. In the macroscopic elastic continuum model, the confined-phonon dispersion relations are obtained for several crystallographic directions with free-surface and clamped-surface boundary conditions in cylindrical wires. The scattering rates due to the deformation potential are obtained for these confined phonons and are compared with those of bulk-like phonons. The results show that the inclusion of acoustic phonon confinement may be crucial for calculating accurate low-energy electron scattering rates. Furthermore, it has been found that there is a scaling rule governing the directional dependence of the scattering rates. 
The Hamiltonian describing the deformation-potential of confined acoustic phonons is derived by quantizing the appropriate, experimentally verified approximate compressional acoustic-phonon modes in a free-standing rectangular quantum wire. The scattering rate is obtained for GaAs quantum wires with a range of cross-sectional dimensions. The results demonstrate that a proper treatment of confined acoustic phonons may be essential to correctly model electron scattering rates at low energies in nanoscale structures.
Han, Buhm; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
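The MVN framework the abstract builds on treats the vector of association statistics as multivariate normal under the null, so the corrected p-value is P(max_i |Z_i| ≥ z_obs) for Z ~ N(0, Σ). A brute-force Monte Carlo sketch of that quantity is below; SLIDE itself replaces the full sampling with a sliding-window approximation plus a tail correction, neither of which is reproduced here.

```python
import numpy as np

def mvn_corrected_pvalue(z_obs, corr, n_samples=200000, seed=0):
    """Monte Carlo multiple-testing correction under the multivariate
    normal (MVN) null: returns an estimate of P(max_i |Z_i| >= z_obs)
    for Z ~ N(0, corr), where corr is the marker correlation matrix.
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n_samples)
    return float(np.mean(np.abs(z).max(axis=1) >= z_obs))
```

For independent markers this reproduces the Šidák correction 1 − (1 − p)^m; e.g. five independent markers at a per-marker two-sided p ≈ 0.01 (z ≈ 2.576) give a corrected p near 0.049, while correlated markers give something smaller.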
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
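The instrument-noise correction mentioned among the physics-based steps can be sketched simply: TI is the standard deviation of wind speed over the mean, and an assumed noise variance is subtracted from the measured variance before taking the square root. This is a generic sketch, not the L-TERRA implementation, and the noise standard deviation is a hypothetical input.

```python
import numpy as np

def lidar_ti_noise_corrected(wind_speeds, noise_std):
    """Turbulence intensity TI = sigma_u / U with a simple instrument-noise
    correction: the (assumed known) noise variance is removed from the
    measured wind-speed variance, clipped at zero so TI stays real.
    wind_speeds : sequence of wind-speed samples over a 10-min period (m/s)
    noise_std   : estimated lidar measurement-noise std (m/s)
    """
    u = np.asarray(wind_speeds, float)
    var_true = max(u.var() - noise_std ** 2, 0.0)
    return float(np.sqrt(var_true) / u.mean())
```

With zero assumed noise the function reduces to the raw TI, so the size of the correction is easy to inspect by comparing the two calls.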
Grošev, Darko; Gregov, Marin; Wolfl, Miroslava Radić; Krstonošić, Branislav; Debeljuh, Dea Dundara
2018-06-07
To make quantitative methods of nuclear medicine more available, four centres in Croatia participated in a national intercomparison study, following the materials and methods used in the previous international study organized by the International Atomic Energy Agency (IAEA). The study task was to calculate the activities of four ¹³³Ba sources (T1/2 = 10.54 years; Eγ = 356 keV) using planar and single-photon emission computed tomography (SPECT) or SPECT/CT acquisitions of the sources inside a water-filled cylindrical phantom. The sources had previously been calibrated by the US National Institute of Standards and Technology (NIST). The triple-energy window (TEW) method was used for scatter correction. Planar studies were corrected for attenuation using the conjugate-view method. For SPECT/CT studies, data from X-ray computed tomography were used for attenuation correction (CT-AC), whereas for SPECT-only acquisitions, the Chang AC method was applied. Using the lessons learned from the IAEA study, data were acquired according to the harmonized data acquisition protocol, and the acquired images were then processed using centralized data analysis. The accuracy of the activity quantification was evaluated as the ratio R between the calculated activity and the NIST value. For planar studies, R = 1.06±0.08; for the SPECT/CT study using CT-AC, R = 1.00±0.08; and for Chang AC, R = 0.89±0.12. The results are in accordance with those obtained within the larger IAEA study and confirm that the SPECT/CT method is the most appropriate for accurate activity quantification.
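The triple-energy-window correction used here (and in the I-131 study above) estimates the scatter inside the photopeak window by trapezoidal interpolation from two narrow sub-windows flanking it. A minimal sketch, with illustrative window widths rather than the protocol's actual values:

```python
def tew_scatter_corrected(c_peak, c_lower, c_upper, w_peak, w_sub):
    """Triple-energy-window (TEW) scatter correction: scatter counts in
    the photopeak window are estimated as the trapezoid
        C_scatter = (C_lower/w_sub + C_upper/w_sub) * w_peak / 2
    from two narrow sub-windows of width w_sub flanking the photopeak
    window of width w_peak (widths in keV), then subtracted.  The result
    is clipped at zero since counts cannot be negative.
    """
    c_scatter = (c_lower / w_sub + c_upper / w_sub) * w_peak / 2.0
    return max(c_peak - c_scatter, 0.0)
```

For example, 1000 photopeak counts with 30 and 10 counts in 6 keV sub-windows around a 60 keV photopeak window yield an estimated 200 scatter counts, leaving 800 primary counts.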
Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H
2018-06-06
Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in the regions of the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstructions were also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and most improved algorithm, image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.
NASA Astrophysics Data System (ADS)
Biegun, A. K.; Takatsu, J.; Nakaji, T.; van Goethem, M. J.; van der Graaf, E. R.; Koffeman, E. N.; Visser, J.; Brandenburg, S.
2016-12-01
The novel proton radiography imaging technique has large potential for direct measurement of the proton energy loss (proton stopping power, PSP) in various tissues in the patient. The uncertainty of PSPs, currently obtained by translation of X-ray computed tomography (xCT) images, should be reduced from 3-5% or higher to less than 1% to make proton treatment plans more accurate and thereby provide better treatment for the patient. With Geant4 we simulated a proton radiography detection system consisting of position-sensitive and residual-energy detectors. A complex phantom filled with various materials (including tissue surrogates) was placed between the position-sensitive detectors. The phantom was irradiated with 150 MeV protons, and the energy-loss radiograph and scattering angles were studied. Protons passing through different materials in the phantom lose energy, which was used to create a radiography image of the phantom. The multiple Coulomb scattering of a proton traversing different materials causes blurring of the image. To improve image quality and material identification in the phantom, we selected protons with small scattering angles. A good-quality proton radiography image was obtained in which various materials can be recognized accurately; in combination with xCT, this can lead to more accurate relative stopping power predictions.
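The small-scattering-angle cut described above reduces blurring from multiple Coulomb scattering. It can be sketched directly: compute each proton's deflection from its entry and exit direction vectors (as measured by the position-sensitive detectors) and keep only events below a threshold. A minimal Python sketch; the direction vectors and the 10 mrad cut are illustrative assumptions, not values from the study.

```python
import numpy as np

def scattering_angle(d_in, d_out):
    """Angle (rad) between the entry and exit proton directions.

    d_in / d_out: 3-vectors (need not be normalised); the angle is
    recovered from the clipped dot product of the unit vectors.
    """
    d_in = np.asarray(d_in, dtype=float)
    d_out = np.asarray(d_out, dtype=float)
    d_in /= np.linalg.norm(d_in)
    d_out /= np.linalg.norm(d_out)
    return float(np.arccos(np.clip(np.dot(d_in, d_out), -1.0, 1.0)))

# Three hypothetical protons: undeflected, mildly deflected, strongly deflected
angles = np.array([
    scattering_angle([0.0, 0.0, 1.0], d)
    for d in ([0.0, 0.0, 1.0], [0.005, 0.0, 1.0], [0.05, 0.0, 1.0])
])
keep = angles < 10e-3  # small-angle cut (10 mrad, assumed) reduces MCS blurring
```

Only the kept protons would then contribute to the energy-loss radiograph pixel sums.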
NASA Astrophysics Data System (ADS)
Gonzales, Matthew Alejandro
The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging because the continuous energy adjoint flux is not readily available. Traditional approaches to obtaining the adjoint flux attempt to invert the random walk process and require data corresponding to all temperatures in the system, along with their respective temperature derivatives, in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady-state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh, in which the coefficients of a polynomial fit in temperature are stored. The coefficients of the fits are generated before runtime and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections makes it possible to obtain temperature derivatives of the cross sections on-the-fly.
The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, from changes in the probability density functions, and from changes in the density of the materials. The focus of this work is the Doppler temperature feedback that results from Doppler broadening of cross sections and from changes in the probability density function within the scattering kernel. The method is compared against published results for Mosteller's numerical benchmark, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering, and shows accurate evaluations of the Doppler temperature coefficient. An infinite medium benchmark for neutron free-gas elastic scattering with large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed-form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free-gas scattering model in MCNP6. Results show rapid convergence of the analytic energy spectrum to the MCNP6 results with increasing target mass, with absolute relative differences of less than 5% for neutrons scattering off carbon. The analytic solution has been generalized to accommodate a piecewise-constant-in-energy absorption cross section in order to produce temperature feedback. The results reinforce the constraints under which heavy gas theory may be applied, requiring a large target mass to accommodate increasing cross section structure.
The energy-dependent, piecewise-constant cross section heavy gas model was then used to produce a benchmark calculation of the Doppler temperature coefficient. Results show that the adjoint-weighted method with cross section derivatives obtains the correct Doppler temperature coefficient within statistics while reducing computer runtimes by a factor of 50.
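For contrast with the adjoint-weighted approach, the conventional way to estimate a Doppler coefficient from Monte Carlo is a finite difference of reactivities between two eigenvalue runs at different fuel temperatures, as in Mosteller-style benchmark comparisons. A minimal sketch; the two k-eigenvalues and temperatures below are hypothetical, chosen only to illustrate the arithmetic.

```python
def doppler_coefficient(k_cold, k_hot, t_cold, t_hot):
    """Finite-difference Doppler temperature coefficient in pcm/K.

    Uses reactivity rho = (k - 1)/k; the coefficient is the reactivity
    change per kelvin of fuel temperature between the two states.
    """
    rho_cold = (k_cold - 1.0) / k_cold
    rho_hot = (k_hot - 1.0) / k_hot
    return (rho_hot - rho_cold) / (t_hot - t_cold) * 1e5  # pcm/K

# Hypothetical eigenvalues at 600 K and 900 K fuel temperature
alpha = doppler_coefficient(1.21000, 1.20300, 600.0, 900.0)
```

This two-run estimate carries the statistical noise of both eigenvalues, which is part of the motivation for the single-run adjoint-weighted perturbation method described in the abstract.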
NASA Astrophysics Data System (ADS)
Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas
2013-04-01
Optical Particle Counters (OPCs) are the de-facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms, where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table with their instrument that indicates the particle diameters representing the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and obvious problems occur when refractive index corrections are attempted and multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines.
As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP) calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in inaccessible regions of the Sahara.
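Converting a bin edge expressed as a scattering cross section back into diameter space reduces to finding all crossings of the (possibly non-monotonic) σ(d) curve with the edge value. A minimal sketch of that crossing search; the oscillatory response curve below is a toy stand-in for a real Mie-Lorenz calculation, not the method's actual optics.

```python
import numpy as np

def boundary_diameters(diam, sigma, sigma_edge):
    """Return all diameters where the tabulated (possibly non-monotonic)
    scattering cross-section curve sigma(diam) crosses sigma_edge,
    using linear interpolation between tabulated points."""
    crossings = []
    for i in range(len(diam) - 1):
        s0 = sigma[i] - sigma_edge
        s1 = sigma[i + 1] - sigma_edge
        if s0 == 0.0:
            crossings.append(diam[i])
        elif s0 * s1 < 0.0:
            f = s0 / (s0 - s1)  # linear-interpolation fraction
            crossings.append(diam[i] + f * (diam[i + 1] - diam[i]))
    return crossings

# Toy non-monotonic response standing in for a Mie curve (hypothetical units)
d = np.linspace(0.1, 3.0, 600)
sig = d**2 * (1.0 + 0.3 * np.sin(6.0 * d))
edges = boundary_diameters(d, sig, sigma_edge=2.0)
```

From the crossings for the lower and upper edges of a bin one can then form a bin centre and width in diameter space, which is the conversion the abstract describes.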
García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M
2018-01-01
Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors (k_{Qclin,Qmsr}^{fclin,fmsr}) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors at in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data at in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated on simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. This work presents important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of the correction factors for total scatter factors has an important impact on monitor unit calculation.
In contrast, the use of the correction factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
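The gamma index test used to compare the corrected and uncorrected dose calculations combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1-D global-gamma sketch in the style of the Low et al. formulation; the criteria (1 mm / 3%) and the Gaussian profiles are illustrative assumptions, not the study's actual plans.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta=1.0, dd=0.03):
    """1-D global gamma index: for each reference point, the minimum over
    evaluated points of the combined DTA (mm) and dose-difference
    (fraction of the maximum reference dose) metric."""
    d_max = d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2
        dose2 = ((d_eval - dr) / (dd * d_max)) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Identical profiles -> gamma == 0 everywhere; the pass rate is the
# fraction of points with gamma <= 1
x = np.linspace(0.0, 50.0, 101)
dose = np.exp(-((x - 25.0) / 8.0) ** 2)
g = gamma_index_1d(x, dose, x, dose)
pass_rate = np.mean(g <= 1.0)
```

A point passes when gamma is at most 1, i.e. it satisfies some weighted combination of the two criteria; "several criteria" in the abstract corresponds to re-running this with different (dta, dd) pairs.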
Brain single-photon emission CT physics principles.
Accorsi, R
2008-08-01
The basic principles of scintigraphy are reviewed and extended to 3D imaging. Single-photon emission computed tomography (SPECT) is a sensitive and specific 3D technique to monitor in vivo functional processes in both clinical and preclinical studies. SPECT/CT systems are becoming increasingly common and can provide accurately registered anatomic information as well. In general, SPECT is affected by low photon-collection efficiency, but in brain imaging not all of the large field of view (FOV) of clinical gamma cameras is needed: the use of fan- and cone-beam collimation trades off the unused FOV for increased sensitivity and resolution. The design of dedicated cameras aims at increased angular coverage and resolution by minimizing the distance from the patient. The corrections needed for quantitative imaging are challenging but can take advantage of the relative spatial uniformity of attenuation and scatter. Preclinical systems can provide submillimeter resolution in small-animal brain imaging with workable sensitivity.
Direct imaging of atomic-scale ripples in few-layer graphene.
Wang, Wei L; Bhandari, Sagar; Yi, Wei; Bell, David C; Westervelt, Robert; Kaxiras, Efthimios
2012-05-09
Graphene has been touted as the prototypical two-dimensional solid of extraordinary stability and strength. However, its very existence relies on out-of-plane ripples as predicted by theory and confirmed by experiments. Evidence of the intrinsic ripples has been reported in the form of broadened diffraction spots in reciprocal space, in which all spatial information is lost. Here we show direct real-space images of the ripples in a few-layer graphene (FLG) membrane resolved at the atomic scale using monochromated aberration-corrected transmission electron microscopy (TEM). The thickness of FLG amplifies the weak local effects of the ripples, resulting in spatially varying TEM contrast that is unique up to inversion symmetry. We compare the characteristic TEM contrast with simulated images based on accurate first-principles calculations of the scattering potential. Our results characterize the ripples in real space and suggest that such features are likely common in ultrathin materials, even in the nanometer-thickness range.
A simple and accurate method for calculation of the structure factor of interacting charged spheres.
Wu, Chu; Chan, Derek Y C; Tabor, Rico F
2014-07-15
Calculation of the structure factor of a system of interacting charged spheres, based on the Ginoza solution of the Ornstein-Zernike equation, has been developed and implemented in a stand-alone spreadsheet. This facilitates direct, interactive numerical and graphical comparison of experimental structure factors with the pioneering theoretical model of Hayter-Penfold using the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor S(q) and provides direct access to the pair correlation function g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure.
Spatial reconstruction of single-cell gene expression data.
Satija, Rahul; Farrell, Jeffrey A; Gennert, David; Schier, Alexander F; Regev, Aviv
2015-05-01
Spatial localization is a key determinant of cellular fate and behavior, but methods for spatially resolved, transcriptome-wide gene expression profiling across complex tissues are lacking. RNA staining methods assay only a small number of transcripts, whereas single-cell RNA-seq, which measures global gene expression, separates cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos and generated a transcriptome-wide map of spatial patterning. We confirmed Seurat's accuracy using several experimental approaches, then used the strategy to identify a set of archetypal expression patterns and spatial markers. Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems.
Spatial reconstruction of single-cell gene expression
Satija, Rahul; Farrell, Jeffrey A.; Gennert, David; Schier, Alexander F.; Regev, Aviv
2015-01-01
Spatial localization is a key determinant of cellular fate and behavior, but spatial RNA assays traditionally rely on staining for a limited number of RNA species. In contrast, single-cell RNA-seq allows for deep profiling of cellular gene expression, but established methods separate cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos, inferring a transcriptome-wide map of spatial patterning. We confirmed Seurat’s accuracy using several experimental approaches, and used it to identify a set of archetypal expression patterns and spatial markers. Additionally, Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems. PMID:25867923
Simulation of Acoustic Scattering from a Trailing Edge
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Brentner, Kenneth S.; Lockard, David P.; Lilley, Geoffrey M.
1999-01-01
Three model problems were examined to assess the difficulties involved in using a hybrid scheme coupling flow computation with the Ffowcs Williams and Hawkings equation to predict noise generated by vortices passing over a sharp edge. The results indicate that the Ffowcs Williams and Hawkings equation correctly propagates the acoustic signals when provided with accurate flow information on the integration surface. The most difficult of the model problems investigated involved inviscid flow over a two-dimensional thin NACA airfoil with a blunt-body vortex generator positioned at 98 percent chord. Vortices rolled up downstream of the blunt body. The shed vortices possessed similarities to large coherent eddies in boundary layers. They interacted and occasionally paired as they convected past the sharp trailing edge of the airfoil. The calculations showed acoustic waves emanating from the airfoil trailing edge. Acoustic directivity and Mach number scaling are shown.
Magnetic Field Effects on the Fluctuation Corrections to the Sound Attenuation in Liquid ^3He
NASA Astrophysics Data System (ADS)
Zhao, Erhai; Sauls, James A.
2002-03-01
We investigated the effect of a magnetic field on the excess sound attenuation due to order parameter fluctuations in bulk liquid ^3He and liquid ^3He in aerogel for temperatures just above the corresponding superfluid transition temperatures. The fluctuation corrections to the acoustic attenuation are sensitive to magnetic field pairbreaking, aerogel scattering, and the spin correlations of fluctuating pairs. Calculations of the corrections to the zero sound velocity, δc_0, and attenuation, δα_0, are carried out in the ladder approximation for the singular part of the quasiparticle-quasiparticle scattering amplitude (V. Samalam and J. W. Serene, Phys. Rev. Lett. 41, 497 (1978)) as a function of frequency, temperature, impurity scattering and magnetic field strength. The magnetic field suppresses the fluctuation contributions to the attenuation of zero sound. With increasing magnetic field the temperature dependence of δα_0(t) crosses over from δα_0(t) ~ √t to δα_0(t) ~ t, where t = T/T_c - 1 is the reduced temperature.
A fourth order accurate finite difference scheme for the computation of elastic waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.
1986-01-01
A finite difference scheme for elastic waves is introduced. The scheme is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate in the spatial derivatives and second order accurate in time. The scheme is tested on a series of examples, including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface, and is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here, it is found that the fourth order scheme requires two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
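Fourth-order spatial differencing of the kind used in such velocity-stress schemes can be illustrated with the standard five-point central stencil for a first derivative: halving the grid spacing should reduce the error roughly 16-fold. A sketch of that stencil and its convergence check (this is a generic illustration, not the authors' full elastic scheme).

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order central difference for the first derivative at
    interior points: (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)."""
    return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

# Error should fall ~16x when h is halved (fourth-order accuracy)
errs = []
for n in (64, 128):
    x = np.linspace(0.0, 2.0 * np.pi, n + 1)
    h = x[1] - x[0]
    approx = d1_fourth_order(np.sin(x), h)
    errs.append(np.max(np.abs(approx - np.cos(x[2:-2]))))
ratio = errs[0] / errs[1]  # ~16 for a fourth-order scheme
```

The two-thirds to one-half resolution saving quoted in the abstract follows from this faster error decay relative to a second-order stencil.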
Charpentier, Sophie; Galletti, Luca; Kunakova, Gunta; Arpaia, Riccardo; Song, Yuxin; Baghdadi, Reza; Wang, Shu Min; Kalaboukhov, Alexei; Olsson, Eva; Tafuri, Francesco; Golubev, Dmitry; Linder, Jacob; Bauch, Thilo; Lombardi, Floriana
2018-01-30
The original version of this Article contained an error in Fig. 6b. In the top scattering process, while the positioning of both arrows was correct, the colours were switched: the first arrow was red and the second arrow was blue, rather than the correct order of blue then red.
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve for the multiple scattering angular distribution (AD) that can accurately calculate the AD while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and the development of deterministic reconstruction algorithms based on AD measurements.
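The core idea of a Markov chain treatment of multiple scattering is to propagate an angular distribution through repeated applications of a single-scattering transition matrix and then mix the scattering orders. The toy sketch below shows this on a 1-D angle grid; the forward-peaked kernel, the circular-convolution geometry, and the order weights are all simplifying assumptions for illustration, not the paper's slab model.

```python
import numpy as np

def multiple_scattering_ad(phase, orders, order_weights):
    """Toy Markov-chain estimate of the multiple-scattering angular
    distribution (AD) on a 1-D angle grid.

    phase: single-scattering AD used as a deflection kernel; each
    scattering order is one application of the transition matrix, and the
    total AD is the weight-averaged mixture over the listed orders.
    """
    n = len(phase)
    # Transition matrix: a scattering event adds an angular deflection
    # drawn from the single-scattering AD (circular convolution geometry).
    P = np.empty((n, n))
    for i in range(n):
        P[i] = np.roll(phase, i)
    P /= P.sum(axis=1, keepdims=True)  # make each row a probability vector
    state = np.zeros(n)
    state[0] = 1.0  # pencil beam: all weight at zero deflection
    total = np.zeros(n)
    for k, w in zip(orders, order_weights):
        total += w * (state @ np.linalg.matrix_power(P, k))
    return total / total.sum()

# Hypothetical forward-peaked single-scattering kernel on 180 one-degree bins
theta = np.arange(180)
phase = np.exp(-theta / 5.0)
ad = multiple_scattering_ad(phase, orders=[1, 2, 3], order_weights=[0.5, 0.3, 0.2])
```

Each matrix application is a deterministic convolution, which is where the cost saving over sampling individual photon paths by Monte Carlo comes from.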
A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y.; Hartanto, D.
2012-07-01
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled, the fuel is depleted by using MCNPX, and the FTC value is evaluated for several burnup points, including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross section libraries, including ENDF/B-VI, ENDF/B-VII, JEFF, and JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a huge number of neutron histories is considered in this work, and the standard deviation of the k-inf values is only 0.5-1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of the scattering resonances, and the PCR is clearly improved. (authors)
de Jonge, Niels; Verch, Andreas; Demers, Hendrix
2018-02-01
The spatial resolution of aberration-corrected annular dark field scanning transmission electron microscopy was studied as a function of the vertical position z within a sample. The samples consisted of gold nanoparticles (AuNPs) positioned in different horizontal layers within aluminum matrices of 0.6 and 1.0 µm thickness. The highest resolution was achieved in the top layer, whereas the resolution was reduced by beam broadening for AuNPs deeper in the sample. To examine the influence of the beam broadening, the intensity profiles of line scans over nanoparticles at a given vertical location were analyzed. The experimental data were compared with Monte Carlo simulations that accurately matched the data. The spatial resolution was also calculated using three different theoretical models of the beam blurring as a function of the vertical position within the sample. One model considered beam blurring as a single scattering event but was found to be inaccurate for larger depths of the AuNPs in the sample. Two models that include estimates for multiple scattering were adapted and evaluated, and these described the data with sufficient accuracy to be able to predict the resolution. The beam broadening depended on z^1.5 in all three models.
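A z^1.5 broadening law of the form r(z)^2 = r0^2 + (a·z^1.5)^2 is linear in r0^2 and a^2, so it can be fitted by ordinary least squares. A sketch with synthetic depth/resolution pairs; the numbers are illustrative, not the measured resolutions from the study, and the quadrature-sum form of the model is an assumption.

```python
import numpy as np

def fit_broadening(z, r):
    """Least-squares fit of the z**1.5 beam-broadening law
    r(z)**2 = r0**2 + (a * z**1.5)**2, which is linear in r0**2 and a**2
    because (z**1.5)**2 == z**3."""
    A = np.column_stack([np.ones_like(z), z**3])
    coef, *_ = np.linalg.lstsq(A, r**2, rcond=None)
    r0_sq, a_sq = coef
    return np.sqrt(r0_sq), np.sqrt(a_sq)

# Hypothetical depth (µm) / resolution pairs generated from the law itself
z = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
r = np.sqrt(0.5**2 + (3.0 * z**1.5)**2)
r0, a = fit_broadening(z, r)
```

The fitted r0 is the top-layer (unbroadened) resolution and a sets how fast the resolution degrades with depth, matching the qualitative trend described in the abstract.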
Zeno: Critical Fluid Light Scattering Experiment
NASA Technical Reports Server (NTRS)
Gammon, Robert W.; Shaumeyer, J. N.; Briggs, Matthew E.; Boukari, Hacene; Gent, David A.; Wilkinson, R. Allen
1996-01-01
The Zeno (Critical Fluid Light Scattering) experiment is the culmination of a long history of critical fluid light scattering in liquid-vapor systems. The major limitation to making accurate measurements closer to the critical point was the density stratification which occurs in these extremely compressible fluids. Zeno was designed to determine the critical density fluctuation decay rates at a pair of supplementary angles in the temperature range 100 mK to 100 µK from T_c in a sample of xenon accurately loaded to the critical density. This paper gives some highlights from operating the instrument on two flights: March 1994 on STS-62 and February 1996 on STS-75. More detail on the experiment Science Requirements, the personnel, apparatus, and results is displayed on the Web homepage at http://www.zeno.umd.edu.
Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery
Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.
2009-01-01
Proton radiotherapy centers that currently use passively scattered proton beams perform field-specific calibrations for a non-negligible fraction of treatment fields, which is time- and resource-consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that require only one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and prediction of dose output. PMID:19610306
XUV and x-ray elastic scattering of attosecond electromagnetic pulses on atoms
NASA Astrophysics Data System (ADS)
Rosmej, F. B.; Astapenko, V. A.; Lisitsa, V. S.
2017-12-01
Elastic scattering of ultra-short electromagnetic pulses on atoms in the XUV and soft x-ray ranges is considered. The inclusion of the retardation term, non-dipole interaction, and an efficient scattering tensor approximation allowed the scattering probability to be studied as a function of pulse duration for different carrier frequencies. Numerical calculations carried out for Mg, Al and Fe atoms demonstrate that the scattering probability is a highly nonlinear function of the pulse duration and has extrema for pulse carrier frequencies in the vicinity of the resonance-like features of the polarization charge spectrum. Closed expressions for the non-dipole correction and the angular dependence of the scattered radiation are obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, T; Robertson, D; Therriault-Proulx, F
2015-06-15
Purpose: Liquid scintillators have been shown to provide fast and high-resolution measurements of radiation beams. However, their linear-energy-transfer-dependent response (quenching) limits their use in proton beams. The purpose of this study was to develop a simple and fast method to verify the range, spread-out Bragg peak (SOBP) width, and output of a passive-scattering proton beam with a liquid scintillator detector, without the need for quenching correction. Methods: The light signal from a 20×20×20 cm3 liquid scintillator tank was collected with a CCD camera. Reproducible landmarks on the SOBP depth-light curve were identified which possessed a linear relationship with the beam range and SOBP width. The depth-light profiles for three beam energies (140, 160 and 180 MeV) with six SOBP widths at each energy were measured with the detector. Beam range and SOBP width calibration factors were obtained by comparing the depth-light curve landmarks with the nominal range and SOBP width for each beam setting. The daily output stability of the liquid scintillator detector was also studied by making eight repeated output measurements in a cobalt-60 beam over the course of two weeks. Results: The mean difference between the measured and nominal beam ranges was 0.6 mm (σ=0.2 mm), with a maximum difference of 0.9 mm. The mean difference between the measured and nominal SOBP widths was 0.1 mm (σ=1.8 mm), with a maximum difference of 4.0 mm. Finally, an output variation of 0.14% was observed for 8 measurements performed over 2 weeks. Conclusion: A method has been developed to determine the range and SOBP width of a passive-scattering proton beam in a liquid scintillator without the need for quenching correction. In addition to providing rapid and accurate beam range and SOBP measurements, the detector is capable of measuring the output consistency with a high degree of precision.
This project was supported in part by award number CA182450 from the National Cancer Institute.
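The landmark-based calibration described above amounts to a linear least-squares map from a depth-light-curve landmark position to the nominal beam range (and analogously for SOBP width). A minimal sketch; the landmark depths and nominal ranges below are hypothetical pairs, not the study's measurements.

```python
import numpy as np

def calibrate_landmark(landmark_depth, nominal_range):
    """Linear calibration (slope, intercept) mapping a depth-light-curve
    landmark position to the nominal beam range, fitted by least squares."""
    slope, intercept = np.polyfit(landmark_depth, nominal_range, 1)
    return slope, intercept

# Hypothetical landmark depths (cm) vs nominal ranges (cm) for three energies
depths = np.array([13.1, 16.0, 19.2])
ranges = np.array([13.6, 16.5, 19.7])
slope, intercept = calibrate_landmark(depths, ranges)
predicted = slope * depths + intercept  # ranges inferred from future light curves
```

Once the slope and intercept are fixed, a single depth-light profile yields the delivered range without any quenching correction, which is the workflow the abstract describes.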
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
2016-03-01
NASA Astrophysics Data System (ADS)
Gao, M.; Zhai, P.; Franz, B. A.; Hu, Y.; Knobelspiesse, K. D.; Xu, F.; Ibrahim, A.
2017-12-01
Ocean color remote sensing in coastal waters remains a challenging task due to the complex optical properties of aerosols and ocean waters. It is highly desirable to develop an advanced ocean color and aerosol retrieval algorithm for coastal waters, to advance our capabilities in monitoring water quality, improve our understanding of coastal carbon cycle dynamics, and allow for the development of more accurate circulation models. However, distinguishing the dissolved and suspended material from absorbing aerosols over coastal waters is challenging as they share similar absorption spectra within the deep blue to UV range. In this paper we report a research algorithm on aerosol and ocean color retrieval with emphasis on coastal waters. The main features of our algorithm include: 1) combining co-located measurements from a hyperspectral ocean color instrument (OCI) and a multi-angle polarimeter (MAP); 2) using the radiative transfer model for the coupled atmosphere and ocean system (CAOS), which is based on the highly accurate and efficient successive order of scattering method; and 3) incorporating a generalized bio-optical model with direct accounting of the total absorption of phytoplankton, CDOM and non-algal particles (NAP), and the total scattering of phytoplankton and NAP for improved description of ocean light scattering. A non-linear least-squares fitting algorithm is used to optimize the bio-optical model parameters and the aerosol optical and microphysical properties including refractive indices and size distributions for both fine and coarse modes. The retrieved aerosol information is used to calculate the atmospheric path radiance, which is then subtracted from the OCI observations to obtain the water leaving radiance contribution. Our work aims to maximize the use of available information from the co-located dataset and conduct the atmospheric correction with minimal assumptions.
The algorithm will contribute to the success of current MAP instruments, such as the Research Scanning Polarimeter (RSP), and future ocean color missions, such as the Plankton, Aerosol, Cloud, and ocean Ecosystem (PACE) mission, by enabling retrieval of ocean biogeochemical properties under optically-complex atmospheric and oceanic conditions.
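The non-linear least-squares step above can be illustrated with a toy retrieval: a forward model is fitted to synthetic observations by minimizing residuals. The Ångström-law forward model below is a made-up stand-in for the CAOS radiative transfer code, and the two parameters are illustrative, not the algorithm's actual state vector:

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, wavelengths_um):
    # Toy forward model: aerosol optical depth following an Angstrom law.
    tau550, angstrom = params
    return tau550 * (wavelengths_um / 0.55) ** (-angstrom)

wavelengths = np.array([0.41, 0.55, 0.67, 0.87])   # example bands, micrometers
obs = forward([0.2, 1.4], wavelengths)             # synthetic "measurement"

# Non-linear least-squares fit of the model parameters to the observations
fit = least_squares(lambda p: forward(p, wavelengths) - obs, x0=[0.1, 1.0])
```

In the real algorithm the residual vector would combine OCI radiances and MAP polarized radiances, and the parameter vector would include the bio-optical and aerosol microphysical quantities listed above.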
Social contagion of correct and incorrect information in memory.
Rush, Ryan A; Clark, Steven E
2014-01-01
The present study examines how discussion between individuals regarding a shared memory affects their subsequent individual memory reports. In three experiments pairs of participants recalled items from photographs of common household scenes, discussed their recall with each other, and then recalled the items again individually. Results showed that after the discussion, individuals recalled more correct items and more incorrect items, with very small non-significant increases, or no change, in recall accuracy. The information people were exposed to during the discussion was generally accurate, although not as accurate as individuals' initial recall. Individuals incorporated correct exposure items into their subsequent recall at a higher rate than incorrect exposure items. Participants who were initially more accurate became less accurate, and initially less-accurate participants became more accurate as a result of their discussion. Comparisons to no-discussion control groups suggest that the effects were not simply the product of repeated recall opportunities or self-cueing, but rather reflect the transmission of information between individuals.
Prediction of apparent extinction for optical transmission through rain
NASA Astrophysics Data System (ADS)
Vasseur, H.; Gibbins, C. J.
1996-12-01
At optical wavelengths, geometrical optics holds that the extinction efficiency of raindrops is equal to two. This approximation yields a wavelength-independent extinction coefficient that, however, cannot accurately predict the rain extinction measured on optical transmission links. In practice, in addition to the attenuated direct light, a significant part of the power scattered by the rain particles reaches the receiver. This leads to a reduced apparent extinction that depends on both rain characteristics and link parameters. A simple method is proposed to evaluate this apparent extinction. It accounts for the additional scattered power that enters the receiver when one considers the forward-scattering pattern of the raindrops as well as the multiple-scattering effects using, respectively, the Fraunhofer diffraction and Twersky theory. It results in a direct analytical formula that enables a quick and accurate estimation of the rain apparent extinction and highlights the influence of the link parameters. Predictions of apparent extinction through rain are found to be in excellent agreement with measurements in the visible and IR regions.
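The geometrical-optics starting point (extinction efficiency Q_ext = 2) can be made concrete with a standard Marshall-Palmer drop size distribution. The parameters below are textbook values, not taken from this paper, and the result is the total extinction, before the apparent-extinction reduction the paper models:

```python
import numpy as np

N0 = 8000.0  # Marshall-Palmer intercept, m^-3 mm^-1 (textbook value)

def extinction_np_per_km(rain_rate_mm_h):
    """Geometrical-optics rain extinction coefficient (Q_ext = 2)."""
    lam = 4.1 * rain_rate_mm_h ** -0.21          # MP slope parameter, mm^-1
    D = np.linspace(0.01, 8.0, 4000)             # drop diameters, mm
    dD = D[1] - D[0]
    N = N0 * np.exp(-lam * D)                    # number density, m^-3 mm^-1
    area_m2 = np.pi * (D / 2.0) ** 2 * 1e-6      # geometric cross section, m^2
    k_per_m = np.sum(2.0 * area_m2 * N) * dD     # Q_ext = 2 for each drop
    return k_per_m * 1000.0                      # nepers per km
```

For a 25 mm/h rain rate this gives roughly 3 Np/km (about 12 dB/km), of the order observed on optical links in heavy rain; the apparent extinction discussed above is smaller because forward-scattered light re-enters the receiver.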
a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of a slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
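A common concrete instance of such a semi-empirical scheme is the C-correction, in which the empirical constant is derived by regressing observed reflectance against the local illumination cosine over the scene. The sketch below is the generic C-correction, not necessarily the paper's exact model:

```python
import numpy as np

def c_correction(refl, cos_i, cos_sz):
    """Semi-empirical C-correction of topographic effects.
    refl:   observed reflectance (or calibrated DN) per pixel
    cos_i:  cosine of the local solar incidence angle per pixel
    cos_sz: cosine of the solar zenith angle (flat-terrain reference)
    """
    # Scene-wide regression refl = a + b * cos_i gives the empirical c = a/b
    b, a = np.polyfit(np.ravel(cos_i), np.ravel(refl), 1)
    c = a / b  # assumes b != 0, i.e. a real illumination dependence
    return refl * (cos_sz + c) / (cos_i + c)
```

For a scene whose reflectance is exactly linear in cos_i, the correction removes the slope dependence entirely, leaving the flat-terrain value a + b·cos_sz everywhere.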
Semenov, Alexander; Dubernet, Marie-Lise; Babikov, Dmitri
2014-09-21
The mixed quantum/classical theory (MQCT) for inelastic molecule-atom scattering developed recently [A. Semenov and D. Babikov, J. Chem. Phys. 139, 174108 (2013)] is extended to treat a general case of an asymmetric-top-rotor molecule in the body-fixed reference frame. This complements a similar theory formulated in the space-fixed reference frame [M. Ivanov, M.-L. Dubernet, and D. Babikov, J. Chem. Phys. 140, 134301 (2014)]. Here, the goal was to develop an approximate computationally affordable treatment of the rotationally inelastic scattering and apply it to H2O + He. We found that MQCT is somewhat less accurate at lower scattering energies. For example, below E = 1000 cm⁻¹ the typical errors in the values of inelastic scattering cross sections are on the order of 10%. However, at higher scattering energies the MQCT method appears to be rather accurate. Thus, at scattering energies above 2000 cm⁻¹ the errors are consistently in the range of 1%-2%, which is basically our convergence criterion with respect to the number of trajectories. At these conditions our MQCT method remains computationally affordable. We found that the computational cost of the fully-coupled MQCT calculations scales as n², where n is the number of channels. This is more favorable than the full-quantum inelastic scattering calculations that scale as n³. Our conclusion is that for complex systems (heavy collision partners with many internal states) and at higher scattering energies MQCT may offer significant computational advantages.
Follett, R K; Delettrez, J A; Edgell, D H; Henchen, R J; Katz, J; Myatt, J F; Froula, D H
2016-11-01
Collective Thomson scattering is a technique for measuring the plasma conditions in laser-plasma experiments. Simultaneous measurements of ion-acoustic and electron plasma-wave spectra were obtained using a 263.25-nm Thomson-scattering probe beam. A fully reflective collection system was used to record light scattered from electron plasma waves at electron densities greater than 10²¹ cm⁻³, which produced scattering peaks near 200 nm. An accurate analysis of the experimental Thomson-scattering spectra required accounting for plasma gradients, instrument sensitivity, optical effects, and background radiation. Practical techniques for including these effects when fitting Thomson-scattering spectra are presented and applied to the measured spectra to show the improvements in plasma characterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.
2015-06-10
To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
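The VDA step itself is a straight-line fit of onset time against inverse speed, t_onset = t_inj + (L/c)·(c/v). A sketch with synthetic numbers (the injection time and path length below are made up for illustration):

```python
import numpy as np

AU_KM = 1.495979e8       # 1 AU in km
C_KM_S = 2.998e5         # speed of light, km/s

# Synthetic event: injection at t = 100 s, path length 1.2 AU
inv_beta = np.array([2.0, 3.0, 5.0])                  # c/v for three channels
t_onset = 100.0 + (1.2 * AU_KM / C_KM_S) * inv_beta   # observed 1 AU onsets, s

# VDA: linear fit t_onset = t_inj + (L/c) * (c/v)
slope, t_inj = np.polyfit(inv_beta, t_onset, 1)
L_AU = slope * C_KM_S / AU_KM   # apparent path length in AU
```

The paper's point is that scattering delays the observable onset differently in each channel, biasing both fitted quantities even when L_AU comes out close to the Parker spiral length.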
Laser light-scattering spectroscopy: a new application in the study of ciliary activity.
Lee, W I; Verdugo, P
1976-01-01
A uniquely precise and simple method to study ciliary activity by laser light-scattering spectroscopy has been developed and validated. A concurrent study of the effect of Ca2+ on ciliary activity in vitro by laser scattering spectroscopy and high speed cinematography has demonstrated that this new method is simpler and as accurate and reproducible as the high speed film technique. PMID:963208
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; this part of the simulation algorithm is therefore described in detail.
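The core idea of Pauli blocking in MC transport is a rejection step: a proposed scattering into a final state with occupation f is accepted with probability (1 − f), which keeps the distribution function below unity. The sketch below shows only this generic rejection idea, not the paper's reduced-cost algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def accepted_fraction(f_final, n_events=200_000):
    """Schematic Pauli-blocking rejection: each proposed transition into
    a final state with occupation f_final is accepted with probability
    (1 - f_final), so heavily occupied states block further scattering."""
    accept = rng.random(n_events) < (1.0 - f_final)
    return accept.mean()
```

In a full simulator f_final would be estimated on a k-space grid from the current ensemble and updated as particles scatter; the naive version of that bookkeeping is what makes the standard approach expensive for e-e scattering.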
An Accurate Analytic Approximation for Light Scattering by Non-absorbing Spherical Aerosol Particles
NASA Astrophysics Data System (ADS)
Lewis, E. R.
2017-12-01
The scattering of light by particles in the atmosphere is a ubiquitous and important phenomenon, with applications to numerous fields of science and technology. The problem of scattering of electromagnetic radiation by a uniform spherical particle can be solved by the method of Mie and Debye as a series of terms depending on the size parameter, x=2πr/λ, and the complex index of refraction, m. However, this solution does not provide insight into the dependence of the scattering on the radius of the particle, the wavelength, or the index of refraction, or how the scattering varies with relative humidity. Van de Hulst demonstrated that the scattering efficiency (the scattering cross section divided by the geometric cross section) of a non-absorbing sphere, over a wide range of particle sizes of atmospheric importance, depends not on x and m separately, but on the quantity 2x(m-1); this is the basis for the anomalous diffraction approximation. Here an analytic approximation for the scattering efficiency of a non-absorbing spherical particle is presented in terms of this new quantity that is accurate over a wide range of particle sizes of atmospheric importance and which readily displays the dependences of the scattering efficiency on particle radius, index of refraction, and wavelength. For an aerosol for which the particle size distribution is parameterized as a gamma function, this approximation also yields analytical results for the scattering coefficient and for the Ångström exponent, with the dependences of scattering properties on wavelength and index of refraction clearly displayed. 
This approximation provides insight into the dependence of light scattering properties on factors such as relative humidity, readily enables conversion of scattering from one index of refraction to another, and demonstrates the conditions under which the aerosol index (the product of the aerosol optical depth and the Ångström exponent) is a useful proxy for the number of cloud condensation nuclei.
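Van de Hulst's anomalous diffraction result, which the new approximation builds on, has a closed form in the phase-shift parameter ρ = 2x(m−1): Q = 2 − (4/ρ)·sin ρ + (4/ρ²)·(1 − cos ρ). A direct implementation (the paper's own refined approximation is not reproduced here):

```python
import numpy as np

def q_ada(radius_um, wavelength_um, m):
    """van de Hulst anomalous diffraction approximation for the
    scattering efficiency of a non-absorbing sphere."""
    x = 2.0 * np.pi * radius_um / wavelength_um   # size parameter
    rho = 2.0 * x * (m - 1.0)                     # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))
```

The formula shows the key dependencies directly: the first interference maximum occurs near ρ ≈ 4.09 (Q ≈ 3.17), and Q oscillates toward the geometric-optics limit of 2 as ρ grows, which is why only the combination 2x(m−1), not x and m separately, matters in this regime.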
Tissue Equivalent Phantom Design for Characterization of a Coherent Scatter X-ray Imaging System
NASA Astrophysics Data System (ADS)
Albanese, Kathryn Elizabeth
Scatter in medical imaging is typically cast off as image-related noise that detracts from meaningful diagnosis. It is therefore typically rejected or removed from medical images. However, it has been found that every material, including cancerous tissue, has a unique X-ray coherent scatter signature that can be used to identify the material or tissue. Such scatter-based tissue-identification provides the advantage of locating and identifying particular materials over conventional anatomical imaging through X-ray radiography. A coded aperture X-ray coherent scatter spectral imaging system has been developed in our group to classify different tissue types based on their unique scatter signatures. Previous experiments using our prototype have demonstrated that the depth-resolved coherent scatter spectral imaging system (CACSSI) can discriminate healthy and cancerous tissue present in the path of a non-destructive x-ray beam. A key to the successful optimization of CACSSI as a clinical imaging method is to obtain anatomically accurate phantoms of the human body. This thesis describes the development and fabrication of 3D printed anatomical scatter phantoms of the breast and lung. The purpose of this work is to accurately model different breast geometries using a tissue equivalent phantom, and to classify these tissues in a coherent x-ray scatter imaging system. Tissue-equivalent anatomical phantoms were designed to assess the capability of the CACSSI system to classify different types of breast tissue (adipose, fibroglandular, malignant). These phantoms were 3D printed based on DICOM data obtained from CT scans of prone breasts. The phantoms were tested through comparison of measured scatter signatures with those of adipose and fibroglandular tissue from literature. Tumors in the phantom were modeled using a variety of biological tissue including actual surgically excised benign and malignant tissue specimens. 
Lung based phantoms have also been printed for future testing. Our imaging system has been able to define the location and composition of the various materials in the phantom. These phantoms were used to characterize the CACSSI system in terms of beam width and imaging technique. The result of this work showed accurate modeling and characterization of the phantoms through comparison of the tissue-equivalent form factors to those from literature. The physical construction of the phantoms, based on actual patient anatomy, was validated using mammography and computed tomography to visually compare the clinical images to those of actual patient anatomy.
Improving Pixel Level Cloud Optical Property Retrieval using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Marshak, Alexander; Cahalan, Robert F.
1999-01-01
The accurate pixel-by-pixel retrieval of cloud optical properties from space is influenced by radiative smoothing due to high order photon scattering and radiative roughening due to low order scattering events. Both are caused by cloud heterogeneity and the three-dimensional nature of radiative transfer and can be studied with the aid of computer simulations. We use Monte Carlo simulations on variable 1-D and 2-D model cloud fields to seek dependencies of smoothing and roughening phenomena on single scattering albedo, solar zenith angle, and cloud characteristics. The results are discussed in the context of high resolution satellite (such as Landsat) retrieval applications. The current work extends the investigation on the inverse NIPA (Non-local Independent Pixel Approximation) as a tool for removing smoothing and improving retrievals of cloud optical depth. This is accomplished by: (1) Delineating the limits of NIPA applicability; (2) Exploring NIPA parameter dependences on cloud macrostructural features, such as mean cloud optical depth and geometrical thickness, degree of extinction and cloud top height variability. We also compare parameter values from empirical and theoretical considerations; (3) Examining the differences between applying NIPA on radiation quantities vs direct application on optical properties; (4) Studying the radiation budget importance of the NIPA corrections as a function of scale. Finally, we discuss fundamental adjustments that need to be considered for successful radiance inversion at non-conservative wavelengths and oblique Sun angles. These adjustments are necessary to remove roughening signatures which become more prominent with increasing absorption and solar zenith angle.
Influences of 3D PET scanner components on increased scatter evaluated by a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hirano, Yoshiyuki; Koshino, Kazuhiro; Iida, Hidehiro
2017-05-01
Monte Carlo simulation is widely applied to evaluate the performance of three-dimensional positron emission tomography (3D-PET). For accurate scatter simulations, all components that generate scatter need to be taken into account. The aim of this work was to identify the components that influence scatter. The simulated geometries of a PET scanner were: a precisely reproduced configuration including all of the components; a configuration with the bed, the tunnel and shields; a configuration with the bed and shields; and the simplest geometry with only the bed. We measured and simulated the scatter fraction using two different set-ups: (1) as prescribed by NEMA-NU 2007 and (2) a similar set-up but with a shorter line source, so that all activity was contained only inside the field-of-view (FOV), in order to reduce influences of components outside the FOV. The scatter fractions for the two experimental set-ups were, respectively, 45% and 38%. Regarding the geometrical configurations, the former two configurations gave simulation results in good agreement with the experimental results, but simulation results of the simplest geometry were significantly different at the edge of the FOV. From the simulation of the precise configuration, the object (scatter phantom) was the source of more than 90% of the scatter. This was also confirmed by visualization of photon trajectories. Then, the bed and the tunnel were mainly the sources of the rest of the scatter. From the simulation results, we concluded that the precise construction was not needed; the shields, the tunnel, the bed and the object were sufficient for accurate scatter simulations.
Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu
2013-06-01
It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved pass ratios in the dose-difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement compared with the standard procedure that uses the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and it has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 both for the red channel alone and for the red/blue correction.
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, C; Jin, M; Ouyang, L
2015-06-15
Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, (1) inverse filtering, (2) Wiener, and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter ("direct method") leads to large RMSE values, which increase with the increased width of PSF and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (~20 RMSE) can achieve 4-fold improvement over the direct method (~80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT.
Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
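For reference, the Richardson-Lucy update used by one of the compared methods can be written in a few lines. The 1-D numpy sketch below is schematic: the study worked on 2-D blocked projection regions, and its exact implementation details are not given here.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=100):
    """Minimal 1-D Richardson-Lucy deconvolution: multiplicative updates
    estimate <- estimate * (psf_mirror * (observed / (psf * estimate)))."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy check: blur a smooth "scatter" profile with a known PSF, then deconvolve
x = np.linspace(0.0, 1.0, 200)
true = np.exp(-((x - 0.5) / 0.05) ** 2)
t = np.arange(-15, 16)
psf = np.exp(-0.5 * (t / 5.0) ** 2)
observed = np.convolve(true, psf / psf.sum(), mode="same")
estimate = richardson_lucy_1d(observed, psf, n_iter=100)
```

In the noiseless toy case the deconvolved estimate is closer to the true profile than the direct (blurred) signal, mirroring the RMSE improvement reported above.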
NASA Astrophysics Data System (ADS)
Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Kriek, Mariska; Siana, Brian; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan; Price, Sedona H.; de Groot, Laura; Azadi, Mojegan
2015-12-01
We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the "main sequence") among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (~10^9.5-10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)-log(M*) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))-log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))-log(M*). Based on comparisons to a simulated SFR-M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)- and SFR(UV)-M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
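"Accounting for the measurement uncertainties" in a scatter estimate is usually done by subtracting the mean measurement variance in quadrature from the observed residual variance; whether MOSDEF used exactly this estimator is an assumption here, and the numbers below are synthetic:

```python
import numpy as np

def intrinsic_scatter_dex(residuals, meas_err):
    """sigma_int^2 = sigma_obs^2 - <sigma_meas^2>, clipped at zero.
    residuals: observed-minus-fit offsets in dex; meas_err: per-galaxy
    measurement uncertainties in dex."""
    var_int = np.var(residuals) - np.mean(np.asarray(meas_err) ** 2)
    return np.sqrt(max(var_int, 0.0))

# Synthetic check: 0.31 dex intrinsic plus 0.10 dex measurement scatter
rng = np.random.default_rng(1)
res = rng.normal(0.0, np.hypot(0.31, 0.10), 20000)
sigma_int = intrinsic_scatter_dex(res, np.full(20000, 0.10))
```

The estimator recovers the injected 0.31 dex intrinsic scatter from the inflated observed scatter, the same bookkeeping implied by the quoted 0.31 dex value.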
Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering
Moteabbed, Maryam; Niroula, Megh; Raue, Brian A.; ...
2013-08-30
The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q² and scattering angles. We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q² and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering.
Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for ⟨Q²⟩ = 0.206 GeV² and 0.830 ≤ ε ≤ 0.943. We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q² and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.
Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering
NASA Astrophysics Data System (ADS)
Moteabbed, M.; Niroula, M.; Raue, B. A.; Weinstein, L. B.; Adikaram, D.; Arrington, J.; Brooks, W. K.; Lachniet, J.; Rimal, Dipak; Ungaro, M.; Afanasev, A.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Kim, A.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lewis, S.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Martinez, D.; Mayer, M.; McKinnon, B.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moriya, K.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S.; Strauch, S.; Tang, W.; Taylor, C. 
E.; Tian, Ye; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.
2013-08-01
Background: The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. Purpose: The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q2 and scattering angles. Methods: We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large-acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. Results: The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q2 and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering.
Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R=1.027±0.005±0.05 for
Theory of thermal conductivity in the disordered electron liquid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwiete, G., E-mail: schwiete@uni-mainz.de; Finkel’stein, A. M.
2016-03-15
We study thermal conductivity in the disordered two-dimensional electron liquid in the presence of long-range Coulomb interactions. We describe a microscopic analysis of the problem using the partition function defined on the Keldysh contour as a starting point. We extend the renormalization group (RG) analysis developed for thermal transport in the disordered Fermi liquid and include scattering processes induced by the long-range Coulomb interaction in the sub-temperature energy range. For the thermal conductivity, unlike for the electrical conductivity, these scattering processes yield a logarithmic correction that may compete with the RG corrections. The interest in this correction arises from the fact that it violates the Wiedemann–Franz law. We checked that the sub-temperature correction to the thermal conductivity is not modified either by the inclusion of Fermi liquid interaction amplitudes or as a result of the RG flow. We therefore expect that the answer obtained for this correction is final. We use the theory to describe thermal transport on the metallic side of the metal–insulator transition in Si MOSFETs.
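The Wiedemann–Franz law referenced above relates thermal and electrical conductivity through the Lorenz number; a quick numerical check of that constant, using the exact SI values for k_B and e:

```python
import math

# Wiedemann-Franz law: kappa/(sigma*T) = L0, with the Lorenz number
# L0 = (pi^2/3) * (k_B/e)^2. A quick numerical check of L0.

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact, SI 2019)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact, SI 2019)

L0 = (math.pi ** 2 / 3.0) * (K_B / E_CHARGE) ** 2
print(f"L0 = {L0:.4e} W*Ohm/K^2")
```

A violation of the law, as discussed in the abstract, means the measured ratio kappa/(sigma*T) deviates from this L0.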
ARGOS: the laser guide star system for the LBT
NASA Astrophysics Data System (ADS)
Rabien, S.; Ageorges, N.; Barl, L.; Beckmann, U.; Blümchen, T.; Bonaglia, M.; Borelli, J. L.; Brynnel, J.; Busoni, L.; Carbonaro, L.; Davies, R.; Deysenroth, M.; Durney, O.; Elberich, M.; Esposito, S.; Gasho, V.; Gässler, W.; Gemperlein, H.; Genzel, R.; Green, R.; Haug, M.; Hart, M. L.; Hubbard, P.; Kanneganti, S.; Masciadri, E.; Noenickx, J.; Orban de Xivry, G.; Peter, D.; Quirrenbach, A.; Rademacher, M.; Rix, H. W.; Salinari, P.; Schwab, C.; Storm, J.; Strüder, L.; Thiel, M.; Weigelt, G.; Ziegleder, J.
2010-07-01
ARGOS is the Laser Guide Star adaptive optics system for the Large Binocular Telescope. Aiming for a wide-field adaptive optics correction, ARGOS will equip both sides of the LBT with a multi-laser-beacon system and corresponding wavefront sensors, driving LBT's adaptive secondary mirrors. Utilizing high-power pulsed green lasers, the artificial beacons are generated via Rayleigh scattering in Earth's atmosphere. ARGOS will project a set of three guide stars above each of LBT's mirrors in a wide constellation. The returning scattered light, particularly sensitive to the turbulence close to the ground, is detected in a gated wavefront sensor system. Measuring and correcting the ground layers of the optical distortions enables ARGOS to achieve a correction over a very wide field of view. Taking advantage of this wide-field correction, the science that can be done with the multi-object spectrographs LUCIFER will be boosted by higher spatial resolution and strongly enhanced flux for spectroscopy. Apart from the wide-field correction ARGOS delivers in its ground-layer mode, we foresee a diffraction-limited operation with a hybrid sodium-laser/Rayleigh-beacon combination.
WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Z; Swanson, T; O’Connor, M
2015-06-15
Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 µCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who had just undergone whole-body PET/CT exams were imaged prone with the breast pendulant at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0%, respectively. System sensitivity was 2.3% on axis, and 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm on axis and 2.90 mm at 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder-to-background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter), each with an activity ratio of 4:1, were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with the 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics.
Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.
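The recovery coefficients and cylinder-to-background ratios quoted above follow from simple ratios; a minimal sketch, assuming the recovery coefficient is defined as measured over true activity concentration (the example values are illustrative):

```python
# Sketch of the recovery-coefficient and contrast-ratio calculations
# behind the phantom numbers above, assuming RC = measured / true
# activity concentration. Example values are illustrative.

def recovery_coefficient(measured_conc, true_conc):
    """Fraction of the true activity concentration recovered in the image."""
    return measured_conc / true_conc

def contrast_ratio(mean_hot, mean_background):
    """Measured cylinder-to-background activity ratio."""
    return mean_hot / mean_background

# e.g. a 4:1 cylinder that reads only 2.6:1 against background
print(recovery_coefficient(2.6, 4.0))  # 0.65
```

Partial-volume effects drive the recovery coefficient below 1 as the cylinder diameter shrinks, which is the trend reported for the 14 mm to 2 mm cylinders.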
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within ±1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within ±1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV.
The dose through a static leaf tip is also predicted generally within ±1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
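The separable-geometry idea above, accumulating attenuating path lengths region by region and attenuating once over the total, can be sketched as follows; the region thicknesses and attenuation coefficient are illustrative assumptions, and the first-Compton-scatter sampling is omitted:

```python
import math

# Sketch of the separable-geometry transport idea: total photon
# attenuation is exp(-mu * total path length), with the path length
# accumulated region by region through the MLC. Thicknesses and the
# attenuation coefficient are illustrative; Compton-scatter sampling
# based on the total thickness is omitted here.

def mlc_transmission(thicknesses_cm, mu_per_cm):
    """Primary transmission after summing path lengths over simple regions."""
    total_path = sum(thicknesses_cm)
    return math.exp(-mu_per_cm * total_path)

# three simplified regions contributing 1.0, 2.5 and 0.5 cm of leaf material
t = mlc_transmission([1.0, 2.5, 0.5], mu_per_cm=0.5)
print(round(t, 4))  # exp(-2)
```

The point of the decomposition is that each simple region yields its thickness cheaply, so the expensive full-geometry ray trace is avoided.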
Theory of bright-field scanning transmission electron microscopy for tomography
NASA Astrophysics Data System (ADS)
Levine, Zachary H.
2005-02-01
Radiation transport theory is applied to electron microscopy of samples composed of one or more materials. The theory, originally due to Goudsmit and Saunderson, assumes only elastic scattering and an amorphous medium dominated by atomic interactions. For samples composed of a single material, the theory yields reasonable parameter-free agreement with experimental data taken from the literature for the multiple scattering of 300-keV electrons through aluminum foils up to 25 μm thick. For thin films, the theory gives a validity condition for Beer's law. For thick films, a variant of Molière's theory [V. G. Molière, Z. Naturforschg. 3a, 78 (1948)] of multiple scattering leads to a form for the bright-field signal for foils in the multiple-scattering regime. The signal varies as [t ln(e^(1−2γ) t/τ)]^(−1), where t is the path length of the beam, τ is the mean free path for elastic scattering, and γ is Euler's constant. The Goudsmit-Saunderson solution interpolates numerically between these two limits. For samples with multiple materials, elemental sensitivity is developed through the angular dependence of the scattering. From the elastic scattering cross sections of the first 92 elements, a singular-value decomposition of a vector space spanned by the elastic scattering cross sections minus a delta function shows that there is a dominant common mode, with composition-dependent corrections of about 2%. A mathematically correct reconstruction procedure beyond 2% accuracy requires the acquisition of the bright-field signal as a function of the scattering angle. Tomographic reconstructions are carried out for three singular vectors of a sample problem with four elements: Cr, Cu, Zr, and Te. The three reconstructions are presented jointly as a color image; all four elements are clearly identifiable throughout the image.
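The thick-film signal formula above can be evaluated directly; a minimal numerical sketch in arbitrary units, with illustrative values for t and τ:

```python
import math

# Numerical sketch of the thick-film bright-field signal quoted above:
# S(t) proportional to [t * ln(e^(1-2*gamma) * t/tau)]^(-1), where gamma
# is Euler's constant. Units and the overall prefactor are arbitrary.

EULER_GAMMA = 0.5772156649015329

def bright_field_signal(t, tau):
    """Thick-film bright-field signal, valid for t well beyond tau."""
    arg = math.exp(1.0 - 2.0 * EULER_GAMMA) * t / tau
    return 1.0 / (t * math.log(arg))

# signal falls off faster than 1/t as the foil thickens
print(round(bright_field_signal(10.0, 1.0), 5))
```

Note the expression only makes sense where the logarithm's argument exceeds 1, i.e. in the multiple-scattering regime the abstract describes.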
Optical property retrievals of subvisual cirrus clouds from OSIRIS limb-scatter measurements
NASA Astrophysics Data System (ADS)
Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.
2012-08-01
We present a technique for retrieving the optical properties of subvisual cirrus clouds detected by OSIRIS, a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Optical properties from an in situ database are used to simulate scattering by cloud particles. With appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is shown, and it is demonstrated that the retrieved extinction profile accurately models the measured in-cloud radiances from OSIRIS. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, the work described in this manuscript provides a very useful method for establishing a long-term global record of the properties of these clouds.
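The multiplicative algebraic reconstruction technique (MART) named above can be sketched for a generic non-negative linear system; the toy matrix, relaxation scheme and iteration count below are assumptions for illustration, not the authors' retrieval setup:

```python
import numpy as np

# Minimal MART (multiplicative algebraic reconstruction technique) sketch
# for a small linear system A x = b with non-negative entries. The toy
# matrix, relaxation scheme and iteration count are illustrative.

def mart(A, b, n_iter=200, relax=1.0):
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred > 0.0:
                # multiplicative update: scale each component by the data
                # ratio, weighted by the row's normalized coefficients
                x *= (b[i] / pred) ** (relax * A[i] / A[i].max())
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
print(np.round(mart(A, b), 3))  # converges toward [2. 3.]
```

Because every update is multiplicative, MART preserves the positivity of the unknowns, which suits physical quantities such as an extinction profile.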
Quantum hydrodynamics: capturing a reactive scattering resonance.
Derrickson, Sean W; Bittner, Eric R; Kendrick, Brian K
2005-08-01
The hydrodynamic equations of motion associated with the de Broglie-Bohm formulation of quantum mechanics are solved using a meshless method based upon a moving least-squares approach. An arbitrary Lagrangian-Eulerian frame of reference and a regridding algorithm which adds and deletes computational points are used to maintain a uniform and nearly constant interparticle spacing. The methodology also uses averaged fields to maintain unitary time evolution. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. A new and more robust artificial viscosity algorithm is presented which gives accurate scattering results and is capable of capturing quantum resonances. The methodology is applied to a one-dimensional model chemical reaction that is known to exhibit a quantum resonance. The correlation function approach is used to compute the reactive scattering matrix, reaction probability, and time delay as a function of energy. Excellent agreement is obtained between the scattering results based upon the quantum hydrodynamic approach and those based upon standard quantum mechanics. This is the first clear demonstration of the ability of moving grid approaches to accurately and robustly reproduce resonance structures in a scattering system.
Optical artefact characterization and correction in volumetric scintillation dosimetry
Robertson, Daniel; Hui, Cheukkai; Archambault, Louis; Mohan, Radhe; Beddar, Sam
2014-01-01
The goals of this study were (1) to characterize the optical artefacts affecting measurement accuracy in a volumetric liquid scintillation detector, and (2) to develop methods to correct for these artefacts. The optical artefacts addressed were photon scattering, refraction, camera perspective, vignetting, lens distortion, the lens point spread function, stray radiation, and noise in the camera. These artefacts were evaluated by theoretical and experimental means, and specific correction strategies were developed for each artefact. The effectiveness of the correction methods was evaluated by comparing raw and corrected images of the scintillation light from proton pencil beams against validated Monte Carlo calculations. Blurring due to the lens and refraction at the scintillator tank-air interface were found to have the largest effect on the measured light distribution, and lens aberrations and vignetting were important primarily at the image edges. Photon scatter in the scintillator was not found to be a significant source of artefacts. The correction methods effectively mitigated the artefacts, increasing the average gamma analysis pass rate from 66% to 98% for gamma criteria of 2% dose difference and 2 mm distance to agreement. We conclude that optical artefacts cause clinically meaningful errors in the measured light distribution, and we have demonstrated effective strategies for correcting these optical artefacts. PMID:24321820
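The 2%/2 mm gamma pass rate used above as the evaluation metric can be sketched in one dimension; the synthetic profiles and the global-normalization choice below are illustrative assumptions:

```python
import numpy as np

# Minimal 1-D gamma-index sketch with 2% (global) dose difference and
# 2 mm distance-to-agreement criteria, the pass-rate metric quoted above.
# Profiles are synthetic Gaussians, not measured light distributions.

def gamma_pass_rate(x_mm, ref, meas, dose_tol=0.02, dist_mm=2.0):
    """Fraction of reference points with gamma index <= 1."""
    passes = 0
    for xi, ri in zip(x_mm, ref):
        # squared gamma: min over measured points of normalized
        # dose-difference and distance terms
        g2 = np.min(((meas - ri) / (dose_tol * ref.max())) ** 2
                    + ((x_mm - xi) / dist_mm) ** 2)
        passes += g2 <= 1.0
    return passes / len(ref)

x = np.linspace(0.0, 50.0, 101)            # 0.5 mm sampling
ref = np.exp(-((x - 25.0) / 10.0) ** 2)    # reference profile
meas = np.exp(-((x - 25.4) / 10.0) ** 2)   # measurement shifted by 0.4 mm
print(gamma_pass_rate(x, ref, meas))       # small shift passes everywhere
```

A sub-millimetre misalignment passes easily, while a shift comparable to the distance criterion drives the pass rate down, which is the behaviour the 66% to 98% improvement above reflects.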
Norton, G V; Novarini, J C
2007-06-01
Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extremely simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal, a convolution operator that takes into account the dispersive characteristics of the medium is introduced into the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, in this work the two-dimensional scattered field from an infinite cylinder with physical properties associated with biological tissue is calculated numerically. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases the use of the convolutional propagation operator leads to the correct solution for the scattered field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergstrom, P
Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
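The conversion from collected charge to air kerma via multiplicative correction factors, as described above, can be sketched as follows; the charge, air mass and factor values are placeholders, and a real standard includes further terms (air density, humidity, radiative-loss fraction) omitted here:

```python
# Sketch of converting collected charge to air kerma with the
# chamber-specific multiplicative corrections named above. All numbers
# are placeholders; real free-air-chamber standards include additional
# terms (air density, humidity, radiative losses) omitted here.

W_OVER_E = 33.97  # mean energy expended in dry air per ion pair, J/C

def air_kerma(charge_C, air_mass_kg, corrections):
    """Air kerma estimate: (Q/m) * (W/e) * product of correction factors."""
    k_total = 1.0
    for factor in corrections.values():
        k_total *= factor
    return (charge_C / air_mass_kg) * W_OVER_E * k_total

K = air_kerma(
    charge_C=2.0e-9,
    air_mass_kg=1.2e-4,
    corrections={
        "diaphragm": 0.9995,
        "scatter": 0.9972,
        "electron_loss": 1.0018,
        "fluorescence": 1.0005,
        "bremsstrahlung": 1.0001,
    },
)
print(f"air kerma ~ {K:.3e} Gy")
```

Each dictionary entry corresponds to one of the effects enumerated in the abstract; their product is typically within a few tenths of a percent of unity.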
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3 He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3 He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
Light scattering by tenuous particles - A generalization of the Rayleigh-Gans-Rocard approach
NASA Technical Reports Server (NTRS)
Acquista, C.
1976-01-01
We consider scattering by arbitrarily shaped particles that satisfy two conditions: (1) that the polarizability of the particle relative to the ambient medium be small compared to 1 and (2) that the phase shift introduced by the particle be less than 2. We solve the integro-differential equation proposed by Shifrin by using the method of successive iterations and then applying a Fourier transform. For the second iteration, results are presented that accurately describe scattering by a broad class of particles. The phase function and other elements of the scattering matrix are shown to be in excellent agreement with Mie theory for spherical scatterers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and the quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone and variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in coronal view and from 10.14%±4.1% to 3.02%±1.26% in sagittal view. The average contrast-to-noise ratio (CNR) improved by a factor of 1.49±0.40 in coronal view and by 2.12±1.54 in sagittal view.
Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity of implementation make this approach attractive for clinical use. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
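The SNU and CNR metrics quoted above can be sketched as follows; SNU conventions vary, so the (max − min)/mean form over ROI means is one plausible choice, not necessarily the authors' exact definition:

```python
import numpy as np

# Sketch of the two image-quality metrics used above. SNU conventions
# vary; the (max - min)/mean form over ROI means is one plausible
# choice, not necessarily the authors'. Inputs are illustrative.

def spatial_non_uniformity(roi_means):
    """Spread of local ROI means relative to their global mean."""
    m = np.asarray(roi_means, dtype=float)
    return (m.max() - m.min()) / m.mean()

def contrast_to_noise_ratio(mean_lesion, mean_bg, std_bg):
    """Lesion-background contrast normalized by background noise."""
    return abs(mean_lesion - mean_bg) / std_bg

print(round(spatial_non_uniformity([0.98, 1.0, 1.02, 1.0]), 3))  # 0.04
print(round(contrast_to_noise_ratio(1.4, 1.0, 0.1), 2))          # 4.0
```

A scatter correction that removes cupping flattens the ROI means (lower SNU) and restores contrast (higher CNR), which is the direction of both improvements reported above.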
NASA Technical Reports Server (NTRS)
Luchini, Chris B.
1997-01-01
Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS x-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid-line artifacts with high-resolution x-ray imaging detectors.
This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
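The iterative residual-scatter search described above (subtract a trial scatter constant, divide by the flat-field-with-grid image, and minimize the standard deviation in a uniform region) can be sketched on a synthetic 1-D profile; all values below are illustrative:

```python
import numpy as np

# Sketch of the iterative residual-scatter search: subtract a trial
# constant scatter value, divide by the flat-field-with-grid image, and
# pick the trial value that minimizes the standard deviation in a
# uniform region. The 1-D "images" and grid modulation are synthetic.

grid = 1.0 + 0.2 * (np.arange(200) % 4 == 0)   # grid-line modulation
true_scatter = 50.0
flat = 1000.0 * grid                           # flat field acquired with grid
phantom = 400.0 * grid + true_scatter          # primary * grid + scatter

def residual_std(scatter_value):
    """Std. dev. of the flat-field-corrected image for a trial scatter value."""
    corrected = (phantom - scatter_value) / flat
    return corrected.std()

candidates = np.arange(0.0, 101.0, 1.0)
best = candidates[np.argmin([residual_std(s) for s in candidates])]
print(best)  # recovers the true scatter value, 50.0
```

Only when the subtracted value equals the true scatter does the grid modulation cancel exactly in the division, so the residual standard deviation has its minimum there, mirroring the behaviour reported in the Results.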
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follett, R. K., E-mail: rfollett@lle.rochester.edu; Delettrez, J. A.; Edgell, D. H.
2016-11-15
Collective Thomson scattering is a technique for measuring the plasma conditions in laser-plasma experiments. Simultaneous measurements of ion-acoustic and electron plasma-wave spectra were obtained using a 263.25-nm Thomson-scattering probe beam. A fully reflective collection system was used to record light scattered from electron plasma waves at electron densities greater than 10^21 cm^−3, which produced scattering peaks near 200 nm. An accurate analysis of the experimental Thomson-scattering spectra required accounting for plasma gradients, instrument sensitivity, optical effects, and background radiation. Practical techniques for including these effects when fitting Thomson-scattering spectra are presented and applied to the measured spectra to show the improvements in plasma characterization.
NASA Astrophysics Data System (ADS)
Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias
2017-07-01
In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, suspensions of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 were used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct for the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiple scattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.
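A ratio-style scatter correction of fluorescence, in the spirit of the backscatter-based algorithm described above, can be sketched as follows; the simple F_corr = F_meas · (R_ref / R) model and all numbers are illustrative assumptions, not the authors' exact algorithm:

```python
# Ratio-style scatter-correction sketch: scale the measured fluorescence
# by a backscatter-derived factor so that samples with different
# scattering levels fall on one calibration line. The model
# F_corr = F_meas * (R_ref / R) and all numbers are illustrative.

def corrected_fluorescence(f_measured, r_backscatter, r_reference):
    """Fluorescence corrected for sample-dependent scattering losses."""
    return f_measured * (r_reference / r_backscatter)

# two samples with equal NADH but different Calcilit (scattering) levels
print(round(corrected_fluorescence(80.0, 0.8, 1.0), 6))   # 100.0
print(round(corrected_fluorescence(120.0, 1.2, 1.0), 6))  # 100.0
```

After correction, both samples map to the same value, so a single calibration curve relates corrected fluorescence to NADH concentration regardless of the (unknown) scatterer load.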
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mille, M; Bergstrom, P
2015-06-15
Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber's response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum.
Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose is accurately and consistently delivered to patients.
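The final integration step described in the Results can be sketched as follows. The fluence weighting and names are assumptions for illustration; an air-kerma standard would weight by the kerma spectrum rather than photon fluence alone.

```python
import numpy as np

def spectrum_weighted_correction(energies_keV, fluence, table_E_keV, table_k):
    """Fold tabulated, energy-dependent correction factors k(E) over a measured
    x-ray spectrum: interpolate the 2-60 keV table at the spectrum's energy
    bins and take the fluence-weighted mean. Weighting by photon fluence is a
    simplifying assumption made here."""
    k = np.interp(np.asarray(energies_keV, float), table_E_keV, table_k)
    w = np.asarray(fluence, float)
    return float(np.sum(w * k) / np.sum(w))
```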
Measurement of event shape variables in deep inelastic e p scattering
NASA Astrophysics Data System (ADS)
Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Beck, H. P.; Beck, M.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Botterweck, F.; Boudry, V.; Bourov, S.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Dollfus, C.; Donovan, K. T.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. 
L.; Hütte, M.; Ibbotson, M.; İşsever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Rick, H.; Reiss, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. 
C.; Schacht, P.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stößlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vandenplas, D.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wollatz, H.; Wünsch, E.; ŽáČek, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.
1997-02-01
Deep inelastic e p scattering data, taken with the H1 detector at HERA, are used to study the event shape variables thrust, jet broadening and jet mass in the current hemisphere of the Breit frame over a large range of momentum transfers Q between 7 GeV and 100 GeV. The data are compared with results from e+e- experiments. Using second order QCD calculations and an approach relating hadronisation effects to power corrections, an analysis of the Q dependence of the means of the event shape parameters is presented, from which both the power corrections and the strong coupling constant are determined without any assumption about fragmentation models. The power corrections of all event shape variables investigated follow a 1/Q behaviour and can be described by a common parameter α0.
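The Q-dependence analysis described above can be illustrated with a toy least-squares fit of the power-correction ansatz ⟨F⟩(Q) = a + λ/Q; this stands in for, and greatly simplifies, the full second-order QCD plus power-correction fit of the paper.

```python
import numpy as np

def fit_power_correction(Q, mean_F):
    """Least-squares fit of the power-correction ansatz <F>(Q) = a + lam/Q
    to mean event shape values; a stands for the perturbative part and lam
    for the 1/Q power correction. Purely illustrative."""
    Q = np.asarray(Q, float)
    A = np.column_stack([np.ones_like(Q), 1.0 / Q])
    (a, lam), *_ = np.linalg.lstsq(A, np.asarray(mean_F, float), rcond=None)
    return float(a), float(lam)
```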
Role of oceanic air bubbles in atmospheric correction of ocean color imagery.
Yan, Banghua; Chen, Bingquan; Stamnes, Knut
2002-04-20
Ocean color is the radiance that emanates from the ocean because of scattering by chlorophyll pigments and particles of organic and inorganic origin. Air bubbles in the ocean also scatter light and thus contribute to the water-leaving radiance. This additional water-leaving radiance that is due to oceanic air bubbles could violate the black pixel assumption at near-infrared wavelengths and be attributed to chlorophyll in the visible. Hence, the accuracy of the atmospheric correction required for the retrieval of ocean color from satellite measurements is impaired. A comprehensive radiative transfer code for the coupled atmosphere-ocean system is employed to assess the effect of oceanic air bubbles on atmospheric correction of ocean color imagery. This effect is found to depend on the wavelength-dependent optical properties of oceanic air bubbles as well as atmospheric aerosols.
Spaceborne lidar for cloud monitoring
NASA Astrophysics Data System (ADS)
Werner, Christian; Krichbaumer, W.; Matvienko, Gennadii G.
1994-12-01
Results of laser cloud top measurements taken from space in 1982 (called PANTHER) are presented. Three sequences of land, water, and cloud data are selected. A comparison with airborne lidar data shows similarities. When the single-scattering lidar equation is applied to these spaceborne lidar measurements, the data can be misinterpreted if no correction is made for multiple scattering.
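For reference, a minimal sketch of the single-scattering lidar equation mentioned above, P(R) = C·β(R)·exp(-2∫α dr)/R², on a discrete range grid; multiple scattering, whose neglect the authors warn about, is deliberately not modeled.

```python
import numpy as np

def single_scatter_return(R, beta, alpha, C=1.0):
    """Single-scattering lidar equation P(R) = C * beta(R) * T(R)**2 / R**2
    with two-way transmission T**2 = exp(-2 * integral of alpha), evaluated
    by trapezoidal integration on the range grid R. Multiple scattering is
    deliberately not modeled."""
    R = np.asarray(R, float)
    beta = np.asarray(beta, float)
    alpha = np.asarray(alpha, float)
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(R))))
    return C * beta * np.exp(-2.0 * tau) / R ** 2
```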
Subleading Regge limit from a soft anomalous dimension
NASA Astrophysics Data System (ADS)
Brüser, Robin; Caron-Huot, Simon; Henn, Johannes M.
2018-04-01
Wilson lines capture important features of scattering amplitudes, for example soft effects relevant for infrared divergences, and the Regge limit. Beyond the leading power approximation, corrections to the eikonal picture have to be taken into account. In this paper, we study such corrections in a model of massive scattering amplitudes in N=4 super Yang-Mills, in the planar limit, where the mass is generated through a Higgs mechanism. Using known three-loop analytic expressions for the scattering amplitude, we find that the first power suppressed term has a very simple form, equal to a single power law. We propose that its exponent is governed by the anomalous dimension of a Wilson loop with a scalar inserted at the cusp, and we provide perturbative evidence for this proposal. We also analyze other limits of the amplitude and conjecture an exact formula for a total cross-section at high energies.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods for incorporating the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions, which required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
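The standard Pauli-blocking rejection step that such algorithms build on can be sketched as below; the paper's improved e-e scheme goes beyond this, so treat it as background rather than the proposed algorithm.

```python
import random

def accept_scattering(f_final, rng=random.random):
    """Pauli-blocking rejection step: a proposed scattering event into a final
    state with occupation f_final (between 0 and 1) is accepted with
    probability 1 - f_final. This is the standard rejection idea such MC
    algorithms build on; the paper's improved e-e scheme adds more."""
    return rng() < (1.0 - f_final)
```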
Dynamic coherent backscattering mirror
NASA Astrophysics Data System (ADS)
Zeylikovich, I.; Xu, M.
2016-02-01
The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.
Computer image processing: Geologic applications
NASA Technical Reports Server (NTRS)
Abrams, M. J.
1978-01-01
Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
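The two atmospheric-correction techniques and the band-ratio enhancement can be sketched in a few lines. This is illustrative only; real dark-object subtraction chooses the dark object more carefully than taking the scene minimum.

```python
import numpy as np

def dark_object_subtraction(band):
    """Remove additive atmospheric path radiance by subtracting the scene
    minimum (the 'dark object'). Real implementations pick the dark object
    more carefully than a bare minimum."""
    band = np.asarray(band, float)
    return band - band.min()

def band_ratio(band_a, band_b, eps=1e-12):
    """Band-ratio enhancement used to highlight mineralogical differences;
    eps guards against division by zero."""
    return np.asarray(band_a, float) / (np.asarray(band_b, float) + eps)
```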
Using seismic coda waves to resolve intrinsic and scattering attenuation
NASA Astrophysics Data System (ADS)
Wang, W.; Shearer, P. M.
2016-12-01
Seismic attenuation is caused by two factors, scattering and intrinsic absorption. Characterizing scattering and absorbing properties and the power spectrum of crustal heterogeneity is a fundamental problem for informing strong ground motion estimates at high frequencies, where scattering and attenuation effects are critical. Determining the relative amount of attenuation caused by scattering and intrinsic absorption has been a long-standing problem in seismology. The wavetrain following the direct body wave phases is called the coda, which is caused by scattered energy. Many studies have analyzed the coda of local events to constrain crustal and upper-mantle scattering strength and intrinsic attenuation. Here we examine two popular attenuation inversion methods, the Multiple Lapse Time Window Method (MLTWM) and the Coda Qc Method. First, based on our previous work on California attenuation structure, we apply an efficient and accurate method, the Monte Carlo Approach, to synthesize seismic envelope functions. We use this code to generate a series of synthetic data based on several complex and realistic forward models. Although the MLTWM assumes a uniform whole space, we use the MLTWM to invert for both scattering and intrinsic attenuation from the synthetic data to test how accurately it can recover the attenuation models. Results for the coda Qc method depend on choices for the length and starting time of the coda-wave time window. Here we explore the relation between the inversion results for Qc, the windowing parameters, and the intrinsic and scattering Q structure of our synthetic model. These results should help assess the practicality and accuracy of the Multiple Lapse Time Window Method and Coda Qc Method when applied to realistic crustal velocity and attenuation models.
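The coda Qc method mentioned above is commonly implemented as a linear regression on the log coda envelope under the single-scattering model A(t) ∝ t⁻¹ exp(-πft/Qc); a minimal sketch (not the authors' code):

```python
import numpy as np

def coda_qc(t, amplitude, freq):
    """Estimate coda Q_c at a given frequency from the coda envelope A(t)
    using the single-scattering model A(t) ~ t**-1 * exp(-pi*f*t/Qc):
    regress ln(A*t) against lapse time t; the slope is -pi*f/Qc."""
    t = np.asarray(t, float)
    y = np.log(np.asarray(amplitude, float) * t)
    slope, _ = np.polyfit(t, y, 1)
    return float(-np.pi * freq / slope)
```

As the abstract notes, results of this estimator depend on the chosen length and start time of the coda window, which is exactly the sensitivity the authors explore.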
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
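A first-order version of the area correction described above can be sketched as follows, keeping only a sin(θ_local)/sin(θ_ref) area term and leaving out the antenna-pattern correction; the formula choice is an illustrative simplification, not the paper's full method.

```python
import numpy as np

def incidence_area_correction(sigma0_db, theta_local_deg, theta_ref_deg):
    """First-order radiometric terrain correction of sigma^0 (in dB): rescale
    the linear backscatter by sin(theta_local)/sin(theta_ref) to account for
    the true scattering area given by the DEM-derived local incidence angle.
    Antenna-pattern effects are not included here."""
    lin = 10.0 ** (np.asarray(sigma0_db, float) / 10.0)
    lin = lin * np.sin(np.radians(theta_local_deg)) / np.sin(np.radians(theta_ref_deg))
    return 10.0 * np.log10(lin)
```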
NASA Astrophysics Data System (ADS)
Xue, Q.; Horsewill, A. J.; Johnson, M. R.; Trommsdorff, H. P.
2004-06-01
The isotope effects associated with double proton transfer in the hydrogen bonds of benzoic acid (BA) dimers have been measured using field-cycling 1H NMR relaxometry and quasielastic neutron scattering. By studying mixed isotope (hydrogen and deuterium) samples, the dynamics of three isotopologues, BA-HH, BA-HD, and BA-DD, have been investigated. Low temperature measurements provide accurate measurements of the incoherent tunneling rate, k0. This parameter scales accurately with the mass number, m, according to the formula k0 = (E/m)e^(-F√m), providing conclusive evidence that the proton transfer process is a strongly correlated motion of two hydrons. Furthermore, we conclude that the tunneling pathway is the same for the three isotopologue species. Measurements at higher temperatures illuminate the through-barrier processes that are mediated via intermediate or excited vibrational states. In parallel with the investigation of proton transfer dynamics, the theoretical and experimental aspects of studying spin-lattice relaxation in single crystals of mixed isotope samples are investigated in depth. Heteronuclear dipolar interactions between 1H and 2H isotopes contribute significantly to the overall proton spin-lattice relaxation and it is shown that these must be modeled correctly to obtain accurate values for the proton transfer rates. Since the sample used in the NMR measurements was a single crystal, full account of the orientation dependence of the spin-lattice relaxation with respect to the applied B field was incorporated into the data analysis.
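The mass-scaling law can be evaluated directly; assigning m = 2, 3, 4 to BA-HH, BA-HD and BA-DD (the combined mass number of the two transferring hydrons) is an assumption made here for illustration, and E and F are unspecified empirical constants.

```python
import math

def k0(m, E, F):
    """Incoherent tunneling rate k0 = (E/m) * exp(-F*sqrt(m)) from the
    abstract, with E and F empirical constants and m the mass number of the
    transferring particles."""
    return (E / m) * math.exp(-F * math.sqrt(m))

# Isotope ratio implied by the scaling law (F = 0 isolates the 1/m prefactor):
ratio_DD_to_HH = k0(4, 1.0, 0.0) / k0(2, 1.0, 0.0)
```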
Extracting the σ-term from low-energy pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Ruiz de Elvira, Jacobo; Hoferichter, Martin; Kubis, Bastian; Meißner, Ulf-G.
2018-02-01
We present an extraction of the pion-nucleon (πN) scattering lengths from low-energy πN scattering, by fitting a representation based on Roy-Steiner equations to the low-energy data base. We show that the resulting values confirm the scattering-length determination from pionic atoms, and discuss in detail the stability of the fit results with respect to electromagnetic corrections and experimental normalization uncertainties. Our results provide further evidence for a large πN σ-term, σ_πN = 58(5) MeV, in agreement with, albeit less precise than, the determination from pionic atoms.
The impact of vibrational Raman scattering of air on DOAS measurements of atmospheric trace gases
NASA Astrophysics Data System (ADS)
Lampel, J.; Frieß, U.; Platt, U.
2015-09-01
In remote sensing applications, such as differential optical absorption spectroscopy (DOAS), atmospheric scattering processes need to be considered. After inelastic scattering on N2 and O2 molecules, the scattered photons occur as additional intensity at a different wavelength, effectively leading to "filling-in" of both solar Fraunhofer lines and absorptions of atmospheric constituents, if the inelastic scattering happens after the absorption. Measured spectra in passive DOAS applications are typically corrected for rotational Raman scattering (RRS), also called the Ring effect, which represents the main contribution to inelastic scattering. Inelastic scattering can also occur in liquid water, and its influence on DOAS measurements has been observed over clear ocean water. In contrast, vibrational Raman scattering (VRS) of N2 and O2 has often been thought to be negligible, but it also contributes. Consequences of VRS are red-shifted Fraunhofer structures in scattered light spectra and filling-in of Fraunhofer lines, in addition to RRS. At 393 nm, the spectral shift is 25 and 40 nm for VRS of O2 and N2, respectively. We describe how to calculate VRS correction spectra analogously to the Ring spectrum. We use the VRS correction spectra in the spectral range of 420-440 nm to determine the relative magnitude of the cross-sections of VRS of O2 and N2 and RRS of air. The effect of VRS is shown for the first time in spectral evaluations of Multi-Axis DOAS data from the SOPRAN M91 campaign and the MAD-CAT MAX-DOAS intercomparison campaign. In agreement with calculated scattering cross-sections, the measurements show that the observed VRS(N2) cross-section at 393 nm amounts to 2.3 ± 0.4% of the RRS cross-section at 433 nm under tropospheric conditions. The contribution of VRS(O2) is also found to be in agreement with calculated scattering cross-sections.
It is concluded that this phenomenon has to be included in the spectral evaluation of weak absorbers, as it can cause apparent differential optical depths of up to 3 × 10^-4; accounting for it reduces the measurement error significantly. Its influence on the spectral retrieval of IO, glyoxal, water vapour and NO2 in the blue wavelength range is evaluated for M91. For measurements with a large Ring signal, a significant and systematic bias of NO2 dSCDs (differential slant column densities) of up to (-3.8 ± 0.4) × 10^14 molec cm^-2 is observed if this effect is not considered. The effect is typically negligible for DOAS fits with an RMS (root mean square) larger than 4 × 10^-4.
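The quoted 25 nm and 40 nm red shifts at 393 nm follow from standard Stokes-shift arithmetic with the well-known vibrational Raman shifts of O2 (~1556 cm⁻¹) and N2 (~2331 cm⁻¹):

```python
def vrs_shifted_wavelength(lambda_nm, raman_shift_cm1):
    """Wavelength of the Stokes-shifted light after vibrational Raman
    scattering: convert to wavenumber (cm^-1), subtract the vibrational
    shift, convert back to nm."""
    nu = 1.0e7 / lambda_nm
    return 1.0e7 / (nu - raman_shift_cm1)

shift_O2 = vrs_shifted_wavelength(393.0, 1556.0) - 393.0  # about 25.6 nm
shift_N2 = vrs_shifted_wavelength(393.0, 2331.0) - 393.0  # about 39.6 nm
```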
Stress Wave Scattering: Friend or Enemy of Non Destructive Testing of Concrete?
NASA Astrophysics Data System (ADS)
Aggelis, Dimitrios G.; Shiotani, Tomoki; Philippidis, Theodore P.; Polyzos, Demosthenes
Cementitious materials are by definition inhomogeneous, containing cement paste, sand, aggregates as well as air voids. Wave propagation in such a material is characterized by scattering phenomena. Damage in the form of micro- or macro-cracks certainly enhances the influence of scattering. Its most obvious manifestation is the variation of velocity with frequency and excessive attenuation. The influence becomes stronger with increased mismatch of the elastic properties of the constituent materials and higher crack content. Therefore, in many cases of large concrete structures, field application of stress waves is hindered since attenuation makes the acquisition of reliable signals troublesome. However, measured wave parameters, combined with investigation through scattering theory, can reveal much about the internal condition and supply information that cannot be obtained in any other way. The size and properties of the scatterers leave their signature on the dispersion and attenuation curves, making the characterization more accurate in cases of damage assessment, repair evaluation and composition inspection. In this paper, three indicative cases of scattering influence are presented: the interaction of the wave parameters with actual distributed damage and with repair material injected into an old concrete structure; the influence of light plastic inclusions in hardened mortar; and the influence of sand and water content in the examination of fresh concrete. In all these cases, scattering seems to complicate the propagation behavior but also offers a way toward a more accurate characterization of the quality of the material.
Tojo, H; Yamada, I; Yasuhara, R; Ejiri, A; Hiratsuka, J; Togashi, H; Yatsuka, E; Hatae, T; Funaba, H; Hayashi, H; Takase, Y; Itami, K
2016-09-01
This paper evaluates the accuracy of electron temperature measurements and relative transmissivities of double-pass Thomson scattering diagnostics. The electron temperature (Te) is obtained from the ratio of signals from a double-pass scattering system; relative transmissivities are then calculated from the measured Te and the intensity of the signals. The accuracy of these values depends on the electron temperature and scattering angle (θ), and was therefore evaluated experimentally using the Large Helical Device (LHD) and the Tokyo spherical tokamak-2 (TST-2). Analyzing the data from the TST-2 indicates that a high Te and a large scattering angle yield accurate values. Indeed, the errors for scattering angle θ = 135° are approximately half of those for θ = 115°. The method of determining Te in a wide range spanning over two orders of magnitude (0.01-1.5 keV) was validated using the experimental results of the LHD and TST-2. A simple method to provide relative transmissivities, which include inputs from collection optics, vacuum window, optical fibers, and polychromators, is also presented. The relative errors were less than approximately 10%. Numerical simulations also indicate that the Te measurements are valid under harsh radiation conditions. This method of obtaining Te can be considered for the design of Thomson scattering systems for high-performance plasmas that generate harsh radiation environments.
Ho, Derek; Kim, Sanghoon; Drake, Tyler K.; Eldridge, Will J.; Wax, Adam
2014-01-01
We present a fast approach for size determination of spherical scatterers using the continuous wavelet transform of the angular light scattering profile to address the computational limitations of previously developed sizing techniques. The potential accuracy, speed, and robustness of the algorithm were determined in simulated models of scattering by polystyrene beads and cells. The algorithm was tested experimentally on angular light scattering data from polystyrene bead phantoms and MCF-7 breast cancer cells using a 2D a/LCI system. Theoretical sizing of simulated profiles of beads and cells produced strong fits between calculated and actual size (r2 = 0.9969 and r2 = 0.9979 respectively), and experimental size determinations were accurate to within one micron. PMID:25360350
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, K; Li, X; Liu, B
2016-06-15
Purpose: To accurately measure the scatter radiation from a Hologic digital breast tomosynthesis (DBT) system and to provide an updated scatter distribution to guide radiation shielding calculation for DBT rooms. Methods: A high sensitivity GOS-based linear detector was used to measure the angular distribution of scatter radiation from a Hologic Selenia Dimensions DBT system. The linear detector was calibrated for its energy response to typical DBT spectra. Following the NCRP147 approach, the measured scatter intensity was normalized by the primary beam area and primary air kerma at 1 m from the scatter phantom center and presented as the scatter fraction. Direct comparison was made against Simpkin's initial measurement. Key parameters including the phantom size, primary beam area, and kV/anode/target combination were also studied. Results: The measured scatter-to-primary-ratio and scatter fraction data closely matched previous data from Simpkin. The measured data demonstrated the unique nonisotropic distribution of the scattered radiation around a Hologic DBT system, with two strong peaks around 25° and 160°. The majority of the scatter radiation (>70%) originated from the imaging detector assembly, rather than the phantom. With a workload from a previous local survey, the scatter air kerma at 1 m from the phantom center is 0.018 mGy/patient for wall/door, 0.164 mGy/patient for floor, and 0.037 mGy/patient for ceiling. Conclusion: Compared to Simpkin's previous data, the scatter air kerma from Hologic DBT is at least two times higher. The main reasons include the harder primary beam with higher workload, the added tomosynthesis acquisition, and strong small-angle forward scattering. Due to the highly conservative initial assumptions, the shielding recommendation from NCRP147 is still sufficient for the Hologic DBT system given the workload from a previous local survey.
With the data provided from this study, accurate shielding calculation can be performed for Hologic DBT systems with specific workload and barrier distance.
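The per-patient scatter air kerma values above feed a shielding estimate in the NCRP 147 style; a minimal sketch of the unshielded-kerma step (barrier transmission and occupancy factors, which a real calculation needs, are omitted):

```python
def barrier_scatter_kerma(kerma_1m_per_patient_mGy, patients_per_week, distance_m):
    """Unshielded weekly scatter air kerma at a barrier: scale the measured
    per-patient air kerma at 1 m (e.g. 0.018 mGy for wall/door in the
    abstract) by the weekly patient load and the inverse square of the
    barrier distance. Barrier transmission and occupancy are omitted."""
    return kerma_1m_per_patient_mGy * patients_per_week / distance_m ** 2
```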
Manchikanti, Laxmaiah; Cash, Kim A; Moss, Tammy L; Rivera, Jose; Pampati, Vidyasagar
2003-08-06
BACKGROUND: Fluoroscopic guidance is frequently utilized in interventional pain management. The major purpose of fluoroscopy is correct needle placement to ensure target specificity and accurate delivery of the injectate. Radiation exposure may be associated with risks to physician, patient and personnel. While there have been many studies evaluating the risk of radiation exposure and techniques to reduce this risk in the upper part of the body, the literature is scant in evaluating the risk of radiation exposure in the lower part of the body. METHODS: Radiation exposure risk to the physician was evaluated in 1156 patients undergoing interventional procedures under fluoroscopy by 3 physicians. Monitoring of scattered radiation exposure in the upper and lower body, inside and outside the lead apron was carried out. RESULTS: The average exposure per procedure was 12.0 PlusMinus; 9.8 seconds, 9.0 PlusMinus; 0.37 seconds, and 7.5 PlusMinus; 1.27 seconds in Groups I, II, and III respectively. Scatter radiation exposure ranged from a low of 3.7 PlusMinus; 0.29 seconds for caudal/interlaminar epidurals to 61.0 PlusMinus; 9.0 seconds for discography. Inside the apron, over the thyroid collar on the neck, the scatter radiation exposure was 68 mREM in Group I consisting of 201 patients who had a total of 330 procedures with an average of 0.2060 mREM per procedure and 25 mREM in Group II consisting of 446 patients who had a total of 662 procedures with average of 0.0378 mREM per procedure. The scatter radiation exposure was 0 mREM in Group III consisting of 509 patients who had a total 827 procedures. Increased levels of exposures were observed in Groups I and II compared to Group III, and Group I compared to Group II.Groin exposure showed 0 mREM exposure in Groups I and II and 15 mREM in Group III. Scatter radiation exposure for groin outside the apron in Group I was 1260 mREM and per procedure was 3.8182 mREM. 
In Group II, the scatter radiation exposure was 400 mREM, with 0.6042 mREM per procedure. In Group III, the scatter radiation exposure was 1152 mREM, with 1.3930 mREM per procedure. CONCLUSION: The results of this study showed that scatter radiation exposure to both the upper and lower parts of the physician's body is present. Traditional protective measures offered protection to the upper body only.
High-fidelity artifact correction for cone-beam CT imaging of the brain
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-02-01
CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and a reduced number of projection angles (sparse MC), augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long-range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain.
Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
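The veiling-glare step described above amounts to deconvolving the measured glare response out of each projection. A minimal sketch, assuming a simple regularized Fourier inversion (the function name, kernel, and regularization constant are illustrative, not the authors' implementation):

```python
import numpy as np

def deconvolve_glare(measured, glare_kernel, eps=1e-3):
    """Estimate the glare-free image by regularized Fourier deconvolution.

    Models the detector as measured = true (*) kernel (circular
    convolution); dividing by the kernel spectrum, damped by eps to
    avoid noise blow-up, approximately inverts the glare response.
    """
    H = np.fft.fft2(np.fft.ifftshift(glare_kernel))  # kernel spectrum
    M = np.fft.fft2(measured)
    estimate = np.fft.ifft2(M * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(estimate)

# Sanity check with an ideal (delta-function) glare kernel:
rng = np.random.default_rng(0)
kernel = np.zeros((9, 9))
kernel[4, 4] = 1.0          # centred delta: detector adds no glare
image = rng.random((9, 9))
restored = deconvolve_glare(image, kernel)
```

With a delta kernel the routine returns the input (up to the regularization term); with a long-tailed kernel it sharpens the tails the abstract attributes to veiling glare.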
Takeuchi, Wataru; Suzuki, Atsuro; Shiga, Tohru; Kubo, Naoki; Morimoto, Yuichi; Ueno, Yuichiro; Kobashi, Keiji; Umegaki, Kikuo; Tamaki, Nagara
2016-12-01
A brain single-photon emission computed tomography (SPECT) system using cadmium telluride (CdTe) solid-state detectors was previously developed. This CdTe-SPECT system is suitable for simultaneous dual-radionuclide imaging due to its fine energy resolution (6.6 %). However, the problems of down-scatter and the low-energy tail arising from the spectral characteristics of a pixelated solid-state detector must be addressed. The objective of this work was to develop a system for simultaneous Tc-99m and I-123 brain studies and evaluate its accuracy. A scatter correction method using five energy windows (FiveEWs) was developed. The windows are Tc-lower, Tc-main, a shared sub-window serving as both Tc-upper and I-lower, I-main, and I-upper. This FiveEW method uses pre-measured responses for primary gamma rays from each radionuclide to compensate for the overestimation of scatter by the conventional triple-energy window method. Two phantom experiments and a healthy volunteer experiment were conducted using the CdTe-SPECT system. A cylindrical phantom and a six-compartment phantom containing five different mixtures of Tc-99m and I-123 plus one cold compartment were scanned. The quantitative accuracy was evaluated using 18 regions of interest for each phantom. In the volunteer study, five healthy volunteers were injected with Tc-99m human serum albumin diethylene triamine pentaacetic acid (HSA-D) and scanned (single acquisition). They were then injected with I-123 N-isopropyl-4-iodoamphetamine hydrochloride (IMP) and scanned again (dual acquisition). The counts of the Tc-99m images for the single and dual acquisitions were compared. In the cylindrical phantom experiments, the percentage difference (PD) between the single and dual acquisitions was 5.7 ± 4.0 % (mean ± standard deviation). In the six-compartment phantom experiment, the PDs between measured and injected activity for Tc-99m and I-123 were 14.4 ± 11.0 and 2.3 ± 1.8 %, respectively.
In the volunteer study, the PD between the single and dual acquisitions was 4.5 ± 3.4 %. This CdTe-SPECT system using the FiveEW method can provide accurate simultaneous dual-radionuclide imaging. A solid-state detector SPECT system using the FiveEW method will permit quantitative simultaneous Tc-99m and I-123 study to become clinically applicable.
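The FiveEW method builds on the triple-energy-window (TEW) estimate whose scatter overestimation it compensates. A minimal sketch of plain TEW (window widths and counts are hypothetical; the FiveEW response-based compensation itself is not shown):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window (TEW) scatter estimate for a main window.

    Counts in the two narrow flanking windows are converted to count
    densities (counts per keV), averaged trapezoidally, and integrated
    over the width of the main window.
    """
    return 0.5 * (c_lower / w_lower + c_upper / w_upper) * w_main

# Hypothetical numbers: 2 keV sub-windows flanking a 20 keV main window
scatter = tew_scatter_estimate(150.0, 90.0, 2.0, 2.0, 20.0)
primary = 5000.0 - scatter   # scatter-corrected main-window counts
```

Because the flanking windows also contain primary photons (tail and down-scatter), plain TEW over-subtracts; FiveEW's pre-measured primary responses are intended to correct exactly that bias.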
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions
NASA Astrophysics Data System (ADS)
Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir
2013-08-01
We demonstrate the first accurate measurement of the complex refractive index in an intralipid emulsion and thereby extract the average scatterer particle size using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is that our measured particle size agrees well with the value obtained by dynamic light scattering.
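The reflectance-versus-angle curve being fitted is governed by the Fresnel reflectance of an interface with a complex-index medium. A minimal sketch under that assumption (the indices and the unpolarized average are illustrative; the authors' full divergent-beam model is not reproduced):

```python
import numpy as np

def fresnel_reflectance(theta_i, n1, n2):
    """Unpolarized Fresnel reflectance at a planar interface.

    theta_i : incidence angle in radians, in the medium with index n1
    n1      : real refractive index of the incidence medium
    n2      : complex refractive index n + i*k of the sample
    """
    cos_i = np.cos(theta_i)
    sin_i = np.sin(theta_i)
    # Snell's law with a complex transmitted angle
    cos_t = np.sqrt(1.0 - (n1 * sin_i / n2) ** 2 + 0j)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return 0.5 * (np.abs(r_s) ** 2 + np.abs(r_p) ** 2)

# Hypothetical values: glass prism (n1 = 1.5) against a turbid sample
R_normal = fresnel_reflectance(0.0, 1.5, 1.35 + 1e-3j)
R_grazing = fresnel_reflectance(1.5707, 1.5, 1.35 + 1e-3j)
```

Fitting a curve of such values over the full angular range to measured data, with only the real and imaginary parts of n2 free, mirrors the two-parameter fit the abstract describes.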
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different from that of the background material in which it was embedded.
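The final step above, interpolating scatter scored at fixed node points across the detector and subtracting it from each projection, can be sketched in one dimension as follows (node positions and counts are hypothetical; the actual CRFD implementation works on 2-D projections within an iterative loop):

```python
import numpy as np

def scatter_correct_projection(measured, node_x, node_scatter):
    """Subtract a scatter profile interpolated from sparse node points.

    measured     : 1-D projection (counts per detector pixel)
    node_x       : pixel positions where Monte Carlo scatter was scored
    node_scatter : scatter estimates at those node points
    """
    x = np.arange(measured.size)
    scatter = np.interp(x, node_x, node_scatter)    # linear interpolation
    return np.clip(measured - scatter, 0.0, None)   # keep counts non-negative

# Flat 11-pixel projection with scatter scored at three node points
corrected = scatter_correct_projection(np.full(11, 100.0),
                                       np.array([0, 5, 10]),
                                       np.array([10.0, 20.0, 10.0]))
```

Scoring scatter only at sparse nodes and interpolating exploits the smoothness of scatter distributions, which is what makes the reported 2 min runtime plausible compared with scoring every pixel.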
Brodin, Anders; Urhan, A Utku
2013-07-01
Laboratory studies of scatter-hoarding birds have become a model system for spatial memory research. Considering that such birds are known to have good spatial memory, recovery success in lab studies seems low: in parids (titmice and chickadees) it typically ranges between 25 and 60% when five seeds are cached in 50-128 available caching sites. Since these birds store many thousands of food items in nature in one autumn, one might expect that they should easily retrieve five seeds in a laboratory where they know the environment with its caching sites in detail. We designed a laboratory setup to be as similar as possible to those of previous studies and trained wild-caught marsh tits Poecile palustris to store and retrieve seeds in this setup. Our results agree closely with earlier studies: around 40% of the first ten looks were correct when the birds had stored five seeds in 100 available sites, both 5 and 24 h after storing. The cumulative success curve suggests high success during the first 15 looks, after which it declines. Humans performed much better: in the first five looks most subjects were 100% correct. We discuss possible reasons why the birds did not do better. Copyright © 2013 Elsevier B.V. All rights reserved.